923 results for Pre-tensioning Structural Design


Relevance:

30.00%

Publisher:

Abstract:

SFTI-1 is a small cyclic peptide from sunflower seeds that is one of the most potent trypsin inhibitors of any naturally occurring peptide and is related to the Bowman-Birk family of inhibitors (BBIs). BBIs are involved in the defense mechanisms of plants and also have potential as cancer chemopreventive agents. At only 14 amino acids in size, SFTI-1 is thought to be a highly optimized scaffold of the BBI active site region, and thus it is of interest to examine its important structural and functional features. In this study, a suite of 12 alanine mutants of SFTI-1 has been synthesized, and their structures and activities have been determined. SFTI-1 incorporates a binding loop that is clasped together with a disulfide bond and a secondary peptide loop making up the circular backbone. We show here that the secondary loop stabilizes the binding loop against the consequences of sequence variations. In particular, full-length BBIs have a conserved cis-proline that has been shown previously to be required for a well-defined structure and potent activity, but we show here that the SFTI-1 scaffold can accommodate mutation of this residue and still have a well-defined native-like conformation and nanomolar activity in inhibiting trypsin. Among the Ala mutants, the most significant structural perturbation occurred when Asp(14) was mutated, and it appears that this residue is important in stabilizing the trans peptide bond preceding Pro(13) and is thus a key residue in maintaining the highly constrained structure of SFTI-1. This aspartic acid residue is thought to be involved in the cyclization mechanism associated with excision of SFTI-1 from its 58-amino acid precursor. Overall, this mutational analysis of SFTI-1 clearly defines the optimized nature of the SFTI-1 scaffold and demonstrates the importance of the secondary loop in maintaining the active conformation of the binding loop.

Relevance:

30.00%

Publisher:

Abstract:

Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another. Arbitrary, and often unrealistic, sharp boundaries can otherwise be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of Northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessments, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and assessment of the scale of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including: provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
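As an illustrative aside, the integration step described above (combining separate per-community model outputs into one composite map) can be sketched as follows. The array names and toy data are hypothetical and do not reproduce the paper's 28 models:

```python
import numpy as np

# Hypothetical per-community probability surfaces from separate statistical
# models, stacked as (n_models, rows, cols); values in [0, 1].
rng = np.random.default_rng(0)
community_probs = rng.random((3, 4, 5))  # 3 toy community models on a 4x5 grid

# Composite map: each grid cell is labelled with the community whose model
# assigns it the highest probability (a simple hard-classification rule).
composite_map = np.argmax(community_probs, axis=0)

# Confidence surface: the winning probability itself, useful for flagging
# transitional zones where communities overlap.
confidence = np.max(community_probs, axis=0)
```

In practice the per-cell rule would be supplemented by ecological constraints, but the argmax composition captures the basic integration idea.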

Relevance:

30.00%

Publisher:

Abstract:

We present a machine learning model that predicts a structural disruption score from a protein's primary structure. SCHEMA was introduced by Frances Arnold and colleagues as a method for determining putative recombination sites of a protein on the basis of the full (PDB) description of its structure. The present method provides an alternative to SCHEMA that is able to determine the same score from sequence data only. Circumventing the need for resolving the full structure enables the exploration of as-yet-unresolved and even hypothetical sequences for protein design efforts. Deriving the SCHEMA score from a primary structure is achieved using a two-step approach: first predicting a secondary structure from the sequence, and then predicting the SCHEMA score from the predicted secondary structure. The correlation coefficient for the prediction is 0.88 and indicates the feasibility of replacing SCHEMA with little loss of precision. ©2005 IEEE
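The two-step approach can be sketched as a composition of two predictors. Everything below is illustrative: the toy rules merely stand in for the trained secondary-structure and score models described in the abstract, so the numbers have no biological meaning:

```python
def predict_secondary_structure(sequence: str) -> str:
    """Toy stand-in for a trained sequence -> secondary-structure predictor.
    Maps each residue to H (helix), E (strand), or C (coil) by a fixed rule."""
    helix_formers, strand_formers = set("AELM"), set("VIFY")
    out = []
    for aa in sequence:
        if aa in helix_formers:
            out.append("H")
        elif aa in strand_formers:
            out.append("E")
        else:
            out.append("C")
    return "".join(out)

def predict_score(secondary: str) -> float:
    """Toy stand-in for the second-stage model: here, the fraction of
    structured (non-coil) residues, as a crude disruption proxy."""
    if not secondary:
        return 0.0
    return sum(ss != "C" for ss in secondary) / len(secondary)

def score_from_sequence(sequence: str) -> float:
    """Two-step pipeline: primary structure -> secondary structure -> score."""
    return predict_score(predict_secondary_structure(sequence))
```

The design point is that the second-stage model never sees the 3D structure, only the (predicted) secondary-structure string.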

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to investigate how a community of practice focused on learning to teach secondary mathematics was created and sustained by pre-service and beginning teachers. Bulletin board discussions of one pre-service cohort are analysed in terms of Wenger’s (1998) three defining features of a community of practice: mutual engagement, joint enterprise, and a shared repertoire. The study shows that the emergent design of the community contributed to its sustainability in allowing the pre-service teachers to define their own professional goals and values. Sustainability was also related to how the participants expanded, transformed, and maintained the community during the pre-service program and after graduation.

Relevance:

30.00%

Publisher:

Abstract:

Eddy currents induced within a magnetic resonance imaging (MRI) cryostat bore during pulsing of gradient coils can be applied constructively, together with the gradient currents that generate them, to obtain good-quality gradient uniformities within a specified imaging volume over time. This can be achieved by simultaneously optimizing the spatial distribution and temporal pre-emphasis of the gradient coil current, to account for the spatial and temporal variation of the secondary magnetic fields due to the induced eddy currents. This method allows the tailored design of gradient coil/magnet configurations and consequent engineering trade-offs. To compute the transient eddy currents within a realistic cryostat vessel, a low-frequency finite-difference time-domain (FDTD) method using a total-field scattered-field (TFSF) scheme has been performed and validated.
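A minimal sketch of the temporal pre-emphasis idea, assuming a single exponential eddy-current term: the commanded waveform is boosted by a filtered copy of its own derivative so that the net field tracks the desired one. The amplitude `alpha`, time constant `tau` and the toy waveform are illustrative values, not fitted magnet parameters:

```python
import numpy as np

def pre_emphasize(desired, dt, alpha=0.1, tau=5e-3):
    """Single-term pre-emphasis: I_cmd = I_des + alpha * (dI_des/dt convolved
    with an exponential decay of time constant tau). Values are illustrative."""
    d_desired = np.diff(desired, prepend=desired[0]) / dt
    decay = np.exp(-np.arange(len(desired)) * dt / tau)
    correction = np.convolve(d_desired, decay)[: len(desired)] * dt
    return desired + alpha * correction

dt = 1e-4
t = np.arange(0.0, 0.02, dt)
trapezoid = np.clip(t / 2e-3, 0.0, 1.0)   # toy ramp-and-hold gradient pulse
commanded = pre_emphasize(trapezoid, dt)
```

The overshoot during the ramp decays away on the plateau, which is the qualitative signature of pre-emphasis; the full method in the abstract additionally optimizes the coil's spatial current distribution.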

Relevance:

30.00%

Publisher:

Abstract:

Eukaryotic membrane proteins cannot be produced in a reliable manner for structural analysis. Consequently, researchers still rely on trial-and-error approaches, which most often yield insufficient amounts. This means that membrane protein production is recognized by biologists as the primary bottleneck in contemporary structural genomics programs. Here, we describe a study to examine the reasons for successes and failures in recombinant membrane protein production in yeast, at the level of the host cell, by systematically quantifying cultures in high-performance bioreactors under tightly defined growth regimes. Our data show that the most rapid growth conditions of those chosen are not the optimal production conditions. Furthermore, the growth phase at which the cells are harvested is critical: we show that it is crucial to grow cells under tightly controlled conditions and to harvest them prior to glucose exhaustion, just before the diauxic shift. The differences in membrane protein yields that we observe under different culture conditions are not reflected in corresponding changes in mRNA levels of FPS1, but rather can be related to the differential expression of genes involved in membrane protein secretion and yeast cellular physiology. Copyright © 2005 The Protein Society.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to examine the effect of firm size and foreign operations on the exchange rate exposure of UK non-financial companies from January 1981 to December 2001. Design/methodology/approach – The impact of unexpected changes in exchange rates on firms' stock returns is examined. In addition, movements in bilateral, equally weighted (EQW) and trade-weighted exchange rate indices are considered. The sample is classified according to firm size and the extent of firms' foreign operations. In addition, structural changes in the relationship between exchange rate changes and individual firms' stock returns are examined over three sub-periods: before joining the exchange rate mechanism (pre-ERM), during ERM membership (in-ERM), and after departure from the ERM (post-ERM). Findings – The findings indicate that a higher percentage of UK firms are exposed to contemporaneous exchange rate changes than reported in previous studies. UK firms' stock returns are more affected by changes in the EQW and US$ European currency unit exchange rates, and respond less significantly to the basket of 20 countries' currencies relative to the UK pound. Exchange rate exposure is found to have a more significant impact on the stock returns of large firms compared with small and medium-sized companies. The evidence is consistent across all specifications using different exchange rates. The results provide evidence that the proportion of significant foreign exchange rate exposure is higher for firms which generate a higher percentage of revenues from abroad. The sensitivities of firms' stock returns to exchange rate fluctuations are most evident in the pre-ERM and post-ERM periods. Practical implications – This study provides important implications for public policymakers, financial managers and investors on how common stock returns of various sectors react to exchange rate fluctuations.
Originality/value – The empirical evidence supports the view that UK firms’ stock returns are affected by foreign exchange rate exposure.
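An exposure regression of the kind used in this literature can be sketched as follows. The data are synthetic and the coefficient values are assumptions for illustration, not the paper's estimates: the firm's stock return is regressed on the market return and the unexpected exchange-rate change, and the coefficient on the latter is the exposure beta:

```python
import numpy as np

# Synthetic data with a known foreign-exchange exposure (true_fx_beta).
rng = np.random.default_rng(42)
n = 500
market_return = rng.normal(0.0, 0.02, n)
fx_change = rng.normal(0.0, 0.01, n)      # unexpected exchange-rate change
true_fx_beta = 0.8
stock_return = (0.001 + 1.1 * market_return
                + true_fx_beta * fx_change
                + rng.normal(0.0, 0.005, n))

# OLS fit: columns are intercept, market return, and FX change; the third
# coefficient estimates the exchange rate exposure.
X = np.column_stack([np.ones(n), market_return, fx_change])
beta = np.linalg.lstsq(X, stock_return, rcond=None)[0]
fx_exposure = beta[2]
```

Sorting firms by size or foreign-revenue share and re-running this regression per group is the essence of the paper's cross-sectional comparisons.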

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The data used in this study are for the period 1980-2000. Almost midway through this period (in 1992), the Kenyan government liberalized the sugar industry and the role of the market increased, while the government's role with respect to control of prices, imports and other aspects of the sector declined. This exposed the local sugar manufacturers to external competition from other sugar producers, especially from the COMESA region. This study aims to determine whether there were any changes in the efficiency of production between the two periods (pre- and post-liberalization). Design/methodology/approach – The study utilized two methodologies for efficiency estimation: data envelopment analysis (DEA) and the stochastic frontier. DEA uses mathematical programming techniques and does not impose any functional form on the data. However, it attributes all deviation from the frontier to inefficiency. The stochastic frontier utilizes econometric techniques. Findings – The test for structural differences between the two periods does not show any statistically significant differences. However, both methodologies show a decline in efficiency levels from 1992, with the lowest levels experienced in 1998. From then on, efficiency levels began to increase. Originality/value – To the best of the authors' knowledge, this is the first paper to use both methodologies in the sugar industry in Kenya. It is shown that in industries where the noise (error) term is minimal (such as manufacturing), DEA and the stochastic frontier give similar results.
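In the special case of a single input and a single output, constant-returns DEA reduces to scaling each unit's productivity ratio by the best observed ratio, which makes the frontier idea easy to see. The mill names and figures below are hypothetical, not the study's data:

```python
# Toy constant-returns (CCR) DEA scores for one input and one output:
# efficiency is each unit's output/input ratio divided by the best
# observed ratio, so the frontier unit scores exactly 1.0.
mills = {
    "Mill A": {"input": 100.0, "output": 80.0},
    "Mill B": {"input": 120.0, "output": 120.0},
    "Mill C": {"input": 90.0,  "output": 63.0},
}

ratios = {name: d["output"] / d["input"] for name, d in mills.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}
```

With multiple inputs and outputs the same idea requires solving a linear program per unit, which is where DEA's mathematical-programming machinery comes in; the stochastic frontier instead fits a parametric production function with a composed error term.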

Relevance:

30.00%

Publisher:

Abstract:

Tuberculosis is one of the most devastating diseases in the world, primarily due to several decades of neglect and the emergence of multidrug-resistant (MDR) strains of M. tuberculosis, together with the increased incidence of disseminated infections produced by other mycobacteria in AIDS patients. This has prompted the search for new antimycobacterial drugs. A series of pyridine-2-, pyridine-3-, pyridine-4-, pyrazine and quinoline-2-carboxamidrazone derivatives and new classes of carboxamidrazone were prepared in an automated fashion and by traditional synthesis. Over nine hundred synthesized compounds were screened for their antimycobacterial activity against M. fortuitum (NCTC 10394) as a surrogate for M. tuberculosis. The new classes of amidrazones were also screened against M. tuberculosis H37Rv and for antimicrobial activities against various bacteria. Fifteen tested compounds were found to provide 90-100% inhibition of growth of M. tuberculosis H37Rv in the primary screen at 6.25 μg mL-1. The most active compound in the carboxamidrazone amide series had an MIC value of 0.1-2 μg mL-1 against M. fortuitum. The enzyme dihydrofolate reductase (DHFR) has been a drug-design target for decades. Blocking the enzymatic activity of DHFR is a key element in the treatment of many diseases, including cancer and bacterial and protozoal infections. The X-ray structures of DHFR from M. tuberculosis and human DHFR were found to have differences in the substrate binding site. The presence of a glycerol molecule in the X-ray structure of M. tuberculosis DHFR provided an opportunity to design new antifolates. The new antifolates described herein were designed to retain the pharmacophore of pyrimethamine (2,4-diamino-5-(4-chlorophenyl)-6-ethylpyrimidine), but encompass a range of polar groups that might interact with the M. tuberculosis DHFR glycerol binding pockets.
Finally, the research described in this thesis contributes to the preparation of molecularly imprinted polymers (MIPs) for the recognition of 2,4-diaminopyrimidine. The formation of hydrogen bonds between the model functional monomer 5-(4-tert-butyl-benzylidene)-pyrimidine-2,4,6-trione and 2,4-diaminopyrimidine in the pre-polymerisation stage was verified by 1H-NMR studies. Having proven that 2,4-diaminopyrimidine interacts strongly with the model 5-(4-tert-butyl-benzylidene)-pyrimidine-2,4,6-trione, 2,4-diaminopyrimidine-imprinted polymers were prepared using a novel cyclobarbital-derived functional monomer, acrylic acid 4-(2,4,6-trioxo-tetrahydro-pyrimidin-5-ylidenemethyl)phenyl ester, capable of multiple hydrogen bond formation with 2,4-diaminopyrimidine. The recognition properties of the respective polymers toward the template and other test compounds were evaluated by fluorescence. The results demonstrate that the polymers showed dose-dependent enhancement of fluorescence emissions. In addition, the results also indicate that the synthesized MIPs have higher 2,4-diaminopyrimidine binding ability compared with the corresponding non-imprinted polymers.

Relevance:

30.00%

Publisher:

Abstract:

The concept of shallow fluidized bed boilers is defined and a preliminary working design for a gas-fired package boiler has been produced. Those areas of the design requiring further study have been specified, and experimental investigations concerning these areas have been carried out. A two-dimensional, conducting-paper analog has been developed for the specific purpose of evaluating sheet fins. The analog has been generalised and is presented as a simple means of simulating the general, two-dimensional Helmholtz equation. By recording the transient response of spherical calorimetric probes when plunged into heated air-fluidized beds, heat transfer coefficients have been measured at bed temperatures up to 1100 °C. A correlation fitting all the data to within ±10% has been obtained. A model of heat transfer to surfaces immersed in high-temperature beds has been proposed. The model solutions are, however, only in qualitative agreement with the experimental data. A simple experimental investigation has revealed that the effective radial thermal conductivities of shallow fluidized beds are an order of magnitude lower than the axial conductivities. These must, consequently, be taken into account when considering heat transfer to surfaces immersed within fluidized beds. Preliminary work on pre-mixed gas combustion and some further qualitative experiments have been used as the basis for discussing the feasibility of combusting heavy fuel oils within shallow beds. The use of binary beds, within which the fuel could be both gasified and subsequently burnt, is proposed. Finally, the consequences of the experimental studies for the initial design are considered, and suggestions for further work are made.
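For reference, the general two-dimensional Helmholtz equation that the analog simulates, together with the sheet-fin special case it was built to evaluate, can be written (using assumed, conventional notation rather than the thesis's own symbols) as:

```latex
\nabla^{2}\phi + k^{2}\phi = f(x,y),
\qquad
\nabla^{2}\theta - m^{2}\theta = 0,
\quad
m^{2} = \frac{2h}{k_{s}\,\delta},
```

where \(\phi\) is the general field variable, \(\theta\) the fin temperature excess over ambient, \(h\) the surface heat-transfer coefficient, \(k_{s}\) the fin conductivity and \(\delta\) the sheet thickness (heat assumed lost from both faces).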

Relevance:

30.00%

Publisher:

Abstract:

The analysis and prediction of the dynamic behaviour of structural components plays an important role in modern engineering design. In this work, the so-called "mixed" finite element models based on Reissner's variational principle are applied to the solution of free and forced vibration problems for beam and plate structures. The mixed beam models are obtained by using elements with various shape functions, ranging from simple linear to quadratic and cubic functions. The elements were in general capable of predicting the natural frequencies and dynamic responses with good accuracy. An isoparametric quadrilateral element with 8 nodes was developed for application to thin plate problems. The element has 32 degrees of freedom (one deflection, two bending moments and one twisting moment per node), which is suitable for discretization of plates with arbitrary geometry. A linear isoparametric element and two non-conforming displacement elements (4-node and 8-node quadrilaterals) were extended to the solution of dynamic problems. An auto-mesh generation program was used to facilitate the preparation of the input data required by the 8-node quadrilateral elements of mixed and displacement type. Numerical examples were solved using both the mixed beam and plate elements to predict a structure's natural frequencies and dynamic response to a variety of forcing functions. The solutions were compared with the available analytical and displacement model solutions. The mixed elements developed have been found to have significant advantages over the conventional displacement elements in the solution of plate-type problems: a dramatic saving in computational time is possible without any loss in solution accuracy. With beam-type problems, there appears to be no significant advantage in using mixed models.
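The free-vibration problems solved here all reduce to a generalized eigenproblem K v = ω² M v. A minimal displacement-based sketch (an axial bar rather than the thesis's mixed beam and plate elements; all parameters are illustrative, nondimensionalized values):

```python
import numpy as np

# Fixed-free axial bar with linear elements and a consistent mass matrix.
# Exact natural frequencies for comparison: omega_k = (2k-1) * pi/2
# (with E = rho = A = L = 1).
E, rho, A, L, n_el = 1.0, 1.0, 1.0, 1.0, 20
le = L / n_el
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
m_e = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # element mass

n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
for e in range(n_el):            # assemble overlapping 2x2 element matrices
    K[e:e + 2, e:e + 2] += k_e
    M[e:e + 2, e:e + 2] += m_e

# Apply the fixed end (node 0) by deleting its row and column.
K, M = K[1:, 1:], M[1:, 1:]

# Generalized eigenproblem K v = omega^2 M v, solved via M^-1 K.
eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
omegas = np.sort(np.sqrt(np.real(eigvals)))
```

A mixed formulation would carry force-type quantities (moments) as nodal unknowns alongside deflections, but the assembled eigenproblem it produces has this same structure.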