980 results for Reproducing Transformation Method
Abstract:
We describe a new rapid and efficient polymerase chain reaction (PCR)-based site-directed mutagenesis method. This procedure is effective with any plasmid and employs four oligonucleotide primers. One primer contains the desired mutation, the second is oriented in the opposite direction (one of these two primers should be phosphorylated), and the third and fourth code, in complementary fashion, for a unique restriction site to be introduced in a nonessential region. The method consists of two simultaneous PCR reactions; the PCR products are digested with the enzyme that recognizes the newly introduced unique restriction site, then ligated and used to transform competent bacteria. Additionally, the use of Dpn I facilitates the elimination of template DNA. The newly introduced restriction site is essential for ligation of the two PCR products in the correct orientation and is further used for mutant screening. The resulting plasmids carry both the new restriction site and the desired mutation. Using this method, more than 20 mutants have already been generated (using two different kinds of templates); all of these mutants were sequenced to confirm the desired mutation and transfected into AtT-20 cells, and the mutant proteins expressed from the vector were assayed.
Abstract:
Distribution, abundance, feeding behaviour, host preference, parity status, and human-biting and infection rates are among the medical entomological parameters evaluated when determining the vector capacity of mosquito species. To evaluate these parameters, mosquitoes must be collected using an appropriate method. Malaria is primarily transmitted by anthropophilic and synanthropic anophelines. Thus, collection methods must allow identification of the anthropophilic species and efficiently evaluate the parameters involved in malaria transmission dynamics. Consequently, human landing catches would be the most appropriate method were it not for their inherent risk. The choice of alternative anopheline collection methods, such as traps, must consider how effectively they reproduce the efficiency of human attraction. Collection methods lure mosquitoes using a mixture of olfactory, visual and thermal cues. Here, we review, classify and compare the efficiency of anopheline collection methods, with an emphasis on Neotropical anthropophilic species, especially Anopheles darlingi, under distinct malaria epidemiological conditions in Brazil.
Abstract:
The uncertainties inherent to experimental differential scanning calorimetry data are evaluated. A new procedure is developed to perform the kinetic analysis of continuous-heating calorimetric data when the heat capacity of the sample changes during crystallization. The accuracy of isothermal calorimetric data is analyzed in terms of the peak-to-peak noise of the calorimetric signal and the baseline drift typical of differential scanning calorimetry equipment. Their influence on the evaluation of the kinetic parameters is discussed. An empirical construction of the time-temperature-transformation and temperature-heating-rate-transformation diagrams, grounded in the kinetic parameters, is presented. The method is applied to the kinetic study of the primary crystallization of Te in an amorphous alloy of nominal composition Ga20Te80 obtained by rapid solidification.
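Constructions of this kind can be sketched directly from fitted kinetic parameters. A minimal illustration, assuming Johnson-Mehl-Avrami-Kolmogorov (JMAK) kinetics with an Arrhenius rate constant; the values of k0, Ea and n below are invented placeholders, not the parameters fitted in the study:

```python
import numpy as np

R = 8.314                        # gas constant, J/(mol K)
k0, Ea, n = 1e12, 150e3, 2.5     # illustrative kinetic parameters, not fitted values

def time_to_fraction(T, x=0.05):
    """Time to reach a transformed fraction x at temperature T under
    JMAK kinetics: x(t) = 1 - exp(-(k*t)^n), with Arrhenius k(T)."""
    k = k0 * np.exp(-Ea / (R * T))            # rate constant, 1/s
    return (-np.log(1.0 - x)) ** (1.0 / n) / k

# One isotherm of a time-temperature-transformation diagram:
# times bracketing 5% and 95% transformation at each temperature
for T in (450.0, 500.0, 550.0):               # temperatures in K
    t5, t95 = time_to_fraction(T, 0.05), time_to_fraction(T, 0.95)
    print(f"T = {T:.0f} K: t(5%) = {t5:.3g} s, t(95%) = {t95:.3g} s")
```

Sweeping the temperature axis and plotting the two times at each isotherm traces out the T-T-T curves the abstract refers to.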
Abstract:
In this work we develop the canonical formalism for constrained systems with a finite number of degrees of freedom by making use of the Poincaré-Cartan integral invariant method. A set of variables suitable for the reduction to the physical ones can be obtained by means of a canonical transformation. From the invariance of the Poincaré-Cartan integral under canonical transformations we obtain the form of the equations of motion for the physical variables of the system.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data, but it did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-to-top method-complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) showed themselves to be better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), by contrast, performed well under balanced category conditions. In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
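The quadrat-count Morisita index mentioned above has a compact form: partition the domain into Q quadrats, count the n_i points in each, and compute I = Q * sum(n_i * (n_i - 1)) / (N * (N - 1)), with I near 1 for random patterns and I > 1 for clustered ones. A minimal sketch on synthetic data (the grid size and the two point sets are illustrative, not the Swiss radon data):

```python
import numpy as np

def morisita_index(points, q):
    """Classic Morisita index over a q x q grid of quadrats.
    I ~ 1 for random points, > 1 for clustered, < 1 for regular."""
    pts = np.asarray(points, dtype=float)
    # Normalize coordinates to [0, 1] so the grid covers the sampled domain
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    unit = (pts - lo) / np.where(hi > lo, hi - lo, 1.0)
    cells = np.minimum((unit * q).astype(int), q - 1)   # quadrat index per point
    _, counts = np.unique(cells[:, 0] * q + cells[:, 1], return_counts=True)
    n_total, n_quadrats = len(pts), q * q
    return n_quadrats * np.sum(counts * (counts - 1)) / (n_total * (n_total - 1))

rng = np.random.default_rng(0)
random_pts = rng.random((500, 2))                       # uniform: I close to 1
clustered = rng.normal([0.5, 0.5], 0.05, (500, 2))      # one tight cluster: I well above 1
print(morisita_index(random_pts, 5), morisita_index(clustered, 5))
```

Computing the index at several radon thresholds, as the thesis proposes, then amounts to filtering the point set by level before calling the function.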
Abstract:
High-throughput prioritization of cancer-causing mutations (drivers) is a key challenge of cancer genome projects, due to the number of somatic variants detected in tumors. One important step in this task is to assess the functional impact of tumor somatic mutations. A number of computational methods have been employed for that purpose, although most were originally developed to distinguish disease-related nonsynonymous single nucleotide variants (nsSNVs) from polymorphisms. Our new method, transformed Functional Impact score for Cancer (transFIC), improves the assessment of the functional impact of tumor nsSNVs by taking into account the baseline tolerance of genes to functional variants.
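One way to picture "baseline tolerance" is to rescale a mutation's raw functional impact score against the distribution of scores carried by tolerated variants of the same gene. The sketch below uses a simple z-score for that purpose; this is an illustrative reading of the idea, not the published transFIC formula, and the gene names and numbers are invented:

```python
from statistics import mean, stdev

def transformed_score(raw_score, baseline_scores):
    """Rescale a raw functional-impact score against a gene's baseline
    tolerance, here as a plain z-score (illustrative only, not the
    published transFIC transformation)."""
    mu, sigma = mean(baseline_scores), stdev(baseline_scores)
    return (raw_score - mu) / sigma

# Invented example: the same raw score means more in a gene whose tolerated
# variants normally score low than in a gene that tolerates high scores.
intolerant_gene = [0.1, 0.15, 0.2, 0.1, 0.12]   # baseline scores of tolerated variants
tolerant_gene = [0.5, 0.7, 0.6, 0.8, 0.65]
print(transformed_score(0.9, intolerant_gene))  # large positive: likely impactful
print(transformed_score(0.9, tolerant_gene))    # modest: gene tolerates high scores
```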
Abstract:
Even though much improvement has been made in plant transformation methods, the screening of transgenic plants is often laborious work. Most approaches for detecting the transgene in transformed plants are still time-consuming and can be quite expensive. The objective of this study was to search for a simpler method to screen for transgenic plants. The infiltration of kanamycin (100 mg/mL) into tobacco leaves resulted in conspicuous chlorotic spots on the leaves of non-transgenic plants, while no spots were seen on the leaves of transformed plants. This reaction occurred regardless of the age of the tested plants, and the method proved to be simple, fast, non-destructive, relatively cheap, and reliable. These results were comparable to those obtained by polymerase chain reaction (PCR) amplification of the transgene using specific primers.
Abstract:
This work describes the formation of transformation products (TPs) by the enzymatic degradation, at laboratory scale, of two highly consumed antibiotics: tetracycline (Tc) and erythromycin (ERY). The analysis of the samples was carried out by a fast and simple method based on a novel configuration of an on-line turbulent flow system coupled to a hybrid linear ion trap-high resolution mass spectrometer. The method was optimized and validated for the complete analysis of ERY, Tc and their transformation products within 10 min without any other sample manipulation. Furthermore, the applicability of the on-line procedure was evaluated for 25 additional antibiotics, covering a wide range of chemical classes, in different environmental waters with satisfactory quality parameters. Degradation rates obtained for Tc by laccase enzyme and for ERY by EreB esterase enzyme, without the presence of mediators, were ∼78% and ∼50%, respectively. Concerning the identification of TPs, three suspected compounds for Tc and five for ERY have been proposed. In the case of Tc, tentative molecular formulas with mass errors within 2 ppm have been based on the hypothesis of dehydroxylation, (bi)demethylation and oxidation of rings A and C as the major reactions. In contrast, the major TP detected for ERY has been identified as "dehydration ERY-A", with the same molecular formula as its parent compound. In addition, the evaluation of the antibiotic activity of the samples along the enzymatic treatments showed a decrease of around 100% in both cases.
Abstract:
This study describes the use of electroporation for transforming Xanthomonas axonopodis pv. citri (Xac), the causal agent of citrus (Citrus spp.) canker. It also evaluates the methodology used for this species under different electrical parameters. The bacterium used in the study (Xac 306) was the same strain used for the recent complete sequencing of the organism. A plasmid (pUFR047, gentamycin-resistant) is reported here to be able to replicate in cells of Xac. Following the preparation and resuspension of competent Xac cells at a density of ~4 × 10^10 cfu/ml in 10% glycerol, and the addition of the replicative plasmid, an electrical pulse was applied to each treatment. Selection of transformants showed a high transformation efficiency (1.1 × 10^6 transformants/µg DNA), which indicates an effective, and inverse, combination of electrical resistance (50 Ω) and capacitance (50 µF) for this species, with an electrical field strength of 12.5 kV/cm and a 2.7-ms pulse duration. Besides describing a method for electroporation of Xac 306, this study provides additional information for the use of the technique in studies on the production of mutants of this species.
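The reported electrical settings are internally consistent: taking the resistance as 50 Ω and the capacitance as 50 µF, the exponential-decay time constant of an RC discharge is τ = R·C = 2.5 ms, close to the ~2.7 ms pulse duration quoted. A back-of-envelope check:

```python
# Back-of-envelope check of the reported electroporation settings:
# the exponential-decay time constant of an RC discharge is tau = R * C.
R_ohms = 50.0        # resistance reported in the study
C_farads = 50e-6     # capacitance, 50 uF
tau_ms = R_ohms * C_farads * 1e3
print(f"tau = {tau_ms:.1f} ms")   # 2.5 ms, consistent with the ~2.7 ms pulse
```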
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the use of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the truncation order of the proposed eigenfunction expansion. For covalidation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up) based on the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
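The method-of-lines route used for covalidation can be sketched in a few lines: finite-difference the space variable and march the resulting ODE system in time. A minimal sketch (in Python rather than Mathematica, with explicit Euler stepping instead of NDSolve) for a nonlinear diffusion problem u_t = (D(u) u_x)_x, where the potential-dependent coefficient D(u) = 1 + u and the boundary data are invented for illustration:

```python
import numpy as np

def nonlinear_diffusion_mol(n=51, t_end=0.1):
    """Method-of-lines sketch for u_t = (D(u) u_x)_x with D(u) = 1 + u,
    u(0,t) = 1, u(1,t) = 0, u(x,0) = 0: space is finite-differenced in
    conservative (flux) form and the ODE system is advanced with explicit Euler."""
    dx = 1.0 / (n - 1)
    u = np.zeros(n)
    u[0] = 1.0                          # fixed boundary values
    D = lambda v: 1.0 + v               # potential-dependent coefficient
    dt = dx * dx / 8.0                  # safely below the explicit stability limit
    for _ in range(int(t_end / dt)):
        # fluxes D(u) u_x evaluated midway between grid nodes
        flux = D(0.5 * (u[1:] + u[:-1])) * (u[1:] - u[:-1]) / dx
        u[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
    return u

u = nonlinear_diffusion_mol()
print(u[:5])   # profile decays monotonically from the u = 1 boundary
```

GITT replaces the finite-difference step with a symbolic eigenfunction expansion, which is what gives it the error control over the truncation order that the abstract emphasizes.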
Abstract:
The mathematical model for two-dimensional unsteady sonic flow, based on the classical diffusion equation with an imaginary coefficient, is presented and discussed. The main purpose is to develop a rigorous formulation in order to bring to light the correspondence between the sonic, supersonic and subsonic panel method theories. Source and doublet integrals are obtained, and Laplace transformation demonstrates that, in fact, the source integral is the solution of the doublet integral equation. It is shown that the doublet-only formulation reduces to a Volterra integral equation of the first kind, and a numerical method is proposed in order to solve it. To the authors' knowledge this is the first reported solution of the unsteady sonic thin airfoil problem through the use of doublet singularities. Comparisons with the source-only formulation are shown for the problem of a flat plate in combined harmonic heaving and pitching motion.
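The general shape of such a scheme can be sketched as follows: discretize the first-kind equation ∫_0^t K(t, s) f(s) ds = g(t) with a midpoint quadrature, which yields a lower-triangular linear system solvable by forward substitution. This is a generic sketch on an invented test kernel with a known answer, not the paper's aerodynamic kernel:

```python
import numpy as np

def solve_volterra_first_kind(K, g, t_max, n):
    """Solve the first-kind Volterra equation  int_0^t K(t, s) f(s) ds = g(t)
    by midpoint-rule discretization: the resulting linear system is lower
    triangular, so f is recovered by forward substitution."""
    h = t_max / n
    s = (np.arange(n) + 0.5) * h        # midpoints where f is approximated
    t = (np.arange(n) + 1.0) * h        # collocation times
    f = np.zeros(n)
    for i in range(n):
        acc = sum(K(t[i], s[j]) * f[j] for j in range(i))
        f[i] = (g(t[i]) / h - acc) / K(t[i], s[i])
    return s, f

# Check on a case with a known answer: K = 1, g(t) = sin(t)  =>  f(t) = cos(t)
s, f = solve_volterra_first_kind(lambda t, s: 1.0, np.sin, 1.0, 200)
print(np.max(np.abs(f - np.cos(s))))    # small discretization error
```

Midpoint product integration is a standard stable choice for first-kind Volterra equations; higher-order rules can amplify noise in g.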
Abstract:
This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take the best advantage of both, keeping the convergence features of Newton-Raphson with the low memory and time requirements of the Successive Substitution schemes. The large system is solved employing few effective variables, using the greatest possible part of the model equations in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of Newton-Raphson. The methodology is illustrated with a simple algebraic system and applied to a simple thermodynamic, mechanical and heat transfer model of a single-stage vapor compression refrigeration system. Three distinct approaches for reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation of tabulated data, polynomial fitted curves, and functions derived from the Helmholtz free energy.
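The division of labor described above can be sketched on a toy system (the equations below are invented for illustration, not the refrigeration model): Newton-Raphson iterates on a single effective variable x, while the remaining variable y is fixed at each residual evaluation by successive substitution.

```python
from math import exp

def inner_by_substitution(x, y0=1.0, tol=1e-12, max_iter=200):
    """Fix the secondary variable by successive substitution: y = exp(-x*y)."""
    y = y0
    for _ in range(max_iter):
        y_new = exp(-x * y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

def residual(x):
    """Residual of the remaining effective equation, x^2 + y(x) - 2 = 0,
    once y has been fixed by substitution."""
    return x * x + inner_by_substitution(x) - 2.0

def newton_on_effective(x0, tol=1e-10, max_iter=50):
    """Newton-Raphson on the single effective variable, with a
    finite-difference derivative of the substituted residual."""
    x, h = x0, 1e-7
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= r * h / (residual(x + h) - r)   # Newton step, FD slope
    return x

x = newton_on_effective(1.0)
print(x, inner_by_substitution(x))
```

The memory savings come from the Newton Jacobian being built only over the effective variables, here a single scalar instead of the full two-variable system.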
Abstract:
Objective of the study: The aim of this study is to understand, in a spatial context, the institutional implications of Abenomics, the contemporary economic reform taking place in Japan, which is meant to finally end over two decades of economic malaise. As its theoretical perspective of choice, this study explores a synthesis of institutionalism as the main approach, complemented by the economies of agglomeration of spatial economics, or New Economic Geography (NEG). The outcomes include a narrative with implications for future research, as well as possible future implications for the economy of Japan itself. The narrative seeks to depict the dialogue between public discourse and governmental communication in order to create a picture of how this phenomenon is being socially constructed. This is done by studying the official communications of the Cabinet along with public media commentary on the respective topics. The reform is studied with reference to the historical socio-cultural and economic evolution of Japan, which in turn is explored through a literature review. This serves to assess the unique institutional characteristics of Japan pertinent to reform.
Research method: This is a social and exploratory qualitative study, an institutional narrative case study. The methodological approach was kept practical: in addition to a literature review, a narrative, thematic content analysis with structural emphasis was used to construct the contemporary narrative based on the Cabinet communication. This was combined with practical analytic tools borrowed from critical discourse analysis, which were utilized to assess the implicit intertextual agenda within the sources.
Findings: What appears to characterize the discourse is a status quo bias that comes in multiple forms. The bias is also coded into the institutions surrounding the reform, wherein stakeholders have vested interests in protecting the current state of affairs. This correlates with the uncertainty avoidance characteristic of Japan.
Japan heeds the international criticism to deregulate on a rhetorical level but, consistent with history, the Cabinet's solutions appear increasingly bureaucratic. Hence, the imposed Western information-age paradigm of liberal cluster agglomeration seems ill-suited to Japan, which lacks risk takers and a felicitous entrepreneurial culture. The Japanese, however, possess vast innovative potential, ascribed to some institutional practices and traits but restrained by others. The derived conclusion is to study successful intrapreneur cases in the Japanese institutional setting as a potential benchmark for Japan-specific cluster agglomeration, and as a solution to the structural problems impeding its growth.
Abstract:
Carbon dioxide is regarded nowadays as the primary anthropogenic greenhouse gas leading to global warming. Hence, chemical fixation of CO2 has attracted much attention as a possible way to manufacture useful chemicals. One of the most interesting approaches to CO2 transformation is the synthesis of organic carbonates. Since conventional production technologies for these compounds involve poisonous phosgene and carbon monoxide, there is a need to develop novel synthetic methods that would better match the principles of "Green Chemistry" towards the protection of the environment and human health. Over the years, the synthesis of dimethyl carbonate has been under intensive investigation in academia and industry. This study was therefore directed towards an equally important homologue of the carbonic ester family, namely diethyl carbonate (DEC). A novel method for synthesizing DEC from ethanol and CO2 over heterogeneous catalysts based on ceria (CeO2) was studied in a batch reactor. A notable drawback of the reaction, however, is its thermodynamic limitations. The calculated values revealed that the reaction is exothermic (ΔrH°298K = −16.6 kJ/mol) and does not occur spontaneously at room temperature (ΔrG°298K = 35.85 kJ/mol). Moreover, co-produced water easily shifts the reaction equilibrium towards the reactants, precluding high yields of the carbonate. Therefore, in-situ dehydration was applied using butylene oxide as a chemical water trap. A 9-fold enhancement in the amount of DEC was observed upon introduction of butylene oxide to the reaction medium in comparison with the synthetic method without any water removal. This result confirms that the reaction equilibrium was shifted in favour of the desired product and the thermodynamic limitations of the reaction were mitigated by using butylene oxide as a water scavenger. In order to gain insight into the reaction network, kinetic experiments were performed over commercial cerium oxide.
On the basis of the selectivity/conversion profile, it could be concluded that the one-pot synthesis of diethyl carbonate from ethanol, CO2 and butylene oxide occurs via a consecutive route involving the cyclic carbonate as an intermediate. Since commercial cerium oxide suffers from deactivation problems after the first reaction cycle, in-house CeO2 was prepared by a room-temperature precipitation technique. Variation of synthesis parameters such as the synthesis time, calcination temperature and pH of the reaction solution turned out to have a considerable influence on the physico-chemical and catalytic properties of CeO2. Increasing the synthesis time resulted in a high specific surface area of the cerium oxide, and the catalyst prepared within 50 h exhibited the highest amount of basic sites on its surface. Furthermore, synthesis at pH 11 yielded the cerium oxide with the highest specific surface area, 139 m2/g, among all prepared catalysts. Moreover, the CeO2-pH11 catalyst demonstrated the best catalytic activity, and 2 mmol of DEC was produced at 180 °C and 9 MPa final reaction pressure. In addition, ceria supported on the high-specific-surface-area silicas MCM-41, SBA-15 and silica gel was synthesized and tested for the first time as a catalyst in the synthesis of DEC. Deposition of cerium oxide on the MCM-41 and SiO2 supports resulted in a substantial increase in the alkalinity of the carrier materials. Hexagonal SBA-15 modified with 20 wt% ceria exhibited the second-highest basicity in the series of supported catalysts. Evaluation of the catalytic activity of the ceria-supported catalysts showed that the reaction carried out over 20 wt% CeO2-SBA-15 generated the highest amount of DEC.
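The thermodynamic limitation quoted above can be made concrete with a standard relation: the equilibrium constant follows from K = exp(−ΔrG°/RT). With the reported ΔrG°298K = 35.85 kJ/mol, K comes out on the order of 10^-7, which is why in-situ water removal is needed to reach useful DEC yields. A quick check:

```python
from math import exp

R = 8.314            # gas constant, J/(mol K)
T = 298.15           # K
dG = 35.85e3         # J/mol, standard Gibbs energy reported for DEC synthesis

# van 't Hoff / standard-state relation: K = exp(-dG / (R*T))
K = exp(-dG / (R * T))
print(f"K(298 K) = {K:.2e}")   # ~5e-7: equilibrium lies far toward the reactants
```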
Abstract:
The purpose of this paper is to show how Gadamer's hermeneutics synthesizes the insights of both Heidegger and Dilthey in order to introduce a new hermeneutics. Gadamer's hermeneutics is based neither solely on the priority of ontology, as Heidegger insists, nor is it only a product of life which can be objectively understood through study and rigorous method, as Dilthey suggests. For Gadamer, hermeneutics is the bringing together of ontology in terms of history. By this synthesis Gadamer not only places himself within the context of a Lebensphilosophie, but also shows that it is within language that Being can be disclosed according to a lived context. Throughout this paper the philosophies of Dilthey and Heidegger are explicated within a historical context so as to bring out how, and why, Gadamer sees the need to surpass these philosophies. Through Gadamer's philosophy of play and the game, language, the dialogical model, application, and the fusion of horizons, we can see how Gadamer's critique and questioning of these two philosophies leads to his new hermeneutics. Special attention is paid to the way in which these two contrasting philosophies were used to complement each other in the production of Gadamer's philosophical hermeneutics as it is presented in his major work, Truth and Method. For Gadamer, the task of understanding is never complete. Therefore, his hermeneutics remains a dynamic structure with which we can always question the past and our traditions. This paper seeks to show his philosophical movements within these questions.