Abstract:
The seasonal stability tests of Canova & Hansen (1995) (CH) provide a method complementary to that of Hylleberg et al. (1990) for testing for seasonal unit roots. However, the small-sample distributions of the CH tests are unknown. We present a method to numerically compute critical values and P-values for the CH tests for any sample size and any seasonal periodicity. The method applies not only to the seasonal periodicities in common use, but to any other as well.
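The CH statistic itself is not reproduced in this abstract; the following is a minimal sketch of the generic Monte Carlo approach such a numerical method implies, assuming a user-supplied function ch_statistic(y, s) that evaluates the CH statistic (an assumption, not part of the original work):

```python
import numpy as np

def simulate_null_distribution(ch_statistic, n, s, n_reps=10000, seed=0):
    """Monte Carlo null distribution of a seasonal stability statistic.

    ch_statistic : callable taking a 1-D series and the seasonal period s
                   and returning the scalar test statistic (assumed supplied).
    n            : sample size of interest.
    s            : seasonal periodicity (e.g. 4, 12, or any other).
    """
    rng = np.random.default_rng(seed)
    stats = np.empty(n_reps)
    for r in range(n_reps):
        # Data generated under the null of stable (deterministic) seasonality:
        # fixed seasonal means plus white noise.
        seasonal_means = rng.normal(size=s)
        y = np.tile(seasonal_means, n // s + 1)[:n] + rng.normal(size=n)
        stats[r] = ch_statistic(y, s)
    return np.sort(stats)

def critical_value(null_stats, alpha=0.05):
    # Right-tail critical value at level alpha.
    return np.quantile(null_stats, 1.0 - alpha)

def p_value(null_stats, observed):
    # Proportion of simulated statistics at least as large as the observed one.
    return np.mean(null_stats >= observed)
```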
Abstract:
Some factors that affect the experimental results of nanoindentation tests, such as the contact depth, contact area, load, and loading duration, are analyzed in this article. Combined with the results of finite element numerical simulation, we find that the creep behaviour of the tested material is one of the important factors causing the micron-scale indentation hardness to decrease with increasing indentation depth. Analysis of experimental results at different indentation depths demonstrates that the hardness decrease can be mitigated if the continuous stiffness measurement technique is not adopted; this indicates that the test method itself may also be one of the factors causing the observed decrease in hardness.
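For context, hardness in depth-sensing indentation is usually reported through the Oliver-Pharr relations; the sketch below shows that standard calculation (not the authors' finite element model), assuming an ideal Berkovich tip area function and consistent units:

```python
def oliver_pharr_hardness(P_max, h_max, S, epsilon=0.75, tip_constant=24.5):
    """Indentation hardness from the standard Oliver-Pharr relations.

    P_max : peak load
    h_max : indentation depth at peak load
    S     : contact stiffness dP/dh at the onset of unloading
    Contact depth h_c = h_max - epsilon * P_max / S,
    ideal Berkovich area A_c = 24.5 * h_c**2, hardness H = P_max / A_c.
    Use consistent units (e.g. mN and nm) for load and depth.
    """
    h_c = h_max - epsilon * P_max / S
    A_c = tip_constant * h_c**2
    return P_max / A_c

# Example (illustrative values): H = oliver_pharr_hardness(5.0, 400.0, 0.05)
```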
Abstract:
Separating the dynamics of variables that evolve on different timescales is a common assumption in exploring complex systems, and a great deal of progress has been made in understanding chemical systems by treating the fast processes of an activated chemical species independently from the slower processes that precede activation. Protein motion underlies all biocatalytic reactions, and understanding the nature of this motion is central to understanding how enzymes catalyze reactions with such specificity and such rate enhancement. This understanding is challenged by evidence of breakdowns in the separability of the timescales of dynamics in the active site from the motions of the solvating protein. Quantum simulation methods that bridge these timescales by simultaneously evolving quantum and classical degrees of freedom provide an important means of exploring this breakdown. In the following dissertation, three problems of enzyme catalysis are explored through quantum simulation.
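The dissertation's specific simulation methods are not detailed in the abstract. As a generic illustration of simultaneously evolving quantum and classical degrees of freedom, here is a minimal Ehrenfest mean-field sketch for a two-state quantum subsystem coupled to a single classical coordinate; the Hamiltonian, coupling, and parameters are illustrative assumptions:

```python
import numpy as np

def H(x, eps=1.0, c=0.5):
    # Illustrative two-state Hamiltonian whose diagonal energies depend on the
    # classical coordinate x; constant off-diagonal coupling.
    return np.array([[ eps + c * x, 0.1],
                     [ 0.1,        -eps - c * x]])

def dH_dx(x, c=0.5):
    return np.array([[ c, 0.0],
                     [0.0, -c]])

def ehrenfest_step(psi, x, p, m=1.0, dt=1e-3):
    """One mean-field (Ehrenfest) step: quantum amplitudes and the classical
    coordinate advance together; the classical force is the expectation value
    of -dH/dx in the current quantum state (hbar = 1)."""
    force = -np.real(np.conj(psi) @ dH_dx(x) @ psi)
    # Velocity-Verlet for the classical degree of freedom.
    p_half = p + 0.5 * dt * force
    x_new = x + dt * p_half / m
    # Propagate the quantum amplitudes under the updated Hamiltonian
    # (eigen-decomposition propagator for this 2x2 example).
    w, V = np.linalg.eigh(H(x_new))
    psi_new = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    force_new = -np.real(np.conj(psi_new) @ dH_dx(x_new) @ psi_new)
    p_new = p_half + 0.5 * dt * force_new
    return psi_new, x_new, p_new

# Example: psi = np.array([1.0 + 0j, 0.0]); x, p = 0.0, 0.0
# for _ in range(1000): psi, x, p = ehrenfest_step(psi, x, p)
```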
Abstract:
Homologous recombination is a source of diversity in both natural and directed evolution. Standing genetic variation that has passed the test of natural selection is combined in new ways, generating functional and sometimes unexpected changes. In this work we evaluate the utility of homologous recombination as a protein engineering tool, both in comparison with and combined with other protein engineering techniques, and apply it to an industrially important enzyme: Hypocrea jecorina Cel5a.
Chapter 1 reviews work over the last five years on protein engineering by recombination. Chapter 2 describes the recombination of Hypocrea jecorina Cel5a endoglucanase with homologous enzymes in order to improve its activity at high temperatures. A chimeric Cel5a that is 10.1 °C more stable than wild-type and hydrolyzes 25% more cellulose at elevated temperatures is reported. Chapter 3 describes an investigation into the synergy of thermostable cellulases that have been engineered by recombination and other methods. An engineered endoglucanase and two engineered cellobiohydrolases synergistically hydrolyzed cellulose at high temperatures, releasing over 200% more reducing sugars over 60 h at their optimal mixture relative to the best mixture of wild-type enzymes. These results provide a framework for engineering cellulolytic enzyme mixtures for the industrial conditions of high temperatures and long incubation times.
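As an illustration of what finding an "optimal mixture" of three cellulases under a fixed total enzyme loading involves, here is a minimal grid-search sketch; the yield function is a hypothetical stand-in for a reducing-sugar assay and is not the procedure used in the thesis:

```python
import itertools
import numpy as np

def optimal_mixture(yield_fn, step=0.05):
    """Grid search over fractions of three enzymes summing to 1 under a fixed
    total loading. yield_fn(f_eg, f_cbh1, f_cbh2) returns the reducing sugars
    measured (or predicted) for that blend -- an assumed assay wrapper."""
    best, best_mix = -np.inf, None
    fracs = np.arange(0.0, 1.0 + 1e-9, step)
    for f1, f2 in itertools.product(fracs, fracs):
        f3 = 1.0 - f1 - f2
        if f3 < -1e-9:
            continue
        y = yield_fn(f1, f2, max(f3, 0.0))
        if y > best:
            best, best_mix = y, (f1, f2, max(f3, 0.0))
    return best_mix, best
```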
In addition to this work on recombination, we explored three other problems in protein engineering. Chapter 4 describes an investigation into replacing enzymes that use complex cofactors with enzymes that use simple cofactors, using an E. coli enolase as a model system. Chapter 5 describes engineering broad-spectrum aldehyde resistance in Saccharomyces cerevisiae by evolving an alcohol dehydrogenase simultaneously for activity and promiscuity. Chapter 6 describes an attempt to engineer gene-targeted hypermutagenesis into E. coli to facilitate continuous in vivo selection systems.
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams therefore span multiple length scales, which makes modeling their mechanical properties difficult. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a non-linear plateau-stress regime. However, they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but they are computationally impractical. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level but retains some physical insight, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is ‘how to extract the intrinsic material response from simple mechanical test data, such as the stress vs. strain response?’ A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, “Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes,” J. Mech. Phys. Solids, 59, pp. 2227–2237; Erratum 60, pp. 1753–1756 (2012)], the property-space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in the experimental data, such as
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to reflect a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved to be superior, since it is the most sensitive to the material properties.
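A hardening-softening-hardening flow strength with power-law strain-rate sensitivity, of the general kind used as the material input function described above, can be sketched as a simple piecewise relation; the functional form and parameter values below are illustrative assumptions rather than the calibrated input of the thesis:

```python
def flow_strength(eps_p, eps_rate, sigma0=1.0, h1=5.0, s1=0.15, s2=0.45,
                  soft=0.5, h2=8.0, m=0.05, eps_rate_ref=1e-3):
    """Illustrative hardening-softening-hardening flow strength.

    eps_p    : equivalent plastic strain
    eps_rate : plastic strain rate (> 0; power-law rate sensitivity, exponent m)
    Three regimes: linear hardening up to s1, linear softening between s1 and
    s2, then re-hardening beyond s2 (densification-like).
    """
    if eps_p < s1:                      # initial hardening
        g = sigma0 + h1 * eps_p
    elif eps_p < s2:                    # softening regime
        g = sigma0 + h1 * s1 - soft * (eps_p - s1)
    else:                               # re-hardening regime
        g = sigma0 + h1 * s1 - soft * (s2 - s1) + h2 * (eps_p - s2)
    return g * (eps_rate / eps_rate_ref) ** m
```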
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
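Schematically, the BROAD loop described above maintains a posterior over competing theories and greedily picks the test with the largest EC2 (equivalence-class edge-cutting) gain. The sketch below illustrates this in simplified form; predict(theory, test) stands in for each theory's predicted choice, and the noise handling is reduced to a single error-rate likelihood, both assumptions rather than the thesis's implementation:

```python
import numpy as np

def ec2_gain(prior, predictions):
    """Weight of 'edges' cut by a test in the noise-free EC2 objective: edges
    connect pairs of theories that predict different responses, each weighted
    by the product of the theories' posterior probabilities."""
    total = 0.5 * (prior.sum() ** 2 - (prior ** 2).sum())
    within = 0.0
    for resp in np.unique(predictions):
        p = prior[predictions == resp]
        within += 0.5 * (p.sum() ** 2 - (p ** 2).sum())
    return total - within

def broad_like_loop(prior, tests, predict, run_test, n_rounds=20, err=0.05):
    """Greedy adaptive testing: pick the test with the largest EC2 gain,
    observe the subject's choice, and update the posterior with a simple
    error-rate likelihood (err = probability of an inconsistent response)."""
    post = prior.copy()
    n_theories = len(post)
    for _ in range(n_rounds):
        gains = [ec2_gain(post, np.array([predict(th, t) for th in range(n_theories)]))
                 for t in tests]
        t = tests[int(np.argmax(gains))]
        resp = run_test(t)                       # subject's observed choice
        like = np.array([1.0 - err if predict(th, t) == resp else err
                         for th in range(n_theories)])
        post = post * like
        post /= post.sum()
    return post
```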
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
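For reference, the discount functions underlying these model classes are usually written as follows (using the conventional parameterizations; the fixed-cost variant is omitted):

```latex
\begin{align*}
\text{exponential:} \quad & D(t) = \delta^{t},\\
\text{hyperbolic:} \quad & D(t) = \frac{1}{1 + k t},\\
\text{quasi-hyperbolic:} \quad & D(t) = \begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases}\\
\text{generalized hyperbolic:} \quad & D(t) = \frac{1}{(1 + \alpha t)^{\beta/\alpha}}.
\end{align*}
```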
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
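The proof is not reproduced in the abstract. One standard illustration of how randomness in subjective time produces hyperbolic-type discounting (a related, though not identical, mechanism to the positive-dependence condition used here) is to average exponential discounting over a Gamma-distributed rate $\lambda$ with shape $k$ and scale $\theta$:

```latex
D(t) \;=\; \mathbb{E}_{\lambda}\!\left[e^{-\lambda t}\right]
      \;=\; \int_{0}^{\infty} e^{-\lambda t}\,
            \frac{\lambda^{k-1} e^{-\lambda/\theta}}{\Gamma(k)\,\theta^{k}}\,d\lambda
      \;=\; \frac{1}{(1 + \theta t)^{k}},
```

which decays hyperbolically rather than exponentially, and such non-exponential discounting is what generates temporal choice inconsistency.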
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
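A minimal sketch of the kind of reference-dependent (loss-averse) utility embedded in a multinomial logit discrete choice model described above; the functional form, the reference-price construction, and all parameter values are illustrative assumptions, not the estimated model:

```python
import numpy as np

def gain_loss_value(price, ref_price, eta=1.0, lam=2.25):
    """Reference-dependent price term: gains (price below reference) enter
    with weight eta, losses (price above reference) with weight eta * lam,
    where lam > 1 encodes loss aversion."""
    diff = ref_price - price
    return eta * diff if diff >= 0 else eta * lam * diff

def choice_probabilities(prices, ref_prices, base_utilities, beta_price=1.0):
    """Multinomial logit over items: utility = intrinsic value - price term
    + reference-dependent gain/loss term."""
    v = np.array([
        base_utilities[i] - beta_price * prices[i]
        + gain_loss_value(prices[i], ref_prices[i])
        for i in range(len(prices))
    ])
    expv = np.exp(v - v.max())
    return expv / expv.sum()
```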
In future work, BROAD could be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Depressive disorder (DD) is an independent cardiovascular risk factor associated with high morbidity and mortality. Recent evidence suggests the involvement of nitric oxide (NO), a potent vasodilator and inhibitor of platelet aggregation, in the pathogenesis of cardiovascular and psychiatric diseases. NO is synthesized through the conversion of the amino acid L-arginine into L-citrulline and NO by the enzyme NO synthase (NOS). This thesis addresses the role of the L-arginine-NO pathway in platelets of patients with DD and its association with platelet function and oxidative stress. For the behavioural analysis of depression in an animal model, the single-separation postnatal stress model (SMU) was used. The animals were divided into four groups: sedentary control group (GCS), exercise control group (GCE), sedentary SMU group (SMUS), and exercise SMU group (SMUE). The animals' physical training (TF) lasted 8 weeks, with 30-minute sessions at a training speed established by the maximal exercise test (TE). For the human study, 10 patients with DD (Hamilton score: 20±1; mean age: 38±4 years) were matched with 10 healthy individuals (mean age: 38±3 years). The human and animal studies were approved by the Research Ethics Committees (1436 - CEP/HUPE and CEUA/047/2010, respectively). In both humans and animals, the following were measured: L-arginine transport, cGMP concentration, activity of the NOS and superoxide dismutase (SOD) enzymes in platelets, and systemic cortisol. Experiments performed only in humans: expression of the NOS, arginase, and guanylate cyclase enzymes by Western blotting. Platelet aggregation was induced with collagen, and systemic levels of C-reactive protein, fibrinogen, and L-arginine were analysed. For the statistical analysis, three statistical tests were used to evaluate differences between the survival curves: Kaplan-Meier, and the Tarone-Ware and Peto-Prentice tests. In humans, L-arginine transport, NOS and SOD enzyme activity, and cGMP concentration in platelets, as well as plasma L-arginine concentrations, were reduced in the DD group compared with the control group. An increase in plasma fibrinogen levels was observed in DD. These results demonstrate inhibition of the L-arginine-NO-cGMP pathway and of the antioxidant enzyme SOD in patients with DD, without affecting platelet function. Regarding physical training in the animal model, initial differences in distance covered and running time in the maximal exercise test were found between the control groups and the SMU groups, with the latter showing lower TE values. After 8 weeks of training, a greater influx of L-arginine transport was observed in the SMUE group compared with the SMUS group. The differences observed in time and distance covered in the initial maximal exercise test between the control groups and the stress model were reversed after the 8 weeks of training, demonstrating the beneficial effect of physical exercise on cardiorespiratory capacity in models of depression.
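As a small aside on the survival-curve comparison mentioned in the statistical analysis, a pure-NumPy sketch of the Kaplan-Meier product-limit estimate is shown below; the weighted log-rank variants (Tarone-Ware, Peto-Prentice) are not reproduced:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate.

    durations : array of follow-up times
    events    : 1 if the event occurred at that time, 0 if censored
    Returns the distinct event times and the estimated survival curve.
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    times = np.unique(durations[events == 1])
    survival, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)
        d = np.sum((durations == t) & (events == 1))
        s *= 1.0 - d / at_risk
        survival.append(s)
    return times, np.array(survival)
```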
Abstract:
There is a constantly increasing collection of manufactured substances whose usefulness in feeding ducks is under investigation. The author examines the effects of some substances that had already been tested, as well as substances that had not hitherto been studied. The use of different supplements for late-autumn fattening is studied through various experiments.
Abstract:
The study of enzymatic activity is of great importance in the immunology of fungi. Indeed, knowledge of the biological activity of antigenic structures is important both for elucidating host-parasite relations and in the search for a taxonomic factor permitting differential diagnoses. The authors used Saprolegnia cultures to analyse the soluble antigenic fractions arising from the mycelium of cultures of the four species of Saprolegnia most frequently found in the parasitic state on fish: S. parasitica, S. ferax, S. delica, and S. diclina. The authors conclude that, in the study of saprolegniasis, the enzymatic approach provides new elements for examining the etiology of these fungi, as well as an indication of the seriousness of the biochemical modifications required for the change from saprophytism to parasitism.