949 results for technology acceptance model (TAM)


Relevance:

30.00%

Publisher:

Abstract:

The first chapter of this thesis deals with automating data gathering for single-cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to experiments in both Chapters 4 and 5.

The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. Protein levels were first measured in 92 blood samples using the DNA-encoded antibody library platform, which allowed 35 different proteins to be measured per sample with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success or failure. The second, an ensemble model of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
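
The abstract does not give implementation details for the ensemble; as a rough, hedged illustration of the kind of model described (logistic regression, kNN, and random forest combined), a scikit-learn-style soft-voting ensemble might look like the sketch below. The data here are synthetic placeholders, not the thesis measurements.

# Illustrative sketch only: a voting ensemble of logistic regression, kNN,
# and random forest, scored by accuracy and ROC AUC on synthetic data that
# merely stands in for the five-protein panel described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=92, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression())),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of the three models
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))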

The third chapter details the development of a photocleavable conjugate that multiplexed cell-surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with a 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and a false positive rate of 0%.

The fourth chapter describes the use of T cell polyfunctionality measurements to predict whether a patient will respond to an adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and correlated with each patient's progression-free survival (PFS) time, as were 52 other parameters measured in the single-cell test. No statistically significant correlate has been identified, however, and more data are necessary to reach a conclusion.

Finally, the fifth chapter examines interactions between T cells and how they affect protein secretion. T cells in direct contact were observed to selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNF-α, and IFN-γ; IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n = 8). Using the single-cell data, the theoretical protein secretion frequency for two cells was calculated and compared to the observed rate of secretion for two cells not in contact and for two cells in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
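
The abstract does not state how the theoretical two-cell frequency was computed; one natural choice, assumed here purely for illustration, is to treat the two cells as statistically independent, so the expected frequency of observing secretion from a pair is 1 - (1 - p)^2 for a single-cell secretion frequency p.

# Illustrative sketch only, assuming independence of the two cells (an
# assumption not stated in the abstract): expected pair secretion frequency
# from a single-cell frequency, compared with placeholder observed values.
def expected_pair_frequency(p_single: float) -> float:
    """Probability that at least one of two independent cells secretes."""
    return 1.0 - (1.0 - p_single) ** 2

p_single = 0.30             # placeholder single-cell secretion frequency
observed_no_contact = 0.52  # placeholder, not thesis data
observed_contact = 0.80     # placeholder, not thesis data

expected = expected_pair_frequency(p_single)
print(f"expected (independent cells):   {expected:.2f}")
print(f"observed, cells not in contact: {observed_no_contact:.2f}")
print(f"observed, cells in contact:     {observed_contact:.2f}")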

Relevance:

30.00%

Publisher:

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that once took several days to create and verify now take several hours or less, greatly increasing both the productivity of the researcher and the level of confidence in the model being analyzed.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

To increase confidence in STEEL as an analysis system, and to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free-vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. They also showed that the STEEL analysis algorithm can converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, the comparisons were repeated in Perform, a software package more capable of conducting highly nonlinear analysis. These analyses again showed very strong agreement between the two packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free-vibration analyses could not be conducted for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, or the twenty-story moment frame. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
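
The abstract does not describe how stiffness and damping were extracted from the free-vibration runs; a standard textbook approach, sketched below only as an illustration (not the thesis procedure), estimates the damping ratio from the logarithmic decrement of successive displacement peaks and the stiffness from the damped period.

# Illustrative sketch only: damping ratio via logarithmic decrement and
# stiffness via the damped period, from a free-vibration displacement trace.
import numpy as np

def damping_and_stiffness(peaks, period, mass):
    """peaks: successive positive peak amplitudes; period: damped period (s);
    mass: effective modal mass (consistent units)."""
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))    # logarithmic decrement
    zeta = delta / np.sqrt(4.0 * np.pi**2 + delta**2)  # damping ratio
    omega_d = 2.0 * np.pi / period                     # damped frequency
    omega_n = omega_d / np.sqrt(1.0 - zeta**2)         # natural frequency
    k = mass * omega_n**2                              # elastic stiffness
    return zeta, k

# Placeholder numbers, purely for demonstration.
zeta, k = damping_and_stiffness(np.array([1.00, 0.88, 0.77, 0.68]),
                                period=0.9, mass=1.0e3)
print(f"damping ratio ~ {zeta:.3f}, stiffness ~ {k:.0f}")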

Following this, a final study was conducted on Hall's U20 structure [1], which was analyzed in all three software packages and the results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when analyzing structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis failed to converge once inelastic behavior began. However, for the small number of time steps during which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when fiber elements are used throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. Such problems could be alleviated by choosing a simpler material model.

Relevance:

30.00%

Publisher:

Abstract:

A series of meso-phenyloctamethylporphyrins covalently bonded at the 4'-phenyl position to quinones via rigid bicyclo[2.2.2]octane spacers was synthesized to study the dependence of the electron transfer reaction rate on solvent, distance, temperature, and energy gap. A general and convergent synthesis was developed based on the condensation of a,c-biladienes with masked quinone-spacer-benzaldehydes. Emission lifetimes were measured by picosecond fluorescence spectroscopy in seven solvents of varying polarity. Rate constants were determined to vary from 5.0 x 10^9 sec^-1 in N,N-dimethylformamide to 1.15 x 10^10 sec^-1 in benzene, rising at most by about a factor of three with decreasing solvent polarity. Experiments at low temperature in 2-MTHF glass (77 K) revealed fast, nearly temperature-independent electron transfer characterized by non-exponential fluorescence decays, in contrast to monophasic behavior in fluid solution at 298 K. This example evidently represents the first photosynthetic model system not based on proteins to display nearly temperature-independent electron transfer at high temperatures (nuclear tunneling). Low temperatures appear to freeze out the rotational motion of the chromophores, and the observed non-exponential fluorescence decays may be explained as the result of electron transfer from an ensemble of rotational conformations. The non-exponentiality demonstrates the sensitivity of the electron transfer rate to the precise magnitude of the electronic matrix element, which supports the expectation that electron transfer is nonadiabatic in this system. The addition of a second bicyclooctane moiety (15 Å vs. 18 Å edge-to-edge between porphyrin and quinone) reduces the transfer rate by a factor of at least 500-1500. Porphyrin-quinones with variously substituted quinones allowed an examination of the dependence of the electron transfer rate constant k_ET on reaction driving force. The classical trend of increasing rate with increasing exothermicity occurs from 0.7 eV ≤ |ΔG°'(R)| ≤ 1.0 eV until a maximum is reached (k_ET = 3 x 10^8 sec^-1 rising to 1.15 x 10^10 sec^-1 in acetonitrile). The rate remains insensitive to ΔG° for ~300 mV from 1.0 eV ≤ |ΔG°'(R)| ≤ 1.3 eV, and then decreases slightly in the most exothermic case studied (cyanoquinone, k_ET = 5 x 10^9 sec^-1).
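
The trend described here (rate rising with driving force, flattening, then dropping slightly in the most exothermic case) is the behavior expected from Marcus theory. The abstract does not quote the rate expression, but for reference the standard semiclassical nonadiabatic form is

k_{ET} = \frac{2\pi}{\hbar}\,|H_{AB}|^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\;\exp\!\left[-\frac{(\Delta G^{0} + \lambda)^{2}}{4\lambda k_{B}T}\right],

where H_AB is the electronic matrix element and λ the reorganization energy; the rate maximum occurs near -ΔG° ≈ λ, beyond which lies the inverted region.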

Relevance:

30.00%

Publisher:

Abstract:

Due to their high specific strength and low density, magnesium and magnesium-based alloys have gained great technological importance in recent years. However, their underlying hexagonal crystal structure gives Mg and its alloys a complex mechanical behavior because of their comparatively small number of energetically favorable slip systems. Besides the commonly studied slip mechanisms, general deformation can also be accomplished through the additional mechanism of deformation-induced twinning. The main aim of this thesis research is to develop an efficient continuum model to understand and ultimately predict the material response resulting from the interaction between these two mechanisms.

The constitutive model presented here is based on variational constitutive updates of plastic slips and twin volume fractions and accounts for the associated lattice reorientation mechanisms. The model is applied to single- and polycrystalline pure magnesium. We outline the finite-deformation plasticity model combining basal, pyramidal, and prismatic dislocation activity, as well as a convexification-based approach to deformation twinning. A comparison with experimental data from single-crystal tension-compression experiments validates the model and serves for parameter identification. The extension to polycrystals via both Taylor-type modeling and finite element simulations shows a characteristic stress-strain response that agrees well with experimental observations for polycrystalline magnesium. The presented continuum model does not aim to represent the full details of individual twin-dislocation interactions, yet it is efficient enough to allow for finite element simulations while qualitatively capturing the underlying microstructural deformation mechanisms.
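
As a schematic of what a variational constitutive update of this type involves (the specific energy and dissipation functions used in the thesis are not given in the abstract), the internal variables at each time step follow from minimizing an incremental potential:

(\gamma^{\alpha}_{n+1}, \lambda_{n+1}) = \arg\min_{\gamma^{\alpha},\,\lambda}\ \Big[\, W\big(F_{n+1}, \gamma^{\alpha}, \lambda\big) - W_{n} + \Delta t\, \Delta^{*}\!\Big(\tfrac{\gamma^{\alpha} - \gamma^{\alpha}_{n}}{\Delta t}, \tfrac{\lambda - \lambda_{n}}{\Delta t}\Big) \Big],

where γ^α are the plastic slips on the individual slip systems, λ the twin volume fraction, W the stored energy density, and Δ* a dissipation potential; the stresses then follow by differentiating the minimized potential with respect to the deformation gradient F_{n+1}.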

Relevance:

30.00%

Publisher:

Abstract:

A mathematical model is proposed in this thesis for the control mechanism of free fatty acid-glucose metabolism in healthy individuals under resting conditions. The objective is to explain in a consistent manner clinical laboratory observations such as glucose, insulin, and free fatty acid responses to intravenous injection of glucose, insulin, etc. Only responses up to about two hours from the beginning of infusion are considered. The model is an extension of the one for glucose homeostasis proposed by Charette, Kadish and Sridhar (Modeling and Control Aspects of Glucose Homeostasis, Mathematical Biosciences, 1969). It is based on a systems approach and agrees with current theories of glucose and free fatty acid metabolism. The description is in terms of ordinary differential equations. Validation of the model is based on the clinical laboratory data currently available. Finally, procedures are suggested for systematically identifying the parameters associated with the free fatty acid portion of the model.
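
The abstract does not reproduce the equations; purely as an illustration of the coupled ordinary-differential-equation structure such a compartmental model takes (the state variables and linear couplings below are schematic placeholders, not the thesis's equations):

\dot{G} = -a_{1} G - a_{2} I + u_{G}(t), \qquad \dot{I} = b_{1} G - b_{2} I + u_{I}(t), \qquad \dot{F} = -c_{1} I - c_{2} F + c_{3},

with G, I, and F the deviations of blood glucose, insulin, and free fatty acid concentrations from their resting values, u_G and u_I the intravenous infusion inputs, and a_i, b_i, c_i rate parameters to be identified from the response data.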

Relevance:

30.00%

Publisher:

Abstract:

Part I

Present experimental data on nucleon-antinucleon scattering allow a study of the possibility of a phase transition in a nucleon-antinucleon gas at high temperature. Estimates can be made of the general behavior of the elastic phase shifts without resorting to theoretical derivation. A phase transition which separates nucleons from antinucleons is found at about 280 MeV in the approximation of the second virial coefficient to the free energy of the gas.
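
For reference (the abstract does not write it out), the Beth-Uhlenbeck result expresses the interaction part of the second virial coefficient directly in terms of bound-state energies and the elastic phase shifts δ_l(k); schematically, and up to degeneracy factors for the spin-isospin channels,

b_{2} - b_{2}^{(0)} \;\propto\; \sum_{B} e^{-\beta E_{B}} \;+\; \frac{1}{\pi} \sum_{l} (2l+1) \int_{0}^{\infty} dk\, \frac{d\delta_{l}(k)}{dk}\, e^{-\beta \hbar^{2} k^{2}/m},

so the estimated phase shifts feed directly into the free energy of the gas in this approximation.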

Part II

The parton model is used to derive scaling laws for the hadrons observed in deep inelastic electron-nucleon scattering which lie in the fragmentation region of the virtual photon. Scaling relations are obtained in the Bjorken and Regge regions. It is proposed that the distribution functions become independent of both q^2 and ν where the Bjorken and Regge regions overlap. The quark density functions are discussed in the limit x→1 for the nucleon octet and the pseudoscalar mesons. Under certain plausible assumptions it is found that only one or two of the six types of quarks and antiquarks have an appreciable density function in the limit x→1. This has implications for the quark fragmentation functions near the large-momentum boundary of their fragmentation region. These results are used to propose a method of measuring the proton and neutron quark density functions for all x by making measurements on inclusively produced hadrons in electroproduction only. Implications are also discussed for the hadrons produced in electron-positron annihilation.
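
For orientation (these standard definitions are not spelled out in the abstract), the kinematic variables involved are

x = \frac{-q^{2}}{2M\nu}, \qquad \nu = \frac{p \cdot q}{M}, \qquad z = \frac{p \cdot p_{h}}{p \cdot q},

with q the virtual-photon four-momentum, p the target nucleon momentum, p_h the observed hadron momentum, and M the nucleon mass; the Bjorken region is the limit -q^2, ν → ∞ at fixed x, while the Regge region corresponds to fixed q^2 and large ν (small x).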

Relevance:

30.00%

Publisher:

Abstract:

A simple model potential is used to calculate Rydberg series for the molecules nitrogen, oxygen, nitric oxide, carbon monoxide, carbon dioxide, nitrogen dioxide, nitrous oxide, acetylene, formaldehyde, formic acid, diazomethane, ketene, ethylene, allene, acetaldehyde, propyne, acrolein, dimethyl ether, 1,3-butadiene, 2-butene, and benzene. The model potential for a molecule is taken as the sum of atomic potentials, which are calibrated to atomic data and contain no further parameters. Our results agree with experimentally measured values to within 5-10% in all cases. The results of these calculations are applied to many unresolved problems connected with the above molecules. Some of the more notable of these problems are the reassignment of states in carbon monoxide, the first ionization potential of nitrogen dioxide, the interpretation of the V state in ethylene and of the mystery bands in substituted ethylenes, the identification of the R and R' series in benzene, and the determination of the orbital scheme of benzene from electron impact data.
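
For context (the formula itself is standard and not restated in the abstract), the term values of a Rydberg series calculated from such a model potential follow the familiar form

E_{n} = \mathrm{IP} - \frac{R}{(n - \delta)^{2}},

where IP is the ionization potential, R the Rydberg constant (about 13.6 eV), n the principal quantum number, and δ the quantum defect characteristic of the series; the model potential fixes the quantum defects and hence the predicted series.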

Relevance:

30.00%

Publisher:

Abstract:

A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density ρ(m) ~ c m^-3 e^(β₀m) at high masses.

In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.

A statistical model for hadron reactions is then put forward, analogous to the Bohr compound-nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of N̄N annihilation at rest with overall success: it explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of β₀ required in all these applications is consistently around (120 MeV)^-1, corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
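
A brief reminder of why β₀ acts as a limiting (Hagedorn) temperature, though the abstract does not spell this out: with the level density above, the partition function of an ideal gas of such resonances behaves at large masses as

Z(T) \sim \int^{\infty} dm\, \rho(m)\, m^{3/2} e^{-m/T} \sim \int^{\infty} dm\, m^{-3/2}\, e^{(\beta_{0} - 1/T)\, m},

which diverges for T > 1/β₀ ≈ 120 MeV, so the temperature of the hadron gas cannot rise above this value.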

Relevance:

30.00%

Publisher:

Abstract:

A general solution is presented for water waves generated by an arbitrary movement of the bed (in space and time) in a two-dimensional fluid domain with a uniform depth. The integral solution which is developed is based on a linearized approximation to the complete (nonlinear) set of governing equations. The general solution is evaluated for the specific case of a uniform upthrust or downthrow of a block section of the bed; two time-displacement histories of the bed movement are considered.
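
For reference, the linearized problem referred to here (written schematically; the abstract does not restate it) consists of a velocity potential φ satisfying

\nabla^{2}\phi = 0 \ \ (-h \le z \le 0), \qquad \phi_{tt} + g\,\phi_{z} = 0 \ \ (z = 0), \qquad \phi_{z} = \zeta_{t}(x, t) \ \ (z = -h),

where h is the uniform depth, g the gravitational acceleration, and ζ(x, t) the prescribed bed displacement; the free-surface elevation follows from η = -φ_t/g evaluated at z = 0.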

An integral solution (based on a linear theory) is also developed for a three-dimensional fluid domain of uniform depth for a class of bed movements which are axially symmetric. The integral solution is evaluated for the specific case of a block upthrust or downthrow of a section of the bed, circular in planform, with a time-displacement history identical to one of the motions used in the two-dimensional model.

Since the linear solutions are developed from a linearized approximation of the complete nonlinear description of wave behavior, the applicability of these solutions is investigated. Two types of nonlinear effects are found which limit the applicability of the linear theory: (1) large nonlinear effects which occur in the region of generation during the bed movement, and (2) the gradual growth of nonlinear effects during wave propagation.

A model of wave behavior which includes, in an approximate manner, both linear and nonlinear effects is presented for computing wave profiles after the linear theory has become invalid due to the growth of nonlinearities during wave propagation.

An experimental program has been conducted to confirm both the linear model for the two-dimensional fluid domain and the strategy suggested for determining wave profiles during propagation after the linear theory becomes invalid. The effect of a more general time-displacement history of the moving bed than those employed in the theoretical models is also investigated experimentally.

The linear theory is found to accurately approximate the wave behavior in the region of generation whenever the total displacement of the bed is much less than the water depth. Curves which predict gross features of the lead wave propagating from the region of generation, once the values of certain nondimensional parameters characterizing the generation process are known, are developed and confirmed by the experiments. For example, the maximum amplitude of the lead wave propagating from the region of generation has been found never to exceed approximately one-half of the total bed displacement. The gross features of the tsunami resulting from the Alaskan earthquake of 27 March 1964 can be estimated from the results of this study.

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in our knowledge of the genetic structure of human caliciviruses (HuCVs) and small round-structured viruses (SRSVs) have led to the development of polymerase chain reaction (PCR)-based molecular tests specific for these viruses. Such methods have been developed to detect a number of human pathogenic viruses in environmental samples including water, sewage, and shellfish. HuCVs and SRSVs are not culturable, and no animal model is currently available; there is therefore no convenient method of preparing viruses for study or for reagent production. One problem facing those attempting to use PCR-based methods for the detection of HuCVs and SRSVs is the lack of a suitable positive control substrate. This is particularly important when screening complex samples in which the levels of inhibitors present may significantly interfere with amplification. Regions within the RNA polymerase region of two genetically distinct human caliciviruses have been amplified and used to produce recombinant baculoviruses which express RNA corresponding to the calicivirus polymerase. This RNA is being investigated as a positive control substrate for PCR testing, using current diagnostic primer sets. Recombinant baculovirus technology will enable efficient and cost-effective production of large quantities of positive control RNA with a specific known genotype. We consider the development of these systems essential for successful screening and monitoring applications.

Relevance:

30.00%

Publisher:

Abstract:

Based on the 2 x 2 (electric field) cross-spectral density matrix, a model for an electromagnetic J0-correlated Schell-model beam is given that generalizes the scalar J0-correlated Schell-model beam. The conditions that the matrix must satisfy for the source to generate an electromagnetic J0-correlated Schell-model beam are obtained; the condition for the source to generate a scalar J0-correlated Schell-model beam can be regarded as a special case. (C) 2008 Optical Society of America
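
As a pointer to what the 2 x 2 formalism looks like (the paper's specific conditions are not reproduced in this abstract), the elements of the electromagnetic cross-spectral density matrix of a Schell-model source take the form

W_{ij}(\mathbf{r}_{1}, \mathbf{r}_{2}; \omega) = \sqrt{S_{i}(\mathbf{r}_{1}, \omega)\, S_{j}(\mathbf{r}_{2}, \omega)}\ \mu_{ij}(\mathbf{r}_{2} - \mathbf{r}_{1}; \omega), \qquad i, j \in \{x, y\},

and in the J0-correlated case the correlation coefficients are proportional to the zeroth-order Bessel function, μ_ij ∝ J_0(β_ij |r_2 - r_1|); the conditions referred to constrain the spectral densities S_i and these coefficients so that W is a genuine (nonnegative-definite) cross-spectral density matrix.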