Abstract:
An adhesive contact model between an elastic cylinder and an elastic half-space is studied in the present paper, in which an external pulling force with an arbitrary direction acts on the cylinder and the contact width is assumed to be asymmetric with respect to the structure. Solutions to the asymmetric model are obtained, and the effect of the asymmetric contact width on the whole pulling process is discussed. It is found that the smaller the absolute value of Dundurs' parameter β, or the larger the pulling angle θ, the more reasonable it is to approximate the asymmetric model by the symmetric one.
Abstract:
(PDF contains 83 pages.)
Abstract:
The effective stress principle has been applied efficiently to saturated soils in soil mechanics and geotechnical engineering practice; however, its applicability to unsaturated soils is still under debate. The appropriate selection of stress state variables is essential for the construction of constitutive models for unsaturated soils. Owing to the complexity of unsaturated soils, it is difficult to determine the deformation and strength behaviors of unsaturated soils uniquely in all situations with the previous single-effective-stress-variable theory or two-effective-stress-variable theory. In this paper, based on porous media theory, a specific expression of work is proposed, and the effective stress of unsaturated soils conjugate with the displacement of the soil skeleton is derived from it. The derived work and energy balance equations take the energy dissipation in unsaturated soils into account. According to these equations, all three generalized stresses and their conjugate strains affect the deformation of unsaturated soils. To account for these effects, a principle of generalized effective stress for describing the behavior of unsaturated soils is proposed. Under certain conditions, the proposed principle reduces to the previous effective stress theories with a single stress variable or two stress variables. This principle provides a helpful reference for the development of constitutive models for unsaturated soils.
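For reference, the single-variable theory alluded to above is commonly written in Bishop's form, while the two-variable theory works with net stress and matric suction jointly; the notation below is the standard textbook one, not necessarily that of the paper:

```latex
% Bishop's single effective stress; \chi \in [0,1] is the effective stress
% parameter (\chi = 1 recovers Terzaghi's saturated case):
\sigma' = (\sigma - u_a) + \chi \, (u_a - u_w)
% Two-stress-variable theory: net stress and matric suction used jointly:
\bar{\sigma} = \sigma - u_a, \qquad s = u_a - u_w
```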
Abstract:
This dissertation is concerned with the problem of determining the dynamic characteristics of complicated engineering systems and structures from the measurements made during dynamic tests or natural excitations. Particular attention is given to the identification and modeling of the behavior of structural dynamic systems in the nonlinear hysteretic response regime. Once a model for the system has been identified, it is intended to use this model to assess the condition of the system and to predict the response to future excitations.
A new identification methodology based upon a generalization of the method of modal identification for multi-degree-of-freedom dynamical systems subjected to base motion is developed. The situation considered herein is that in which only the base input and the response of a small number of degrees-of-freedom of the system are measured. In this method, called the generalized modal identification method, the response is separated into "modes" which are analogous to those of a linear system. Both parametric and nonparametric models can be employed to extract the unknown nature, hysteretic or nonhysteretic, of the generalized restoring force for each mode.
In this study, a simple four-term nonparametric model is used first to provide a nonhysteretic estimate of the nonlinear stiffness and energy dissipation behavior. To extract the hysteretic nature of nonlinear systems, a two-parameter distributed element model is then employed. This model exploits the results of the nonparametric identification as an initial estimate for the model parameters. This approach greatly improves the convergence of the subsequent optimization process.
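The four-term nonparametric step can be sketched as an ordinary least-squares fit of a restoring force against a small polynomial basis in displacement and velocity. The toy system, coefficients, and forcing below are invented for illustration; this is not the dissertation's implementation:

```python
import numpy as np

# Hypothetical 4-term nonparametric model of a SDOF restoring force:
#   r(x, v) ~ a1*x + a2*x**3 + a3*v + a4*v**3
# "True" system: a Duffing-type oscillator with invented coefficients.
m, a_true = 1.0, np.array([4.0, 0.8, 0.15, 0.02])

def restoring(x, v):
    return a_true @ np.array([x, x**3, v, v**3])

# Integrate m*x'' + r(x, x') = f(t) with a small semi-implicit Euler step.
dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
f = np.sin(1.8 * t)                      # external forcing
x, v = np.zeros_like(t), np.zeros_like(t)
for i in range(len(t) - 1):
    acc = (f[i] - restoring(x[i], v[i])) / m
    v[i + 1] = v[i] + dt * acc
    x[i + 1] = x[i] + dt * v[i + 1]

# "Measured" restoring force from the equation of motion, then a linear
# least-squares fit of the four unknown coefficients.
r_meas = f - m * np.gradient(v, dt)
basis = np.column_stack([x, x**3, v, v**3])
a_fit, *_ = np.linalg.lstsq(basis, r_meas, rcond=None)
print(np.round(a_fit, 3))
```

The fitted coefficients serve as the nonhysteretic estimate; in the thesis they would then seed the two-parameter distributed element model's optimization.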
The capability of the new method is verified using simulated response data from a three-degree-of-freedom system. The new method is also applied to the analysis of response data obtained from the U.S.-Japan cooperative pseudo-dynamic test of a full-scale six-story steel-frame structure.
The new system identification method described has been found to be both accurate and computationally efficient. It is believed that it will provide a useful tool for the analysis of structural response data.
Abstract:
The author has constructed a synthetic gene for α-lytic protease. Since the DNA sequence of the protein is not known, the gene was designed by using the reverse translation of α-lytic protease's amino acid sequence. Unique restriction sites are carefully sought in the degenerate DNA sequence to aid in future mutagenesis studies. The unique restriction sites are designed approximately 50 base pairs apart and their appropriate codons used in the DNA sequence. The codons used to construct the DNA sequence of α-lytic protease are preferred codons in E. coli or used in the production of β-lactamase. Codon usage is also distributed evenly to ensure that one particular codon is not heavily used. The gene is essentially constructed from the outside in. The gene is built in a stepwise fashion using plasmids as the vehicles for the α-lytic oligomers. The use of plasmids allows the replication and isolation of large quantities of the intermediates during gene synthesis. The α-lytic DNA is a double-stranded oligomer that has sufficient overhang and sticky ends to anneal correctly in the vector. After six steps of incorporating α-lytic DNA, the gene is completed and sequenced to ensure that the correct DNA sequence is present and that no mutations occurred in the structural gene.
β-lactamase is the other serine hydrolase studied in this thesis. The author used the class A RTEM-1 β-lactamase encoded on the plasmid pBR322 to investigate the role of the conserved threonine residue at position 71. Cassette mutagenesis was previously used to generate all possible amino acid substitutions at position 71. The work presented here describes the purification and kinetic characterization of a T71H mutant previously constructed by S. Schultz. The mutated gene was transferred into plasmid pJN for expression and induced with IPTG. The enzyme is purified by column chromatography and FPLC to homogeneity. Kinetic studies reveal that the mutant has lower k_(cat) values on benzylpenicillin, cephalothin and 6-aminopenicillanic acid but no changes in K_m except for cephalothin, which is approximately 4 times higher. The mutant did not change significantly in its pH profile compared to the wild-type enzyme. Also, the mutant is more sensitive to thermal denaturation than the wild-type enzyme. However, experimental evidence indicates that the probable generation of a positive charge at position 71 thermally stabilized the mutant.
Abstract:
In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a one-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single cell functional proteomic analysis finds broad applications in basic, translational and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, inflammatory bowel disease (IBD) mechanism and hematopoietic stem cell (HSC) biology.
To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high-dimensional readouts. This analysis evaluates rare cells and is versatile for various cells and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor, killer immunoglobulin-like receptor (KIR), has been identified as a risk factor for IBD. We compared the functional behavior of NK cells with differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and are thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knockout mouse models elucidates the responding signaling pathway.
In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts. They dominate the immune response. In the cancer immunotherapy, the strong cytotoxic and antitumor functions from transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs and CXCLs. The functions from these cells regulated disease-contributing cells and protected host tissues. Their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated the NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not be resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence that connected genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.
Abstract:
During inflammation and infection, hematopoietic stem and progenitor cells (HSPCs) are stimulated to proliferate and differentiate into mature immune cells, especially of the myeloid lineage. MicroRNA-146a (miR-146a) is a critical negative regulator of inflammation. Deletion of the gene encoding miR-146a (expressed in all blood cell types) produces effects that appear as dysregulated inflammatory hematopoiesis, leading to a decline in the number and quality of hematopoietic stem cells (HSCs), excessive myeloproliferation, and, ultimately, exhaustion of the HSCs and hematopoietic neoplasms. Six-week-old miR-146a-deficient mice are normal, with no effect on cell numbers, but by 4 months bone marrow hypercellularity can be seen, and by 8 months marrow exhaustion becomes evident. The ability of HSCs to replenish the entire hematopoietic repertoire in a myelo-ablated mouse also declines precipitously as miR-146a-deficient mice age. In the absence of miR-146a, LPS-mediated serial inflammatory stimulation accelerates the effects of aging. This chronic inflammatory stress on HSCs in miR-146a-deficient mice involves a molecular axis consisting of upregulation of the signaling protein TRAF6, leading to excessive activity of the transcription factor NF-κB and overproduction of the cytokine IL-6. At the cellular level, transplant studies show that the defects are attributable to both an intrinsic problem in the miR-146a-deficient HSCs and extrinsic effects of miR-146a-deficient lymphocytes and non-hematopoietic cells. This study has identified a microRNA, miR-146a, as a critical regulator of HSC homeostasis during chronic inflammatory challenge in mice and has provided a molecular connection between chronic inflammation and the development of bone marrow failure and myeloproliferative neoplasms. This may have implications for human hematopoietic malignancies, such as myelodysplastic syndrome, which frequently displays downregulated miR-146a expression.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit from measurements alone. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
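The core idea (the support of the inverse covariance matrix reveals the circuit's edges) can be sketched with a toy generative model: sample Gaussian node signals whose precision matrix equals the grounded conductance matrix of a small invented resistor network, then recover the edge set from the estimated precision matrix. With abundant samples a plain matrix inversion suffices; the graphical lasso of the thesis is for the sample-starved, sparse regime. The network and all numbers here are assumptions, not the thesis's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 4)]   # hypothetical circuit
n = 5
G = np.zeros((n, n))
for i, j in edges:                                  # unit conductances
    G[i, j] = G[j, i] = -1.0
    G[i, i] += 1.0
    G[j, j] += 1.0
G += np.eye(n)                                      # "grounding" keeps G invertible

# Node signals v ~ N(0, G^{-1}), so the true precision matrix is G itself.
cov = np.linalg.inv(G)
samples = rng.multivariate_normal(np.zeros(n), cov, size=200_000)
prec_hat = np.linalg.inv(np.cov(samples.T))

# Threshold the off-diagonal entries to read off the recovered topology.
recovered = {(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(prec_hat[i, j]) > 0.5}
print(sorted(recovered))
```

The recovered set matches the hypothetical edge list, while the (dense) correlation matrix would not exhibit this sparsity pattern.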
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
In noncooperative cost-sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents that choose the resource, is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare-sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
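The (unweighted) Shapley value underlying the GWSV rules above averages each player's marginal contribution over all orderings. A minimal sketch, on an invented coverage-style welfare function:

```python
from itertools import permutations

def shapley(players, welfare):
    """Average marginal contribution of each player over all orderings."""
    value = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = welfare(coalition)
            coalition = coalition | {p}
            value[p] += welfare(coalition) - before
    return {p: v / len(perms) for p, v in value.items()}

# Invented example: welfare of a coalition = number of elements it covers.
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {3}}

def w(S):
    covered = set()
    for p in S:
        covered |= cover[p]
    return float(len(covered))

phi = shapley(['a', 'b', 'c'], w)
print(phi)   # values sum to w({a,b,c}) = 3: budget balance
```

Marginal-contribution rules replace the average over orderings with the contribution to the grand coalition, which is cheaper to compute; this is the tractability trade-off mentioned above.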
Abstract:
Using a nonperturbative quantum scattering theory, the photoelectron angular distributions (PADs) from the multiphoton detachment of H⁻ ions in strong, linearly polarized infrared laser fields are obtained to interpret recent experimental observations. In our theoretical treatment, the PADs in n-photon detachment are determined by the nth-order generalized phased Bessel (GPB) functions X_n(Z_f, η). The advantage of using the GPB scenario to calculate PADs is its simplicity: a single special function (the GPB) without any mixing coefficients can express the PADs observed in recent experiments. Thus, the GPB scenario can be called a parameterless scenario.
Abstract:
The SCF ubiquitin ligase complex of budding yeast triggers DNA replication by catalyzing ubiquitination of the S phase CDK inhibitor SIC1. SCF is composed of several evolutionarily conserved proteins, including ySKP1, CDC53 (Cullin), and the F-box protein CDC4. We isolated hSKP1 in a two-hybrid screen with hCUL1, the human homologue of CDC53. We showed that hCUL1 associates with hSKP1 in vivo and directly interacts with hSKP1 and the human F-box protein SKP2 in vitro, forming an SCF-like particle. Moreover, hCUL1 complements the growth defect of yeast CDC53^(ts) mutants, associates with ubiquitination-promoting activity in human cell extracts, and can assemble into functional, chimeric ubiquitin ligase complexes with yeast SCF components. These data demonstrated that hCUL1 functions as part of an SCF ubiquitin ligase complex in human cells. However, purified human SCF complexes consisting of CUL1, SKP1, and SKP2 are inactive in vitro, suggesting that additional factors are required.
Subsequently, mammalian SCF ubiquitin ligases were shown to regulate various physiological processes by targeting important cellular regulators, like IκBα, β-catenin, and p27, for ubiquitin-dependent proteolysis by the 26S proteasome. Little, however, is known about the regulation of the various SCF complexes. By using sequential immunoaffinity purification and mass spectrometry, we identified proteins that interact with the human SCF components SKP2 and CUL1 in vivo. Among them we identified two additional SCF subunits: HRT1, present in all SCF complexes, and CKS1, which binds to SKP2 and is likely to be a subunit of SCF^(SKP2) complexes. Subsequent work by others demonstrated that these proteins are essential for SCF activity. We also discovered that the COP9 Signalosome (CSN), previously described in plants as a suppressor of photomorphogenesis, associates with CUL1 and other SCF subunits in vivo. This interaction is evolutionarily conserved and is also observed with other Cullins, suggesting that all Cullin-based ubiquitin ligases are regulated by CSN. CSN regulates Cullin neddylation, presumably through CSN5/JAB1, a stoichiometric Signalosome subunit and a putative deneddylating enzyme. This work sheds light onto an intricate connection that exists between signal transduction pathways and the protein degradation machinery inside the cell, and sets the stage for gaining further insights into the regulation of protein degradation.
Abstract:
A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The Boxcar used in this technique for the acquisition of the speckle intensity data integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings from the Boxcar enhances the signal-to-noise ratio by √m, because the repeated sampling and averaging keep the useful speckle signals stable, while the randomly varying photoelectric noise is suppressed by 1/√m. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged output of the Boxcar. The experimental results show that speckle signals are better recovered from contaminated signals, and the autocorrelation function with the secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique. (C) 2006 Elsevier Ltd. All rights reserved.
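The √m argument can be checked numerically: averaging m repeated samples of a constant signal corrupted by zero-mean noise shrinks the noise standard deviation by 1/√m, so the signal-to-noise ratio grows by √m. The signal level, noise level, and m below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma, m = 2.0, 0.5, 100
trials = 50_000

# One gated sample per trial vs. the average of m repeated samples per trial.
single = signal + sigma * rng.standard_normal(trials)
averaged = signal + sigma * rng.standard_normal((trials, m)).mean(axis=1)

snr_single = signal / single.std()
snr_avg = signal / averaged.std()
print(snr_avg / snr_single)      # close to sqrt(m) = 10
```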
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
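The EC2 (equivalence-class edge-cutting) objective can be sketched in a few lines: hypotheses are grouped into theory classes, edges join hypotheses in different classes with weight equal to the product of their priors, and a test's observed outcome discards inconsistent hypotheses, cutting their edges; the greedy rule picks the test with the largest expected cut. The hypotheses, classes, priors, and predictions below are invented for illustration, and this simplified sketch omits the noise handling of the actual method:

```python
import itertools

hyps = {                  # name: (theory class, prior, binary predictions per test)
    'h1': ('EV', 0.25, (0, 0, 1)),
    'h2': ('EV', 0.25, (0, 1, 1)),
    'h3': ('PT', 0.25, (1, 0, 0)),
    'h4': ('PT', 0.25, (1, 1, 0)),
}

def edge_weight(alive):
    """Total weight of edges between surviving hypotheses in different classes."""
    return sum(hyps[a][1] * hyps[b][1]
               for a, b in itertools.combinations(alive, 2)
               if hyps[a][0] != hyps[b][0])

def expected_cut(alive, t):
    """Expected edge weight cut by running test t, over its outcomes."""
    total, cut = edge_weight(alive), 0.0
    mass = sum(hyps[h][1] for h in alive)
    for outcome in (0, 1):
        keep = [h for h in alive if hyps[h][2][t] == outcome]
        p = sum(hyps[h][1] for h in keep) / mass
        cut += p * (total - edge_weight(keep))
    return cut

alive = list(hyps)
best = max(range(3), key=lambda t: expected_cut(alive, t))
print(best)
```

Here test 0 separates the two classes in one shot, so greedy EC2 selects it; adaptive submodularity is what lets this one-step greedy choice compete with the Bayes-optimal sequence.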
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
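The temporal choice inconsistency at stake can be demonstrated directly: a hyperbolic discounter reverses preference between a smaller-sooner and a larger-later reward as both are pushed into the future, while an exponential discounter never does. Reward sizes and discount parameters below are invented:

```python
def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def exponential(t, r=0.3):
    return (1.0 + r) ** -t

small, large = 10.0, 15.0
for disc in (hyperbolic, exponential):
    # $10 at day 1 vs $15 at day 3, then the same 2-day gap shifted out 30 days.
    prefers_small_now = small * disc(1) > large * disc(3)
    prefers_small_later = small * disc(31) > large * disc(33)
    print(disc.__name__, prefers_small_now, prefers_small_later)
```

Exponential discounting is stationary (only the gap matters), so both comparisons agree; under hyperbolic discounting the near choice favours the small reward but the distant one favours the large reward, which is the reversal the subjective-time model is meant to generate.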
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
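The loss-averse ingredient can be sketched with a standard prospect-theory value function evaluated relative to a reference price: losses (paying above the reference) are weighted more heavily than equal-sized gains. The Tversky-Kahneman parameter values below are conventional illustrations, not the estimates from this study:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x; lam > 1 encodes loss aversion."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

reference = 10.0                     # hypothetical reference price of the item
for price in (8.0, 10.0, 12.0):
    gain = reference - price         # paying less than expected is a "gain"
    print(price, round(pt_value(gain), 3))
```

A $2 discount and a $2 surcharge are equidistant from the reference, but the surcharge's disutility is more than twice the discount's utility; embedded in a discrete choice model, this asymmetry is what produces excess demand shifts around discounts.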
In future work, BROAD can be widely applicable for testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.