949 results for technology acceptance model (TAM)


Abstract:

The works presented in this thesis explore a variety of extensions of the standard model of particle physics motivated by baryon number (B), lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for treating these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics that violates B or L. In particular, there are strict limits on the lifetime of the proton, whose decay would violate baryon number by one unit and lepton number by an odd number of units.

The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated but the proton does not decay as a result. The second paper extends this analysis to models in which baryon number is conserved but lepton flavor violation is present. Special attention is given to μ-to-e conversion and μ → eγ, processes that are constrained by existing experimental limits and relevant to future experiments.

The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.

Abstract:

Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles are expected to participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and thus be used, in aggregate, to compensate for the random fluctuations in renewable generation.

In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods of high renewable generation. The algorithm is model predictive in nature: at every time step, it minimizes the expected variance-to-go given updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control; these results show that the typical performance of the algorithm is tightly concentrated around its average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
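
A minimal sketch of this receding-horizon idea, in Python with cvxpy: at each step a convex program schedules deferrable power to flatten the predicted aggregate load, only the first increment is applied, and the problem is re-solved with refreshed forecasts. The horizon, forecasts, and load parameters below are illustrative, not the thesis's algorithm or data.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    T = 24                                  # scheduling horizon (hours)
    base = 10 + 2 * rng.standard_normal(T)  # predicted base load minus renewables
    n_loads, E, p_max = 5, 8.0, 3.0         # loads, energy need, rate limit (kept feasible)

    def plan(base_pred, energy_left):
        """One MPC step: schedule deferrable power to minimize aggregate-load variance."""
        P = cp.Variable((n_loads, len(base_pred)), nonneg=True)
        agg = base_pred + cp.sum(P, axis=0)
        mean = cp.sum(agg) / len(base_pred)
        cons = [P <= p_max, cp.sum(P, axis=1) == energy_left]
        cp.Problem(cp.Minimize(cp.sum_squares(agg - mean)), cons).solve()
        return P.value

    # Receding horizon: apply the first step, then re-plan with updated predictions.
    energy_left = np.full(n_loads, E)
    for t in range(T):
        pred = base[t:] + 0.5 * rng.standard_normal(T - t)  # refreshed forecast
        p_now = plan(pred, energy_left)[:, 0]
        energy_left = np.maximum(energy_left - p_now, 0.0)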

Abstract:

In this work, the author presents a method called Convex Model Predictive Control (CMPC) for controlling systems whose states evolve on the rotation groups SO(n) for n = 2, 3. This is done without charts or local linearization; instead, the optimization operates over the orbitope (convex hull) of the rotation matrices. The result is a novel model predictive control (MPC) scheme free of the drawbacks of conventional linearization techniques, such as long computation times and local minima. Particular emphasis is placed on aeronautical and vehicular systems, where the method eliminates many of the trigonometric terms in these systems' state-space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
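
A toy Python/cvxpy illustration of the orbitope idea for n = 2, using the known fact that the convex hull of SO(2) is the set of matrices [[a, -b], [b, a]] with a^2 + b^2 <= 1: a short-horizon rotation trajectory is planned by convex optimization, with no angle charts or linearization. The horizon, weights, and target are illustrative; this is not the paper's CMPC formulation.

    import numpy as np
    import cvxpy as cp

    theta_goal = 2.0                        # target heading (rad)
    N = 10                                  # horizon length
    a = cp.Variable(N)                      # cos-like coordinate of each step
    b = cp.Variable(N)                      # sin-like coordinate of each step

    cons = [a[0] == 1.0, b[0] == 0.0]       # start at the identity rotation
    for k in range(N):
        cons.append(cp.square(a[k]) + cp.square(b[k]) <= 1.0)  # orbitope of SO(2)

    smooth = cp.sum_squares(cp.diff(a)) + cp.sum_squares(cp.diff(b))  # turn-rate penalty
    track = cp.square(a[N - 1] - np.cos(theta_goal)) + cp.square(b[N - 1] - np.sin(theta_goal))
    cp.Problem(cp.Minimize(track + 0.1 * smooth), cons).solve()

    # The iterates stay inside the disk; normalizing (a, b) recovers SO(2) elements.
    headings = np.arctan2(b.value, a.value)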

Abstract:

In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations relative to a background-only hypothesis. It is therefore desirable to know the physical shape of the background mass distribution, as the use of an improper function can bias the observed limits. Using an information-theoretic (I-T) approach to valid inference, we apply the Akaike Information Criterion (AIC) as a measure of a fit model's separation from the data. We then implement a multi-model inference ranking method to build the fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
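
The ranking step can be sketched as follows: fit each candidate background shape to the mass spectrum by unbinned maximum likelihood and compare AIC = 2k - 2 ln(L_max), where k is the number of shape parameters. The functional forms, mass window, and toy spectrum below are illustrative, not the CMS analysis.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    m = rng.uniform(100, 180, size=20000)                    # toy diphoton masses (GeV)
    m = m[rng.random(m.size) < np.exp(-(m - 100) / 40.0)]    # keep a falling spectrum

    def nll(pdf, params):
        p = pdf(m, *params)
        if not np.all(np.isfinite(p)) or np.any(p <= 0):
            return np.inf
        return -np.sum(np.log(p))

    def expo(x, lam):        # exponential, normalized on [100, 180]
        return lam * np.exp(-lam * (x - 100)) / (1 - np.exp(-80 * lam))

    def powlaw(x, k):        # power law, normalized on [100, 180]
        norm = (180 ** (1 - k) - 100 ** (1 - k)) / (1 - k)
        return x ** (-k) / norm

    for name, pdf, p0 in [("exponential", expo, [0.02]), ("power law", powlaw, [3.0])]:
        fit = minimize(lambda p: nll(pdf, p), p0, method="Nelder-Mead")
        aic = 2 * len(p0) + 2 * fit.fun                      # AIC = 2k - 2 ln L_max
        print(f"{name}: AIC = {aic:.1f}")                    # smaller AIC ranks higher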

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining the various emission levels; it is minimized subject to constraints requiring that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the base 1975 level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
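
A minimal sketch of the linear-programming step with scipy, using made-up control options, removal rates, and costs; only the base 1975 emission levels are taken from the text.

    import numpy as np
    from scipy.optimize import linprog

    # x_i = fraction of control option i applied (used cars, aircraft, stationary sources).
    cost = np.array([120.0, 40.0, 90.0])      # illustrative annualized cost, $M/yr at full use
    rhc_cut = np.array([250.0, 30.0, 130.0])  # illustrative tons/day RHC removed at full use
    nox_cut = np.array([80.0, 10.0, 240.0])   # illustrative tons/day NOx removed at full use
    base_rhc, base_nox = 670.0, 790.0         # base 1975 levels from the text (tons/day)
    target_rhc, target_nox = 400.0, 600.0     # example emission targets

    # Minimize cost subject to: base - cut @ x <= target, 0 <= x <= 1.
    res = linprog(cost,
                  A_ub=np.vstack([-rhc_cut, -nox_cut]),
                  b_ub=np.array([target_rhc - base_rhc, target_nox - base_nox]),
                  bounds=[(0, 1)] * 3)
    print(res.x, res.fun)                     # optimal control levels and minimum cost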

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects that employ optical channels have negligible frequency-dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques encounter several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique for addressing these design challenges in gigahertz clocking; however, its small locking range has been a major obstacle to its widespread adoption.

In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. A phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. The method requires neither a phase-frequency detector nor a loop filter to achieve phase lock. A mathematical analysis of the system is presented and an expression for the new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded-clock receivers.
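
For scale, the classic Adler estimate for a simple injection-locked LC oscillator, Δf ≈ f0/(2Q) · I_inj/I_osc, shows why plain injection locking is narrowband compared with the 25% range measured here. The Q and current ratio below are assumed values; this is not the dissertation's derived expression.

    import numpy as np

    f0 = 15.3e9      # free-running frequency (Hz), mid-band of the measured range
    Q = 8.0          # assumed tank quality factor (illustrative)
    ratio = 0.25     # assumed injected-to-oscillation current ratio (illustrative)

    df = f0 / (2 * Q) * ratio                        # one-sided Adler locking range
    print(f"lock range ~ {2 * df / 1e9:.2f} GHz around {f0 / 1e9:.1f} GHz")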

To improve the locking range of an injection-locked ring oscillator, a quadrature locked loop (QLL) is introduced. The inherent dynamics of an injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7–7.4 GHz) to 90% (4–11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter-rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps achieve a reliable data rate of 16–32 Gb/s, and adaptive body biasing helps maintain an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, enabling low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modeling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimal for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimal equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
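
A toy Hammerstein-style stand-in for a non-linear optical transmitter (a saturating static nonlinearity followed by a one-pole low-pass) illustrates the point: purely linear FIR pre-emphasis leaves a residual level error, while pre-distorting through the inverse nonlinearity first removes it. All parameters are illustrative; this is not the thesis's VCSEL model.

    import numpy as np

    fs, f3db = 40e9, 8e9                          # sample rate and assumed optical bandwidth
    alpha = 1 - np.exp(-2 * np.pi * f3db / fs)    # one-pole low-pass coefficient

    def vcsel(drive):
        """Toy VCSEL: memoryless tanh compression, then first-order dynamics."""
        power = np.tanh(drive)
        out = np.zeros_like(power)
        for n in range(1, len(power)):
            out[n] = out[n - 1] + alpha * (power[n] - out[n - 1])
        return out

    rng = np.random.default_rng(2)
    target = rng.integers(0, 2, 128) * 1.2 - 0.6              # desired optical levels
    pre = np.convolve(target, [1.5, -0.5], mode="same")       # linear FIR pre-emphasis

    err_linear = np.std(vcsel(pre) - target)                  # FIR alone
    drive_nl = np.arctanh(np.clip(pre, -0.99, 0.99))          # invert the tanh first
    err_nonlin = np.std(vcsel(drive_nl) - target)             # FIR + static pre-distortion
    print(err_linear, err_nonlin)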

Abstract:

In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process and for the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the tree-level HZZ coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; both are compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that, with 100–400 fb⁻¹, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
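
The flavor of an unbinned coupling extraction can be sketched in one dimension: draw events from an admixture of a CP-even-like and a CP-odd-like angular distribution, then scan the unbinned negative log-likelihood over the mixing fraction. The pdfs and admixture below are illustrative stand-ins for the 8-dimensional golden-channel likelihood.

    import numpy as np

    rng = np.random.default_rng(3)
    p_even = lambda c: 3.0 / 8.0 * (1.0 + c ** 2)   # toy SM-like pdf on cos(theta)
    p_odd = lambda c: 3.0 / 4.0 * (1.0 - c ** 2)    # toy CP-odd-like pdf

    # Generate pseudo-data at a small true admixture via rejection sampling.
    f_true, data = 0.15, []
    while len(data) < 2000:
        c = rng.uniform(-1, 1)
        if rng.random() * 0.8 < (1 - f_true) * p_even(c) + f_true * p_odd(c):
            data.append(c)                           # 0.8 bounds the mixture pdf
    data = np.array(data)

    # Unbinned negative log-likelihood, scanned over the mixing fraction f.
    f_grid = np.linspace(0.0, 0.5, 201)
    nll = [-np.sum(np.log((1 - f) * p_even(data) + f * p_odd(data))) for f in f_grid]
    f_hat = f_grid[int(np.argmin(nll))]              # maximum-likelihood mixing fraction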

Abstract:

The rapid rise in residential photovoltaic (PV) adoption over the past half decade has created a need in the electricity industry for a widely accessible model that estimates PV adoption under different combinations of business and policy decisions. This work analyzes historical adoption patterns and finds fiscal savings to be the single most important factor in PV adoption, with significantly greater predictive power than all other socioeconomic factors, including income and education. Based on these findings, we create an application, hosted on Google App Engine (GAE), that allows stakeholders including policymakers, power system researchers, and regulators to study the complex and coupled relationship between PV adoption, utility economics, and grid sustainability. The application allows users to experiment with different customer demographics, tier structures, and subsidies, letting them tailor it to the geographic region they are studying. The study then demonstrates the types of analyses possible with the application by examining the relative impact of different policies regarding tier structures, fixed charges, and PV prices on PV adoption.
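
A hedged sketch of the predictive-power comparison on synthetic stand-in data, using an ordinary logistic regression; the thesis's dataset and application internals are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 5000
    savings = rng.normal(0, 1, n)     # standardized annual fiscal savings
    income = rng.normal(0, 1, n)      # standardized household income
    education = rng.normal(0, 1, n)   # standardized education index

    # Synthetic adoption decisions in which savings dominates by construction.
    logit = 1.5 * savings + 0.2 * income + 0.1 * education - 2.0
    adopt = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([savings, income, education])
    model = LogisticRegression().fit(X, adopt)
    # With standardized inputs, coefficient magnitudes crudely rank predictive power.
    print(dict(zip(["savings", "income", "education"], model.coef_[0])))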

Abstract:

Understanding friction and adhesion in static and sliding contact of surfaces is important in numerous physical phenomena and technological applications. Most surfaces are rough at the microscale, so the real area of contact is only a fraction of the nominal area. The macroscopic frictional and adhesive response is determined by the collective behavior of a population of evolving and interacting microscopic contacts, and this collective behavior can be very different from that of individual contacts. It is thus important to understand how the macroscopic response emerges from the microscopic one.

In this thesis, we develop a theoretical and computational framework to study this collective behavior. Our philosophy is to assume a simple behavior for a single asperity and study the collective response of an ensemble. This work bridges existing well-developed studies of single asperities with the phenomenological laws that describe the macroscopic rate-and-state behavior of frictional interfaces. We find that many aspects of the macroscopic behavior are robust with respect to the microscopic response, which explains why qualitatively similar frictional features are seen across a diverse range of materials.

We first show that the collective response of an ensemble of one-dimensional independent viscoelastic elements interacting through a mean field reproduces many qualitative features of static and sliding friction evolution. The resulting macroscopic behavior differs from the microscopic one: for example, even if each contact is velocity-strengthening, the macroscopic behavior can be velocity-weakening. The framework is then extended to incorporate three-dimensional rough surfaces, long-range elastic interactions between contacts, and time-dependent material behaviors such as viscoelasticity and viscoplasticity. Interestingly, the mean-field behavior dominates: the elastic interactions, though quantitatively important, do not change the qualitative macroscopic response. Finally, we examine the effect of adhesion on the frictional response and develop a force threshold model for adhesion and mode I interfacial cracks.
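
A minimal sketch of the ensemble philosophy: give each micro-contact a trivially simple stick-stretch-detach law with a random strength, and watch a smooth, steady macroscopic friction force emerge from many discrete microscopic events. The element law and parameters are illustrative, not the thesis's viscoelastic model.

    import numpy as np

    rng = np.random.default_rng(5)
    N, k, v, dt, steps = 10000, 1.0, 1.0, 0.01, 2000
    stretch = np.zeros(N)                    # elastic stretch of each attached contact
    strength = rng.weibull(2.0, N)           # random detachment thresholds

    force = []
    for _ in range(steps):
        stretch += v * dt                    # attached contacts follow the slider
        broken = k * stretch > strength      # contacts loaded past their strength detach...
        stretch[broken] = 0.0                # ...and immediately reattach unstretched
        strength[broken] = rng.weibull(2.0, int(broken.sum()))
        force.append(k * stretch.mean())     # macroscopic friction force per contact

    # force[] settles to a steady value: individual stick-slip events average out.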

Abstract:

The molecular inputs necessary for cell behavior are vital to our understanding of development and disease. Proper cell behavior is necessary for processes ranging from creating one’s face (neural crest migration) to spreading cancer from one tissue to another (invasive metastatic cancers). Identifying the genes and tissues involved in cell behavior not only increases our understanding of biology but also has the potential to create targeted therapies in diseases hallmarked by aberrant cell behavior.

A well-characterized model system is key to determining the molecular and spatial inputs necessary for cell behavior. In this work I present the C. elegans uterine seam cell (utse) as an ideal model for studying cell outgrowth and shape change. The utse is an H-shaped cell within the hermaphrodite uterus that functions in attaching the uterus to the body wall. Over the L4 larval stage, the utse grows bidirectionally along the anterior-posterior axis, changing from an ellipsoidal shape to an elongated H-shape. Spatially, the utse requires the presence of the uterine toroid cells, sex muscles, and the anchor cell nucleus in order to properly grow outward. Several gene families are involved in utse development, including Trio, Nav, Rab GTPases, and Arp2/3, as well as 54 other genes identified in a candidate RNAi screen. The utse can also be used as a model system for studying metastatic cancer. Meprin proteases are involved in promoting the invasiveness of metastatic cancers, and the meprin-like genes nas-21, nas-22, and toh-1 act similarly within the utse. Studying nas-21 activity has also led to the discovery of novel upstream inhibitors and activators, as well as targets of nas-21, some of which have been shown to affect meprin activity. This illustrates that the utse can serve as an in vivo model for learning more about meprins, as well as various other proteins involved in metastasis.

Abstract:

This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.
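
For contrast with these structure-preserving schemes, a conventional (non-variational) pseudo-spectral step for the 2D incompressible vorticity equation is sketched below; it is simple, but it conserves neither energy nor circulation exactly, which is the gap the integrators above address. Grid size, time step, and initial data are illustrative, and dealiasing is omitted.

    import numpy as np

    n, dt = 64, 0.005
    k = np.fft.fftfreq(n, 1.0 / n)                 # integer wavenumbers on [0, 2*pi)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                 # avoid dividing the mean mode by zero

    def rhs(w_hat):
        """Spectral evaluation of -u . grad(omega), with u from the streamfunction."""
        psi_hat = w_hat / k2                       # solve -laplace(psi) = omega
        u = np.real(np.fft.ifft2(1j * ky * psi_hat))    # u =  d(psi)/dy
        v = np.real(np.fft.ifft2(-1j * kx * psi_hat))   # v = -d(psi)/dx
        wx = np.real(np.fft.ifft2(1j * kx * w_hat))
        wy = np.real(np.fft.ifft2(1j * ky * w_hat))
        return -np.fft.fft2(u * wx + v * wy)

    rng = np.random.default_rng(6)
    w_hat = np.fft.fft2(rng.standard_normal((n, n)))
    w_hat *= np.exp(-k2 / 64.0)                    # low-pass to get a smooth initial field
    for _ in range(200):                           # explicit midpoint (RK2) time stepping
        mid = w_hat + 0.5 * dt * rhs(w_hat)
        w_hat = w_hat + dt * rhs(mid)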

Both of these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method are outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.

Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.

Abstract:

It has been well-established that interfaces in crystalline materials are key players in the mechanics of a variety of mesoscopic processes such as solidification, recrystallization, grain boundary migration, and severe plastic deformation. In particular, interfaces with complex morphologies have been observed to play a crucial role in many micromechanical phenomena such as grain boundary migration, stability, and twinning. Interfaces are a unique type of material defect in that they demonstrate a breadth of behavior and characteristics eluding simplified descriptions. Indeed, modeling the complex and diverse behavior of interfaces is still an active area of research, and to the author's knowledge there are as yet no predictive models for the energy and morphology of interfaces with arbitrary character. The aim of this thesis is to develop a novel model for interface energy and morphology that i) provides accurate results (especially regarding "energy cusp" locations) for interfaces with arbitrary character, ii) depends on a small set of material parameters, and iii) is fast enough to incorporate into large scale simulations.

In the first half of the work, a model for planar, immiscible grain boundaries is formulated. Building on the assumption that anisotropic grain boundary energetics are dominated by geometry and crystallography, a construction on lattice density functions (referred to as "covariance") is introduced that provides a geometric measure of the order of an interface. Covariance forms the basis for a fully general model of the energy of a planar interface, and comparison with a wide selection of molecular dynamics energy data for FCC and BCC tilt and twist boundaries demonstrates that the model accurately reproduces the energy landscape using only three material parameters. It is observed that the planar constraint on the model is, in some cases, over-restrictive; this motivates an extension of the model.
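
A toy version of such a geometric order measure: count near-coincident sites between two mutually rotated square lattices as a function of misorientation, in the spirit of coincidence-site-lattice constructions. Peaks (e.g., at 0° and near the Σ5 misorientation of about 36.87°) mark special boundaries where energy cusps are expected. The tolerance and lattice size are illustrative; this is not the thesis's covariance definition.

    import numpy as np

    def overlap(theta, size=20, tol=0.1):
        """Fraction of rotated-lattice sites lying within tol of the fixed integer lattice."""
        r = np.arange(-size, size + 1)
        A = np.array([(x, y) for x in r for y in r], dtype=float)
        c, s = np.cos(theta), np.sin(theta)
        B = A @ np.array([[c, -s], [s, c]]).T      # lattice B = lattice A rotated by theta
        return np.mean(np.all(np.abs(B - np.round(B)) < tol, axis=1))

    angles = np.radians(np.linspace(0.0, 45.0, 91))
    order = [overlap(t) for t in angles]           # peaks at 0 deg and ~36.87 deg (Sigma-5)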

In the second half of the work, the theory of faceting in interfaces is developed and applied to the planar interface model for grain boundaries. Building on previous work in mathematics and materials science, an algorithm is formulated that returns the minimal possible energy attainable by relaxation and the corresponding relaxed morphology for a given planar energy model. It is shown that the relaxation significantly improves the energy results of the planar covariance model for FCC and BCC tilt and twist boundaries. The ability of the model to accurately predict faceting patterns is demonstrated by comparison to molecular dynamics energy data and experimental morphological observation for asymmetric tilt grain boundaries. It is also demonstrated that by varying the temperature in the planar covariance model, it is possible to reproduce a priori the experimentally observed effects of temperature on facet formation.

Finally, with the range and scope of the covariance and relaxation models having been demonstrated by extensive MD and experimental comparison, future applications and implementations of the model are explored.

Abstract:

I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.

II. Anti-aspiryl antibodies were produced in rabbits using 3 protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.

Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.

III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.

IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide, and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs, and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injection or topical application of p-aminosalicylic acid or p-aminosalicylate.

Abstract:

A study has been made of the reaction mechanism of a model system for enzymatic hydroxylation. The results of a kinetic study of the hydroxylation of 2-hydroxyazobenzene derivatives by cupric ion and hydrogen peroxide are presented. An investigation of kinetic orders indicates that hydroxylation proceeds by way of a coordinated intermediate complex consisting of cupric ion and the monoanions of 2-hydroxyazobenzene and hydrogen peroxide. Studies with deuterated substrate showed the absence of a primary kinetic isotope effect and no evidence of an NIH shift. The effect of substituents on the formation of intermediate complexes and on the overall rate of hydroxylation was studied quantitatively in aqueous solution. The combined results indicate that the hydroxylation step is only slightly influenced by ring substitution. The substituent effect is interpreted in terms of reaction by a radical path or a concerted mechanism in which the formation of ionic intermediates is avoided. The reaction mechanism is discussed as a model for enzymatic hydroxylation.