850 results for Multi-Higgs Models
Abstract:
Discoveries at the LHC will soon set the physics agenda for future colliders. This report of a CERN Theory Institute includes the summaries of Working Groups that reviewed the physics goals and prospects of LHC running with 10 to 300 fb⁻¹ of integrated luminosity, of the proposed sLHC luminosity upgrade, of the ILC, of CLIC, of the LHeC and of a muon collider. The four Working Groups considered possible scenarios for the first 10 fb⁻¹ of data at the LHC in which (i) a state with properties compatible with a Higgs boson is discovered, (ii) no such state is discovered, either because the Higgs properties make it difficult to detect or because no Higgs boson exists, (iii) a missing-energy signal beyond the Standard Model is discovered, as in some supersymmetric models, and (iv) some other exotic signature of new physics is discovered. In the context of these scenarios, the Working Groups reviewed the capabilities of the future colliders to study in more detail whatever new physics may be discovered by the LHC. Their reports provide the particle physics community with some tools for reviewing the scientific priorities for future colliders after the LHC produces its first harvest of new physics from multi-TeV collisions.
Abstract:
The hybrid approach introduced by the authors in earlier works for at-site modeling of annual and periodic streamflows is extended to simulate multi-site, multi-season streamflows, which is of significance in integrated river basin planning studies. The hybrid model involves: (i) partial pre-whitening of the standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using the moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening of the bootstrapped innovation series at each site, by adding back the corresponding parametric model component for that site, to obtain generated streamflows at each of the sites. The model gains significantly by effectively combining the merits of both parametric and NP models. It reproduces various statistics, including the dependence relationships at both spatial and temporal levels, without using any normalizing transformations or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics, including run characteristics, is demonstrated through an application to multi-site streamflow generation in the Upper Cauvery river basin, Southern India.
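To make the three-step recipe concrete, here is a minimal Python sketch of the hybrid generation loop. It assumes a single lag-one coefficient per site in place of the paper's parsimonious linear periodic model, and the block size, coefficient and data are illustrative placeholders rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def moving_block_bootstrap(residuals, block_size, rng):
    """Step (ii): resample whole rows of the residual matrix
    (n_periods x n_sites) in contemporaneous blocks, so that spatial and
    short-range temporal dependence is retained."""
    n = residuals.shape[0]
    n_blocks = int(np.ceil(n / block_size))
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    return np.concatenate([residuals[s:s + block_size] for s in starts])[:n]

def generate_streamflows(z, phi, block_size=4):
    """Hybrid generation for standardized flows z (n_periods x n_sites):
    (i) partial pre-whitening with a lag-1 model (coefficient phi),
    (ii) block bootstrap of the residuals,
    (iii) post-blackening by re-applying the parametric component."""
    resid = z[1:] - phi * z[:-1]                                 # step (i)
    resid_star = moving_block_bootstrap(resid, block_size, rng)  # step (ii)
    gen = np.empty_like(z)
    gen[0] = z[0]
    for t in range(1, z.shape[0]):                               # step (iii)
        gen[t] = phi * gen[t - 1] + resid_star[t - 1]
    return gen

# Illustrative use on synthetic standardized flows at 3 sites
z = rng.standard_normal((48, 3))
flows = generate_streamflows(z, phi=0.3)
```

Resampling entire rows of the residual matrix is what preserves the cross-site (contemporaneous) dependence that step (ii) is designed to retain.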
Abstract:
Cosmopolitan ideals have been on the philosophical agenda for several millennia, but the end of the Cold War started a new discussion on state sovereignty, global democracy, the role of international law and global institutions. The Westphalian state system, in practice since the 17th century, is transforming, and the democratic deficit needs new solutions. One impetus has been the fact that in the present world no international body represents global citizens. This Master's thesis examines the possibility of establishing a world parliament. In a case analysis, 17 models of a world parliament, drawn from two journals, a volume of essays and two other publications, are discussed. Based on general observations, the models are divided into four thematic groups and analyzed with an emphasis on feasible and probable elements. Further, a new scenario with a time frame of thirty years is proposed, based on the methodology of normative futures studies, with special attention to causal relationships and the actions leading to change. The scenario presents three gradual steps that must each be realized before a sustainable world parliament is established. The theoretical framework is based on social constructivism, and changes in international and multi-level governance are examined through the concepts of globalization, democracy and sovereignty. A feasible, desirable and credible world parliament is constituted gradually by implementing electoral, democratic and legal measures, with members initially drawn exclusively from democratic states, parliamentarians, non-governmental organizations and other groups. The parliament should be located outside the United Nations context, since a new body avoids the problem of inefficiency currently prevailing in the UN. The main objectives of the world parliament are to safeguard peace and international law and to offer legal advice in cases where international law has been violated. A feasible world parliament is advisory at the beginning but is granted legislative powers in the future. The number of members of the world parliament could also be extended, following the example of the EU enlargement process.
Abstract:
Existing models for dmax predict that, in the limit μd → ∞, dmax increases with the 3/4 power of μd and, further, that at low values of interfacial tension dmax becomes independent of σ even at moderate values of μd. Experiments contradict both predictions: the dependence of dmax on μd is much weaker, and even at very low values of σ, dmax does not become independent of it. A model is proposed to explain these results. The model assumes that a drop circulates in a stirred vessel along with the bulk fluid and repeatedly passes through a deformation zone followed by a relaxation zone. In the deformation zone, the turbulent inertial stress tends to deform the drop, while the viscous stress generated in the drop and the interfacial stress resist deformation. The relaxation zone is characterized by the absence of turbulent stress, so the drop tends to relax back to the undeformed state. It is shown that a circulating drop, starting with some initial deformation, either reaches a steady state or breaks within one or several cycles. dmax is defined as the maximum size of a drop which, starting undeformed in the first cycle, passes through the deformation zone an infinite number of times without breaking. The model predictions reduce to those of Lagisetty et al. (1986) for moderate values of μd and σ. The model successfully predicts the weakened dependence of dmax on μd at high values of μd, as well as the dependence of dmax on σ at low values of σ. The dmax data available in the literature are predicted more accurately by this model than by existing models and correlations.
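Because dmax is defined as a survival limit over infinitely many cycles, it can be located numerically by testing whether a drop of a given size ever breaks. The sketch below cycles a drop through schematic deformation and relaxation zones and bisects for the largest surviving size; the stress expressions and constants are generic placeholders invented for illustration, not the stresses of the proposed model.

```python
import math

def survives(d, n_cycles=500, tol=1e-9):
    """Cycle one drop of size d through deformation and relaxation zones.
    Placeholder stresses: turbulent forcing ~ d**(2/3), interfacial
    resistance ~ 1/d, viscous resistance proportional to deformation."""
    TURB, SIGMA, MU = 5.0, 0.01, 0.5   # illustrative constants
    alpha = 0.0                        # dimensionless deformation (undeformed)
    for _ in range(n_cycles):
        drive = TURB * d ** (2.0 / 3.0)               # turbulent inertial stress
        resist = SIGMA / d + MU * alpha               # interfacial + viscous
        alpha_def = alpha + max(drive - resist, 0.0)  # deformation zone
        if alpha_def >= 1.0:                          # illustrative breakage point
            return False
        alpha_new = 0.5 * alpha_def                   # relaxation zone recovery
        if abs(alpha_new - alpha) < tol:
            return True                               # steady state reached
        alpha = alpha_new
    return True

def d_max(lo=1e-6, hi=1e-1, iters=60):
    """Largest size that survives indefinitely, found by bisection
    (geometric midpoint, since sizes span several decades)."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if survives(mid) else (lo, mid)
    return lo
```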
Abstract:
Simulation is an important means of evaluating new microarchitectures. With the advent of multi-core (CMP) platforms, simulators are becoming larger and more complex. At the same time, the availability of CMPs with larger caches and higher operating frequencies has made the wall-clock time required to simulate an application comparatively shorter. Reducing this simulation time further is a great challenge, especially for multi-threaded workloads, owing to the indeterminacy introduced by the simultaneously executing threads. In this paper, we propose a technique for speeding up multi-core simulation. The models of the processor core and cache are replaced with functional models to achieve speedup. A timed Petri net model is used to estimate the execution time of the processor, and memory access latencies are estimated using the hit/miss information obtained from the functional model of the cache. This model can be used to predict the performance of data-parallel applications or multiprogramming workloads on CMP platforms with various cache hierarchies and a shared bus interconnect. The error in the estimated execution time of an application is within 6%, and the speedup achieved over a cycle-accurate simulator averages between 2x and 4x.
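As an illustration of the function/timing decoupling the paper exploits, the sketch below pairs a purely functional direct-mapped cache with a separate accumulation of latencies over its hit/miss stream. The latency constants are assumptions, and the flat accumulation stands in for the timed Petri net timing model of the paper.

```python
BASE_CPI = 1       # assumed cycles per non-memory instruction
HIT_CYCLES = 2     # assumed cache-hit latency
MISS_CYCLES = 120  # assumed miss penalty, incl. shared-bus transfer

class FunctionalCache:
    """Minimal direct-mapped functional cache: it records only hit/miss
    outcomes and keeps no timing state; timing is recovered afterwards."""
    def __init__(self, n_sets=1024, line_bytes=64):
        self.n_sets, self.line = n_sets, line_bytes
        self.tags = [None] * n_sets

    def access(self, addr):
        block = addr // self.line
        idx, tag = block % self.n_sets, block // self.n_sets
        hit = self.tags[idx] == tag
        self.tags[idx] = tag
        return hit

def estimate_cycles(n_compute, addresses, cache):
    """Combine compute work with memory latencies derived from the
    functional cache's hit/miss information."""
    cycles = n_compute * BASE_CPI
    for a in addresses:
        cycles += HIT_CYCLES if cache.access(a) else MISS_CYCLES
    return cycles

# Illustrative use on a synthetic access trace (second pass hits in cache)
trace = [64 * i for i in range(1000)] * 2
print(estimate_cycles(10_000, trace, FunctionalCache()))
```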
Abstract:
We study the possible effects of CP violation in the Higgs sector on tt̄ production at a γγ collider. These studies are performed in a model-independent way in terms of six form factors {Re(Sγ), Im(Sγ), Re(Pγ), Im(Pγ), St, Pt} which parametrize the CP mixing in the Higgs sector, and a strategy for their determination is developed. We observe that the angular distribution of the decay lepton from the t/t̄ produced in this process is independent of any CP violation in the tbW vertex and hence best suited for studying CP mixing in the Higgs sector. Analytical expressions are obtained for the angular distribution of leptons in the c.m. frame of the two colliding photons for a general polarization state of the incoming photons. We construct combined asymmetries in the initial-state lepton (photon) polarization and the final-state lepton charge. They involve CP-even (x's) and CP-odd (y's) combinations of the mixing parameters. We study the limits up to which the values of x and y, with only two of them allowed to vary at a time, can be probed by measurements of these asymmetries using circularly polarized photons. We use the numerical values of the asymmetries predicted by various models to discriminate among them. We show that this method can be sensitive to the loop-induced CP violation in the Higgs sector of the minimal supersymmetric standard model.
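For orientation, a combined asymmetry of the kind described, formed from the initial-state photon helicity λ and the final-state lepton charge, has the generic template below; this is an illustrative form only, and the paper's specific combinations are not reproduced here.

```latex
% Generic polarization-charge asymmetry (illustrative template):
A = \frac{\sigma(\lambda = +,\, \ell^{+}) - \sigma(\lambda = -,\, \ell^{-})}
         {\sigma(\lambda = +,\, \ell^{+}) + \sigma(\lambda = -,\, \ell^{-})}
```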
Abstract:
An integrated reservoir operation model is presented for developing effective operational policies for irrigation water management. In arid and semi-arid climates, owing to dynamic changes in hydroclimatic conditions within a season, a fixed cropping pattern with conventional operating policies can have a considerable impact on the performance of the irrigation system and may affect the economics of the farming community. For optimal allocation of irrigation water within a season, effective mathematical models can guide water managers in proper decision making and consequently help reduce the adverse effects of water shortage and crop failure. This paper presents a multi-objective integrated reservoir operation model for a multi-crop irrigation system. To solve the multi-objective model, a recent swarm intelligence technique, elitist-mutated multi-objective particle swarm optimisation (EM-MOPSO), has been applied to a case study in India. The method evolves effective strategies for irrigation crop planning and reservoir operation policies, and thereby helps the farming community improve crop benefits and water resource usage in the reservoir command area.
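For readers unfamiliar with the optimiser, here is a bare-bones Python sketch of an elitist-mutated MOPSO on a standard bi-objective test function. Leader selection, archive handling and the mutation step are simplified stand-ins for the actual EM-MOPSO operators, and the test function is illustrative rather than the irrigation planning model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(f, g):
    """Pareto dominance for minimisation: f dominates g."""
    return np.all(f <= g) and np.any(f < g)

def em_mopso(objectives, n_dim, lo, hi, n_particles=40, n_iter=200, p_mut=0.2):
    """Sketch: a standard MOPSO loop with an external elite archive whose
    members are occasionally mutated to maintain diversity (the 'elitist
    mutation' idea), under simplified operator choices."""
    x = rng.uniform(lo, hi, (n_particles, n_dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objectives(p) for p in x])
    archive = [(p.copy(), f) for p, f in zip(x, pbest_f)]

    def try_add(p, f):
        nonlocal archive
        if not any(dominates(g, f) for _, g in archive):
            archive = [(q, g) for q, g in archive if not dominates(f, g)]
            archive.append((p.copy(), f))

    for _ in range(n_iter):
        leader = archive[rng.integers(len(archive))][0]   # random elite leader
        r1, r2 = rng.random((2, n_particles, n_dim))
        v = 0.5 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (leader - x)
        x = np.clip(x + v, lo, hi)
        for i in range(n_particles):
            f = objectives(x[i])
            if dominates(f, pbest_f[i]):
                pbest[i], pbest_f[i] = x[i].copy(), f
            try_add(x[i], f)
        if rng.random() < p_mut:                          # elitist mutation
            p, _ = archive[rng.integers(len(archive))]
            q = np.clip(p + rng.normal(0.0, 0.1 * (hi - lo), n_dim), lo, hi)
            try_add(q, objectives(q))
    return archive

# Illustrative bi-objective test function (Schaffer), not the reservoir model
front = em_mopso(lambda z: np.array([z[0] ** 2, (z[0] - 2.0) ** 2]),
                 n_dim=1, lo=-4.0, hi=4.0)
```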
Abstract:
Estimates of creep and shrinkage are critical for computing the loss of prestress with time, and hence for assessing the leak tightness and safety margins available in the containment structures of nuclear power plants. Short-term creep and shrinkage experiments have been conducted, using in-house test facilities developed specifically for the present research program, on 35 and 45 MPa normal concrete and 25 MPa heavy density concrete. In the extensive creep program, cylinders are subjected to sustained levels of load, typically for several days (until a negligible strain increase with time is observed in the creep specimen), to provide total creep strain versus time curves for the two normal density concrete grades and the heavy density concrete grade at different load levels, different ages at loading, and different relative humidities. Shrinkage is studied on prism specimens of the same mix grades. In the first instance, creep and shrinkage prediction models reported in the literature have been used to predict the subsequent experimental data with acceptable accuracy. One part of the study comprises macro-scale short-term experiments and analytical model development to estimate time-dependent deformation under sustained long-term loads, accounting for the composite rheology through parameters such as the characteristic strength, age of concrete at loading, relative humidity, temperature, mix proportion (cement : fine aggregate : coarse aggregate : water) and volume-to-surface ratio, together with the associated uncertainties in these variables. At the same time, it is widely believed that strength, early-age rheology, creep and shrinkage are affected by material properties at the nano-scale that are not well established. In order to understand and improve cement and concrete properties, the nanostructure of the composite and its relation to local mechanical properties are being investigated. While the creep and shrinkage results obtained at the macro-scale and their predictions through rheological modeling are satisfactory, the nano- and micro-indentation experimental and analytical studies are presently underway. Computational mechanics based models for creep and shrinkage in concrete must account for the numerous parameters that affect short- and long-term response. A Kelvin-type model with several elements, representing the influence of the various factors that affect the behaviour, is under development. The immediate short-term (elastic) deformation, the effects of relative humidity and temperature, the volume-to-surface ratio, the water-cement and aggregate-cement ratios, the load level and the age of concrete at loading are the parameters accounted for in this model. Inputs to this model, such as the pore structure and the mechanical properties at the micro/nano scale, have been taken from scanning electron microscopy and micro/nano-indentation of sample specimens.
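The Kelvin-type model is not specified in detail in the abstract, but the standard creep compliance of a Kelvin chain, on which such models are built, has the textbook form below; the number of elements and their parameters (and how each is tied to humidity, temperature, and the other listed factors) are left open here.

```latex
% Creep compliance of a Kelvin chain: instantaneous modulus E_0,
% element stiffnesses E_i, dashpot viscosities eta_i, loading age t'.
J(t, t') = \frac{1}{E_0}
         + \sum_{i=1}^{n} \frac{1}{E_i}\left(1 - e^{-(t - t')/\tau_i}\right),
\qquad \tau_i = \frac{\eta_i}{E_i}
```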
Abstract:
Owing to developments in semiconductor technology, fault tolerance has become important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. For general-purpose systems, however, rather than guaranteeing that deadlines are always met, it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for the AET that include the bus communication overhead, for both voting (active replication) and rollback-recovery with checkpointing (RRC). Further, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) for each job and defining its usage. Experiments demonstrate significant savings in AET.
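To illustrate the RRC side of the optimization, the sketch below computes the expected AET of a single job with equidistant checkpoints under a simple Poisson soft-error assumption, and picks the best checkpoint count by exhaustive search; the AET expression, the constants, and the search (in place of the paper's ILP formulation) are all illustrative assumptions.

```python
import math

def aet_rrc(T, n_c, tau, bus, lam):
    """Expected execution time of a job of fault-free length T under RRC
    with n_c equidistant checkpoints. tau: checkpointing overhead; bus:
    bus-communication overhead per checkpoint; lam: soft-error rate.
    Each segment is retried until it completes error-free, so the number
    of attempts per segment is geometric with success probability p_ok."""
    seg = T / n_c + tau + bus          # segment length incl. overheads
    p_ok = math.exp(-lam * seg)        # assumed Poisson error arrivals
    return n_c * seg / p_ok            # expected time over all segments

def best_checkpoints(T, tau, bus, lam, n_max=100):
    """Exhaustive-search stand-in for the ILP model of objective (1)."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, n, tau, bus, lam))

print(best_checkpoints(T=1000.0, tau=5.0, bus=2.0, lam=1e-3))
```

The trade-off is visible in the formula: more checkpoints shrink the segment re-executed after an error but add checkpointing and bus overhead, so the AET is minimized at an intermediate checkpoint count.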
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features whose spatial extent is smaller than the image pixels. A commonly used approach to sub-pixel classification is the linear mixture model (LMM). Although LMMs have shown acceptable results, in practice purely linear mixtures do not exist; a non-linear mixture model may therefore better describe the resultant mixture spectra for a given endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process. First, the endmembers are derived by an automated algorithm. Second, these endmembers are used by the LMM, which provides abundance estimates in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture, which further refines the abundance estimates to account for the non-linear nature of the mixing of the classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to the actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
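The last two steps map naturally onto standard tools. Below is a minimal sketch that uses non-negative least squares for the linear unmixing and an MLP regressor for the non-linear refinement; the synthetic data, network size and sum-to-one renormalisation are assumptions, and the automated endmember extraction of step one is replaced here by known endmembers.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.neural_network import MLPRegressor

def lmm_abundances(pixels, endmembers):
    """Step 2: per-pixel non-negative least squares unmixing, renormalised
    to sum to one (a simplification of fully constrained unmixing).
    pixels: (n, bands); endmembers: (k, bands)."""
    A = endmembers.T                                  # (bands, k)
    est = np.array([nnls(A, p)[0] for p in pixels])
    return est / est.sum(axis=1, keepdims=True)

def refine_nonlinear(lmm_est, true_fractions):
    """Step 3: train an MLP to map linear abundance estimates to the actual
    proportions, absorbing the non-linear mixing effects."""
    mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    mlp.fit(lmm_est, true_fractions)
    return mlp

# Illustrative synthetic data: 3 endmembers over 200 bands (the band count
# matches the paper's simulation; the mixing and noise here are invented)
rng = np.random.default_rng(1)
E = rng.random((3, 200))
frac = rng.dirichlet(np.ones(3), size=500)
pix = frac @ E + 0.01 * rng.standard_normal((500, 200))
a_lin = lmm_abundances(pix, E)
model = refine_nonlinear(a_lin, frac)
```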
Abstract:
The memory subsystem is a major contributor to the performance, power, and area of the complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of an embedded DSP is complex and usually custom designed, with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components supplied as individually optimized software IPs, is a big challenge. To obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully across the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA), with performance and power as the objectives, and presents a set of solution points that is attractive from a platform design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
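A minimal sketch of the problem encoding follows: a chromosome assigns each data buffer to a memory bank, and every layout is scored on the two objectives. The per-bank cost tables are invented placeholders, and the GA operators (selection, crossover, mutation) are omitted, leaving only the Pareto filtering that yields the trade-off set of solution points.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative per-bank costs (invented): access latency in cycles and
# relative energy per access for two on-chip banks and one off-chip bank.
LATENCY = np.array([1, 1, 10])
ENERGY = np.array([1.0, 1.2, 6.0])

def fitness(layout, accesses):
    """layout: bank index per data buffer; accesses: access count per buffer.
    Returns the two objectives to minimise: total cycles and total energy."""
    return (float(np.sum(accesses * LATENCY[layout])),
            float(np.sum(accesses * ENERGY[layout])))

def pareto_front(pop, accesses):
    """Keep the non-dominated layouts: the performance/power trade-off set."""
    fs = [fitness(ind, accesses) for ind in pop]
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [ind for ind, f in zip(pop, fs) if not any(dom(g, f) for g in fs)]

# Random initial population of layouts for 8 buffers over 3 banks
accesses = rng.integers(100, 1000, size=8)
pop = [rng.integers(0, 3, size=8) for _ in range(30)]
front = pareto_front(pop, accesses)
```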
Abstract:
Signaling mechanisms involving protein tyrosine phosphatases govern several cellular and developmental processes. These enzymes are regulated by several mechanisms, including variation in the catalytic turnover rate in response to redox stimuli, subcellular localization and protein-protein interactions. In the case of Receptor Protein Tyrosine Phosphatases (RPTPs) containing two PTP domains, phosphatase activity is localized in the membrane-proximal (D1) domain, while the membrane-distal (D2) domain is believed to play a modulatory role. Here we report our analysis of the influence of the D2 domain on the catalytic activity and substrate specificity of the D1 domain, using two Drosophila melanogaster RPTPs as a model system. Biochemical studies reveal contrasting roles for the D2 domains of Drosophila Leukocyte antigen Related (DLAR) and Protein Tyrosine Phosphatase on Drosophila chromosome band 99A (PTP99A): while D2 lowers the catalytic activity of the D1 domain in DLAR, the D2 domain of PTP99A increases the catalytic activity of its D1 domain. Substrate specificity, on the other hand, is cumulative, whereby the individual specificities of the D1 and D2 domains contribute to the substrate specificity of these two-domain enzymes. Molecular dynamics simulations on structural models of DLAR and PTP99A reveal a conformational rationale for the experimental observations. These studies show that concerted structural changes mediate inter-domain communication, resulting in either inhibitory or activating effects of the membrane-distal PTP domain on the catalytic activity of the membrane-proximal PTP domain.
Abstract:
Long-running multi-physics coupled parallel applications have gained prominence in recent years. The high computational requirements and long durations of their simulations necessitate the use of multiple systems of a Grid for execution. In this paper, we present an adaptive middleware framework for the execution of long-running multi-physics coupled applications across multiple batch systems of a Grid. Our framework, apart from coordinating the execution of the component jobs of an application on different batch systems, also automatically resubmits the jobs to the batch queues multiple times to continue and sustain long-running executions. As the set of active batch systems available for execution changes, the framework migrates and reschedules components using a robust rescheduling decision algorithm. We have used the framework to improve the application throughput of a prominent long-running multi-component climate modeling application, the Community Climate System Model (CCSM). Our real multi-site experiments with CCSM indicate that Grid executions can lead to improved application throughput for climate models.
Abstract:
Experimental conditions or the presence of interacting components can lead to variations in the structural models of macromolecules. However, the role of these factors in conformational selection is often omitted by in silico methods that extract dynamic information from protein structural models. Structures of small peptides, considered building blocks for larger macromolecular structural models, can differ substantially in the context of a larger protein. This limitation is more evident when modeling large multi-subunit macromolecular complexes from the structures of the individual protein components. Here we report an analysis of variations in structural models of proteins with high sequence similarity. These models were analyzed for sequence features of the protein, the role of scaffolding segments (including interacting proteins or affinity tags), and the chemical components of the experimental conditions. Conformational features in these structural models could be rationalized by conformational selection events, perhaps induced by the experimental conditions. The analysis was performed on a non-redundant dataset of protein structures from different SCOP classes. The sequence-conformation correlations noted here suggest additional features that could be incorporated by in silico methods to extract dynamic information from protein structural models.