974 results for Non-Standard Model Higgs bosons
Abstract:
The thesis begins with a review of basic elements of the general theory of relativity (GTR), which forms the basis for the theoretical interpretation of observations in cosmology. The first chapter also discusses the standard model of cosmology, namely the Friedmann model, its predictions and its problems. We also give a brief discussion of fractals and inflation in the early universe in the first chapter. In the second chapter we discuss the formulation of a new approach to cosmology, namely a stochastic approach. In this model, the dynamics of the early universe is described by a set of non-deterministic, Langevin-type equations, and we derive the solutions using the Fokker-Planck formalism. Here we demonstrate how the problems with the standard model can be eliminated by introducing the idea of stochastic fluctuations in the early universe. Many recent observations indicate that the present universe may be approximated by a many-component fluid, and we assume that only the total energy density is conserved. This, in turn, leads to energy transfer between different components of the cosmic fluid, and fluctuations in such energy transfer can certainly induce fluctuations in the mean w factor in the equation of state p = wρ, resulting in a fluctuating expansion rate for the universe. The third chapter discusses the stochastic evolution of the cosmological parameters in the early universe using the new approach. The penultimate chapter is about the refinements to be made to the present model by means of a new deterministic model. The concluding chapter presents a discussion of other problems with conventional cosmology, such as the fractal correlation of the galactic distribution. The author attempts an explanation of this problem using the stochastic approach.
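As a rough illustration of the stochastic approach described above, the following minimal Python sketch evolves a Friedmann-type universe in which the equation-of-state parameter w in p = wρ receives a Langevin-type fluctuation; the simplified equations, parameter values and noise strength are illustrative assumptions, not the formalism actually derived in the thesis.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of a Friedmann-type evolution in which the
# equation-of-state parameter w in p = w*rho fluctuates stochastically,
# producing a fluctuating expansion rate.  All symbols and values here are
# illustrative stand-ins, not the thesis's actual equations or parameters.
rng = np.random.default_rng(0)

w0, sigma_w = -0.9, 0.05      # mean w and fluctuation amplitude (hypothetical)
rho, a = 1.0, 1.0             # initial energy density and scale factor
dt, n_steps = 1e-3, 10_000

for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt))              # Wiener increment
    H = np.sqrt(rho)                                # Friedmann equation with 8*pi*G/3 = 1
    # Continuity equation with a Langevin-type noise term from fluctuations in w:
    rho += -3.0 * H * (1.0 + w0) * rho * dt - 3.0 * H * sigma_w * rho * dW
    a += H * a * dt                                 # da/dt = H * a

print(f"a = {a:.4f}, rho = {rho:.4e}")
```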
Abstract:
Presentation at the 1997 Dagstuhl Seminar "Evaluation of Multimedia Information Retrieval", Norbert Fuhr, Keith van Rijsbergen, Alan F. Smeaton (eds.), Dagstuhl Seminar Report 175, 14.04. - 18.04.97 (9716). - Abstract: This presentation will introduce ESCHER, a database editor which supports visualization in non-standard applications in engineering, science, tourism and the entertainment industry. It was originally based on the extended nested relational data model and is currently being extended to include object-relational properties such as inheritance, object types, integrity constraints and methods. It serves as a research platform for areas such as multimedia and visual information systems, QBE-like queries, computer-supported cooperative work (CSCW) and novel storage techniques. In its role as a visual information system, a database editor must support browsing and navigation. ESCHER provides this access to data by means of so-called fingers, which generalize the cursor paradigm of graphical and text editors. On the graphical display, a finger is reflected by a colored area which corresponds to the object the finger is currently pointing at. In a table, more than one finger may point to objects; one of them is the active finger, which is used for navigating through the table. The talk will mostly concentrate on giving examples of this type of navigation and will discuss some of the architectural requirements for fast object traversal and display. ESCHER is available as public domain software from our ftp site in Kassel. The portable C source can be easily compiled for any machine running UNIX and OSF/Motif, in particular our working environments, IBM RS/6000 and Intel-based LINUX systems. A port to Tcl/Tk is under way.
Abstract:
This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models, or that contain component sub-parts with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy, to allow for efficient and correct indexing into an automatically generated model library as well as for relative parameterization among sub-parts, and a scale hierarchy, to allow for a general-to-specific recognition procedure. After an analysis of the issues and inherent tradeoffs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints on feature properties. Examples of the system's performance are given, followed by an analysis of the results. In conclusion, the system's benefits and limitations are presented.
Abstract:
The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
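To illustrate the kind of feature-scale pressure calculation mentioned above, the following sketch treats the pad as a simple linear spring whose deflection over the step height redistributes the nominal pressure between raised and recessed pattern areas; the function, parameter values and spring model are hypothetical simplifications, not the paper's actual dishing model.

```python
def feature_scale_pressures(p_applied, step_height, up_area_fraction, k_pad):
    """Split the nominal down-force between raised (up) and recessed (down)
    pattern areas, treating the pad as a linear spring of stiffness k_pad
    (pressure per unit deflection).  Illustrative pressure-partition model only;
    not necessarily the formulation used in the paper."""
    dp = k_pad * step_height          # pressure difference needed to conform to the step
    p_down = p_applied - up_area_fraction * dp
    if p_down <= 0.0:
        # Step too tall: the pad only touches the up areas.
        return p_applied / up_area_fraction, 0.0
    p_up = p_applied + (1.0 - up_area_fraction) * dp
    return p_up, p_down

# Hypothetical numbers: 14 kPa nominal pressure, 300 nm step, 50 % pattern density,
# pad stiffness of 20 kPa per micrometre of deflection.
p_up, p_down = feature_scale_pressures(14e3, 300e-9, 0.5, 20e3 / 1e-6)
print(f"p_up = {p_up/1e3:.1f} kPa, p_down = {p_down/1e3:.1f} kPa")
```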
Abstract:
Using a unique neighborhood crime dataset for Bogotá in 2011, this study takes a spatial econometric approach to examine the role of socioeconomic and agglomeration variables in explaining the variance of crime. It uses two different types of crime: violent crime, represented by homicides, and property crime, represented by residential burglaries. These two types of crime are measured by non-standard crime statistics constructed as the area incidence of each crime in the neighborhood. The existence of crime hotspots in Bogotá has been shown in most of the literature, and using these non-standard crime statistics at the neighborhood level some hotspots arise again, validating the use of a spatial approach for these new crime statistics. The final specification includes socioeconomic, agglomeration, land-use and visual-aspect variables, which are included in a SARAR model and estimated by the procedure devised by Kelejian and Prucha (2009). The resulting coefficients and marginal effects show the relevance of these crime hotspots, which is consistent with most previous studies. However, socioeconomic variables are significant and show the importance of age and education. Agglomeration variables are significant, so more densely populated areas are correlated with more crime. Interestingly, the two types of crime do not have the same significant covariates. Education and young male population have different signs for homicides and residential burglaries. Inequality matters for homicides, while higher real estate valuation matters for residential burglaries. Finally, density positively impacts both crimes.
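For orientation, a SARAR (spatial-lag plus spatial-error) specification of this kind can be estimated with a Kelejian-Prucha-style GMM estimator; the sketch below uses PySAL's spreg for illustration. The shapefile name, column names and weights construction are hypothetical placeholders, and GM_Combo_Het is used as a stand-in rather than the exact estimator employed in the study.

```python
import geopandas as gpd
from libpysal.weights import Queen
from spreg import GM_Combo_Het

# Hypothetical neighborhood-level data; the file and column names are placeholders.
gdf = gpd.read_file("bogota_neighborhoods.shp")

w = Queen.from_dataframe(gdf)   # contiguity-based spatial weights
w.transform = "r"               # row-standardise

y = gdf[["homicide_rate"]].values                           # area-incidence crime statistic
X = gdf[["young_male_share", "education", "density"]].values

# SARAR model estimated by GMM (spatial lag of y plus spatially autocorrelated errors).
model = GM_Combo_Het(y, X, w=w,
                     name_y="homicide_rate",
                     name_x=["young_male_share", "education", "density"])
print(model.summary)
```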
Abstract:
This paper reports the findings of a small-scale research project, which investigated the levels of awareness and knowledge of written standard English of 10- and 11-year-old children in two English primary schools over a six-year period, coinciding with the implementation in the schools of the National Literacy Strategy (NLS). A questionnaire was used to provide quantitative and qualitative data relating to: features of writing which were recognised as standard or non-standard; children's understanding of technical terminology; variations between boys' and girls' performance; and the impact of the NLS over time. The findings reveal variations in levels of recognition of different non-standard features, differences between girls' and boys' recognition, possible examples of language change, but no evidence of a positive impact of the NLS. The implications of these findings are discussed both in terms of changes in educational standards and changes to standard English.
Abstract:
This paper reports the findings of a small-scale research project which investigated the levels of awareness and knowledge of written standard English of 10- and 11-year-old children in two English primary schools. The project involved repeating in 2010 a written questionnaire previously used with children in the same schools in three separate surveys in 1999, 2002 and 2005. Data from the latest survey are compared to those from the previous three. The analysis seeks to identify any changes over time in children's ability to recognise non-standard forms and supply standard English alternatives, as well as their ability to use technical terms related to language variation. Differences between the performance of boys and girls and that of the two schools are also analysed. The paper concludes that the socio-economic context of the schools may be a more important factor than gender in the variations over time identified in the data.
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real time compared to a pseudo out-of-sample forecasting exercise.
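For reference, a two-regime SETAR model of this kind can be estimated by grid-searching the threshold and fitting each regime by least squares; the following self-contained sketch illustrates the idea on simulated data and is not the paper's exact specification.

```python
import numpy as np

def fit_setar(y, delay=1, trim=0.15):
    """Fit a two-regime SETAR(1) model to a growth-rate series y by grid-searching
    the threshold that minimises the pooled sum of squared residuals, with OLS in
    each regime.  Purely illustrative, not the specification used in the paper."""
    y = np.asarray(y, dtype=float)
    y_t, y_lag = y[delay:], y[:-delay]
    # Candidate thresholds: interior order statistics of the threshold variable.
    candidates = np.sort(y_lag)[int(trim * len(y_lag)): int((1 - trim) * len(y_lag))]
    best = None
    for r in candidates:
        ssr = 0.0
        for regime in (y_lag <= r, y_lag > r):
            X = np.column_stack([np.ones(regime.sum()), y_lag[regime]])
            beta, *_ = np.linalg.lstsq(X, y_t[regime], rcond=None)
            ssr += np.sum((y_t[regime] - X @ beta) ** 2)
        if best is None or ssr < best[0]:
            best = (ssr, r)
    return best  # (ssr, estimated threshold)

growth = np.random.default_rng(1).normal(0.5, 0.6, size=200)  # stand-in for output growth
print("estimated threshold:", fit_setar(growth)[1])
```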
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
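The following sketch illustrates the benchmark-driven idea: per-task compute and halo-exchange timings measured at a few problem sizes are interpolated to predict the cost of an arbitrary domain decomposition. The benchmark numbers and the simple additive cost model are made-up placeholders, not measurements from the Cray XE6.

```python
import numpy as np

# Benchmark measurements for the two work types (placeholders, not real data).
bench_cells   = np.array([64**2, 128**2, 256**2, 512**2])     # local grid cells per task
bench_compute = np.array([0.8e-3, 3.1e-3, 12.5e-3, 50.4e-3])  # s per timestep

bench_halo_bytes = np.array([1e3, 1e4, 1e5, 1e6])             # bytes exchanged per step
bench_halo_time  = np.array([6e-6, 1.4e-5, 9.0e-5, 8.2e-4])   # s per exchange

def predict_step_time(global_nx, global_ny, px, py, word_bytes=8, halo_width=1):
    """Predict one timestep for a px-by-py domain decomposition by interpolating
    the per-task compute and halo-exchange benchmarks (illustrative model only)."""
    local_nx, local_ny = global_nx / px, global_ny / py
    compute = np.interp(local_nx * local_ny, bench_cells, bench_compute)
    halo_bytes = 2 * halo_width * (local_nx + local_ny) * word_bytes
    halo = 4 * np.interp(halo_bytes, bench_halo_bytes, bench_halo_time)  # 4 neighbours
    return compute + halo

for decomp in [(4, 8), (8, 4), (16, 2)]:
    print(decomp, f"{predict_step_time(1024, 1024, *decomp) * 1e3:.2f} ms/step")
```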
Abstract:
We analyze the potential of the CERN Large Hadron Collider running at 7 TeV to search for deviations from the Standard Model predictions for the triple gauge boson coupling ZW+W-, assuming an integrated luminosity of 1 fb^-1. We show that the study of W+W- and W±Z production, followed by the leptonic decays of the weak gauge bosons, can improve the present sensitivity on the anomalous couplings Δg1^Z, Δκ^Z, λ^Z, g4^Z, and λ̄^Z at the 2σ level.
Abstract:
Topological interactions will be generated in theories with compact extra dimensions where fermionic chiral zero modes have different localizations. This is the case in many warped extra dimension models where the right-handed top quark is typically localized away from the left-handed one. Using deconstruction techniques, we study the topological interactions in these models. These interactions appear as trilinear and quadrilinear gauge boson couplings in low energy effective theories with three or more sites, as well as in the continuum limit. We derive the form of these interactions for various cases, including examples of Abelian, non-Abelian and product gauge groups of phenomenological interest. The topological interactions provide a window into the more fundamental aspects of these theories and could result in unique signatures at the Large Hadron Collider, some of which we explore.
Abstract:
We show that the S parameter is not finite in theories of electroweak symmetry breaking in a slice of anti-de Sitter five-dimensional space, with the light fermions localized in the ultraviolet. We compute the one-loop contributions to S from the Higgs sector and show that they are logarithmically dependent on the cutoff of the theory. We discuss the renormalization of S, as well as the implications for bounds from electroweak precision measurements on these models. We argue that, although in principle the choice of renormalization condition could eliminate the S parameter constraint, a more consistent condition would still result in a large and positive S. On the other hand, we show that the dependence on the Higgs mass in S can be entirely eliminated by the renormalization procedure, making it impossible in these theories to extract a Higgs mass bound from electroweak precision constraints.
Abstract:
We study the production and signatures of doubly charged Higgs bosons (DCHBs) in the process γγ → H--H++ at the e-e+ International Linear Collider and the CERN Linear Collider, where the intermediate photons are given by the Weizsäcker-Williams and laser-backscattering distributions.
Abstract:
Conventional control strategies used in shunt active power filters (SAPF) employ real-time instantaneous harmonic detection schemes, which are usually implemented with digital filters. This increases the number of current sensors in the filter structure, which results in high cost. Furthermore, these detection schemes introduce time delays which can deteriorate the harmonic compensation performance. Differently from the conventional control schemes, this paper proposes a non-standard control strategy which indirectly regulates the phase currents of the power mains. The reference currents of the system are generated by the dc-link voltage controller and are based on the active power balance of the SAPF system. The reference currents are aligned to the phase angle of the power mains voltage vector, which is obtained by using a dq phase-locked loop (PLL) system. The current control strategy is implemented by an adaptive pole placement control strategy integrated with a variable structure control scheme (VS-APPC). In the VS-APPC, the internal model principle (IMP) of the reference currents is used to achieve zero steady-state tracking error of the power system currents. This forces the phase currents of the mains to be sinusoidal with low harmonic content. Moreover, the current controllers are implemented in the stationary reference frame to avoid transformations to the mains voltage vector reference coordinates. The proposed current control strategy enhances the performance of the SAPF, with fast transient response and robustness to parametric uncertainties. Experimental results are presented to demonstrate the effectiveness of the proposed SAPF control system.
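As a rough illustration of the indirect current-control idea, the sketch below derives the reference-current amplitude from a dc-link voltage controller and aligns three-phase sinusoidal references with a PLL angle. The simple PI controller, its gains, the set-point and the measurements are hypothetical stand-ins and do not reproduce the VS-APPC scheme proposed in the paper.

```python
import numpy as np

class DcLinkPI:
    """Simple PI stand-in for the dc-link voltage controller: maps the dc-link
    voltage error to a reference-current amplitude (hypothetical gains)."""
    def __init__(self, kp, ki, vdc_ref, dt):
        self.kp, self.ki, self.vdc_ref, self.dt = kp, ki, vdc_ref, dt
        self.integral = 0.0

    def amplitude(self, vdc_measured):
        err = self.vdc_ref - vdc_measured
        self.integral += self.ki * err * self.dt
        return self.kp * err + self.integral

def reference_currents(i_amp, theta_pll):
    """Three-phase sinusoidal current references aligned with the PLL angle."""
    return i_amp * np.cos(theta_pll + np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3]))

# One control step with made-up measurements (450 V set-point, 20 kHz control rate).
ctrl = DcLinkPI(kp=0.5, ki=50.0, vdc_ref=450.0, dt=1 / 20_000)
i_ref = reference_currents(ctrl.amplitude(vdc_measured=442.0), theta_pll=0.3)
print(i_ref)
```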