969 results for Standard setting
Abstract:
We combine searches by the CDF and D0 collaborations for a Higgs boson decaying to W+W-. The data correspond to a total integrated luminosity of 4.8 (CDF) and 5.4 (D0) fb-1 of p-pbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron collider. No excess is observed above the background expectation, and the resulting limits on Higgs boson production exclude a standard-model Higgs boson in the mass range 162-166 GeV at the 95% C.L.
Abstract:
We present a search for standard model (SM) Higgs boson production using ppbar collision data at sqrt(s) = 1.96 TeV, collected with the CDF II detector and corresponding to an integrated luminosity of 4.8 fb-1. We search for Higgs bosons produced in all processes with a significant production rate and decaying to two W bosons. We find no evidence for SM Higgs boson production and place upper limits at the 95% confidence level on the SM production cross section (sigma(H)) for values of the Higgs boson mass (m_H) in the range from 110 to 200 GeV. These limits are the most stringent for m_H > 130 GeV and are a factor of 1.29 above the predicted value of sigma(H) for m_H = 165 GeV.
Abstract:
"We report on a search for the standard-model Higgs boson in pp collisions at s=1.96 TeV using an integrated luminosity of 2.0 fb(-1). We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(-> qq/qq('))H(-> bb) decay for Higgs boson masses of 100-150 GeV/c(2) using data from run II at the Fermilab Tevatron. For m(H)=120 GeV/c(2), we exclude cross sections larger than 38 times the standard-model prediction."
Abstract:
We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb or 7.5 to 102 times the standard model expectation for Higgs boson masses from 110 to 150 GeV/c^2, respectively.
Abstract:
In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb-1 of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/-3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/-2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the ttbar+photon cross section to be 0.15 +- 0.08 pb.
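For readers who want the counting-experiment logic behind the last sentence made explicit, the quoted cross section follows the standard form below; the non-top background count and the acceptance times efficiency are not given in the abstract, so they appear here only as symbols:

\sigma_{t\bar{t}\gamma} = \frac{N_{\mathrm{obs}} - N_{\mathrm{non\text{-}top}}}{A \, \varepsilon \int \mathcal{L}\, \mathrm{d}t}

with N_obs = 16 candidate events and an integrated luminosity of 1.9 fb-1 from the text above; A and epsilon denote the acceptance and selection efficiency of the analysis.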
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first one is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second one is orthogonality - the fact that many ways of classifying information are independent of each other and can be applied in combinations. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article two implementations, which have been carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where “best practice” can become “common practice” via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
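To make the semantics/syntax separation concrete, here is a minimal Python sketch of how a fixed-position layer name could be decomposed into the mandatory information categories named above (responsible party, building element, presentation). The field widths and the example code values are illustrative assumptions for this sketch, not values quoted from ISO 13567:

# Illustrative sketch: splitting an ISO 13567-style layer name into the
# three mandatory information categories mentioned in the abstract. Field
# widths and example codes below are assumptions, not the standard's text.

from dataclasses import dataclass

@dataclass
class LayerName:
    agent: str         # party responsible for the information
    element: str       # type of building element shown
    presentation: str  # direct graphics vs. annotation for output drawings

# Assumed fixed-width layout: 2-char agent, 6-char element, 2-char presentation.
FIELD_WIDTHS = [("agent", 2), ("element", 6), ("presentation", 2)]

def parse_layer_name(name: str) -> LayerName:
    """Split a layer name into its mandatory fields by fixed positions."""
    fields, pos = {}, 0
    for field, width in FIELD_WIDTHS:
        fields[field] = name[pos:pos + width]
        pos += width
    return LayerName(**fields)

# Hypothetical layer name: architect ("A-"), a wall element code, drawing
# graphics ("D-").
print(parse_layer_name("A-21----D-"))

Because the semantic categories are orthogonal, a user organisation could swap in its own national element classification for the element field without touching the rest of the layout.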
Abstract:
A low strain shear modulus plays a fundamental role in the estimation of site response parameters. In this study an attempt has been made to develop relationships between standard penetration test (SPT) N values and the low strain shear modulus (G_max). For this purpose, field SPT experiments and multichannel analysis of surface wave data from 38 locations in Bangalore, India, have been used; these data were also used for a seismic microzonation project. The in situ density of each soil layer was evaluated using undisturbed soil samples from the boreholes. Shear wave velocity (V_s) profiles with depth were obtained at the same locations or close to the boreholes. The values of the low strain shear modulus were calculated using the measured V_s and soil density. About 215 pairs of SPT N and G_max values were used for the regression analysis. The differences between regression relations fitted using measured and corrected values were analysed. It is found that uncorrected values of N and modulus give the best fit, with a high regression coefficient, when compared to corrected N and corrected modulus values. This study thus shows a better correlation between measured values of N and G_max than between overburden stress corrected values of N and G_max.
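As a concrete illustration of the two steps described above, the sketch below derives G_max from density and shear wave velocity via the standard low-strain relation G_max = rho * V_s^2, and fits a power-law relation G_max = a * N^b in log-log space. The sample values and the power-law form are assumptions for illustration; they are not the field data or the coefficients fitted in the study:

# Sketch of the computation described above, assuming SI units throughout.
# G_max = rho * V_s**2 is the standard low-strain relation; the power-law
# form and the sample data are illustrative, not the study's fitted results.

import numpy as np

def g_max(rho_kg_m3: np.ndarray, vs_m_s: np.ndarray) -> np.ndarray:
    """Low strain shear modulus (Pa) from soil density and shear wave velocity."""
    return rho_kg_m3 * vs_m_s ** 2

# Hypothetical (N, rho, Vs) triples standing in for the 215 field pairs.
N = np.array([5, 12, 20, 35, 50], dtype=float)
rho = np.array([1750.0, 1800.0, 1850.0, 1900.0, 1950.0])   # kg/m^3
vs = np.array([150.0, 200.0, 250.0, 320.0, 380.0])         # m/s

G = g_max(rho, vs)  # Pa

# Fit G_max = a * N**b by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(N), np.log(G), 1)
a = np.exp(log_a)
print(f"G_max ~ {a:.3g} * N^{b:.2f}  (Pa, uncorrected N)")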
Abstract:
Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck, 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders, and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties.

In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in traditional asset pricing literature.

This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the uses of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.
Abstract:
The starting point of this thesis is the notion that in order for organisations to understand what customers value and how customers experience service, they need to learn about customers. The first and perhaps most important link in an organisation-wide learning process directed at customers is the frontline contact person. Service and sales organisations can only learn about customers if the individual frontline contact persons learn about customers. Even though it is commonly recognised that learning about customers is the basis for an organisation’s success, few contributions within marketing investigate the fundamental nature of the phenomenon as it occurs in everyday customer service. Thus, what learning about customers is and how it takes place in a customer-service setting is an issue that is neglected in marketing research. In order to explore these questions, this thesis presents a socio-cultural approach to understanding learning about customers. Hence, instead of equating learning with cognitive processes in the mind of the frontline contact person, or with organisational information processing, the interactive, communication-based, socio-cultural aspect of learning about customers is brought to the fore. Consequently, the theoretical basis of the study can be found both in socio-cultural and practice-oriented lines of reasoning, as well as in the fields of service and relationship marketing. As it is argued that learning about customers is an integrated part of everyday practices, it is also clear that it should be studied in a naturalistic and holistic way as it occurs in a customer-service setting. This calls for an ethnographic research approach, which involves direct, first-hand experience of the research setting during an extended period of time. Hence, the empirical study employs participant observations, informal discussions and interviews among car salespersons and service advisors at a car retailing company. Finally, as a synthesis of theoretically and empirically gained understanding, a set of concepts is developed and integrated into a socio-cultural model of learning about customers.
Abstract:
In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers, and price sensitive and lead time sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset the prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price sensitive customers, and inventory replenishments.
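For a flavour of the no information case, below is a minimal Q-learning pricebot sketch in Python. The state encoding, discrete price grid, demand model and cost parameters are illustrative assumptions for this sketch, not the paper's formulation:

# Minimal Q-learning pricebot: the seller sees only its own
# (inventory, backorders) state and picks one of a few price levels.

import random
from collections import defaultdict

PRICES = [8.0, 9.0, 10.0, 11.0]   # discrete price actions (assumed grid)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = defaultdict(float)  # Q[(state, action_index)] -> estimated value

def choose_price(state) -> int:
    """Epsilon-greedy selection over the discrete price grid."""
    if random.random() < EPS:
        return random.randrange(len(PRICES))
    return max(range(len(PRICES)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state) -> None:
    """One-step Q-learning update toward discounted cumulative profit."""
    best_next = max(Q[(next_state, a)] for a in range(len(PRICES)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy interaction loop: demand is more likely at lower prices (assumed model).
state = (5, 0)  # (inventory level, backorders)
for _ in range(10_000):
    a = choose_price(state)
    sold = random.random() < 1.0 - 0.08 * PRICES[a]    # price-sensitive demand
    inv = max(state[0] - 1, 0) if sold else state[0]
    reward = PRICES[a] - 6.0 if sold else 0.0          # unit cost 6.0 (assumed)
    next_state = (inv if inv > 0 else 5, 0)            # toy instant replenishment
    update(state, a, reward, next_state)
    state = next_state

The derivative-following baseline mentioned above would instead nudge the price in the direction that last increased profit, with no value function at all, which is why it is easily outperformed once demand is stochastic.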
Abstract:
This paper discusses mentoring from the mentors' point of view in an entrepreneurial setting. The aim of the paper is to present why mentoring can be considered important for entrepreneurs who are mentors and under what circumstances mentoring is valuable for the mentor. A pilot mentor programme was conducted among women entrepreneurs during 1998, and a study was made in order to examine the mentors' perception of the programme. Firstly, mentoring and entrepreneurship in Finland are discussed briefly. Secondly, the results of the study are presented. The results of the study show that mentoring can be valuable for the mentors on both a vocational and a personal level. However, it is important to choose the mentees of the programme on a rather strict basis; the results demonstrate a need to be careful in choosing mentees.
Abstract:
We propose a novel algorithm for placement of standard cells in VLSI circuits based on an analogy of this problem with neural networks. By employing some of the organising principles of these nets, we have attempted to improve the behaviour of the bipartitioning method as proposed by Kernighan and Lin. Our algorithm yields better quality placements compared with the above method, and also makes the final placement independent of the initial partition.
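For reference, the Kernighan-Lin baseline that the abstract builds on swaps the cell pair with the highest gain, locks swapped cells, and finally keeps only the best prefix of the swap sequence. Below is a minimal Python sketch under the assumption that the netlist is modelled as a weighted graph; the neural-network-inspired refinements of the paper are not shown:

def kl_pass(adj, A, B):
    """One Kernighan-Lin improvement pass over a two-way partition.

    adj: dict mapping each cell to a dict of neighbour -> connection weight.
    A, B: the two halves of the partition (equal-sized sets of cells).
    Returns improved copies of (A, B).
    """
    A, B = set(A), set(B)

    def d(v, own, other):
        # Gain measure D(v) = external cost - internal cost of cell v.
        return (sum(w for u, w in adj[v].items() if u in other)
                - sum(w for u, w in adj[v].items() if u in own))

    locked, gains, swaps = set(), [], []
    for _ in range(min(len(A), len(B))):
        # Pick the unlocked pair (a, b) whose swap reduces cut cost the most.
        best = None
        for a in A - locked:
            for b in B - locked:
                g = d(a, A, B) + d(b, B, A) - 2 * adj[a].get(b, 0)
                if best is None or g > best[0]:
                    best = (g, a, b)
        g, a, b = best
        A.remove(a); B.remove(b); A.add(b); B.add(a)   # tentative swap
        locked |= {a, b}
        gains.append(g)
        swaps.append((a, b))

    if not gains:
        return A, B
    # Keep only the prefix of swaps with the largest cumulative gain.
    prefix = [sum(gains[:i + 1]) for i in range(len(gains))]
    k = max(range(len(prefix)), key=prefix.__getitem__) + 1
    if prefix[k - 1] <= 0:
        k = 0   # no improving prefix: undo every tentative swap
    for a, b in swaps[k:]:
        A.remove(b); B.remove(a); A.add(a); B.add(b)   # undo rejected swaps
    return A, B

Repeating kl_pass until no pass yields a positive prefix gain gives the full heuristic; the abstract's neural-network analogy modifies this baseline's behaviour and its sensitivity to the initial partition.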
Abstract:
This thesis explores the particular framework of evidentiary assessment of three selected appellate national asylum procedures in Europe and discusses the relationship between these procedures, on the one hand, and between these procedures and other legal systems, including the EU legal order and international law, on the other. A theme running throughout the thesis is the EU strivings towards approximation of national asylum procedures, and my study analyses the evidentiary assessment of national procedures with the aim of pinpointing similarities and differences, and the influences which affect these distinctions. The thesis first explores the frames construed for national evidentiary solutions by studying the object of decision-making and the impact of legal systems outside the national. Second, the study analyses the factual evidentiary assessment of three national procedures: German, Finnish and English. Third, the study explores the interrelationship between these procedures and the legal systems influencing them and poses questions in relation to the strivings of the EU and methods of convergence. The thesis begins by stating the framework and starting points for the research. It moves on to establish keys of comparison concerning four elements of evidentiary assessment that are of importance to any appellate asylum procedure, and that can be compared between national procedures, on the one hand, and between international, regional and national frameworks, on the other. Four keys of comparison are established: the burden of proof, demands for evidentiary robustness, the standard of proof and requirements for the methods of evidentiary assessment. These keys of comparison are then identified in three national appellate asylum procedures, and in order to come to conclusions on the evidentiary standards of the appellate asylum procedures, relevant elements of the asylum procedures in general are presented. Further, institutional, formal and procedural matters which have an impact on the evidentiary standards in the national appellate procedures are analysed. From there, the thesis moves on to establish the relationship between national evidentiary standards and the legal systems which affect them, and gives reasons for similarities and divergences. Further, the thesis studies the impact of the national frameworks on the regional and international level. Lastly, the dissertation makes a de lege ferenda survey of the relationship between EU developments, the goal of harmonization in relation to national asylum procedures and the particular feature of evidentiary standards in national appellate asylum procedures.
Methodology
The thesis follows legal dogmatic methods. The aim is to analyse legal norms and legal constructions and give them content and context. My study takes as its outset an understanding of the purposes of legal research, also regarding evidence and asylum, to determine the contents of valid law through analysis and systematization. However, as evidentiary issues traditionally are normatively vaguely defined, a strict traditional normative dogmatic approach is not applied. For the same reason a traditionalist and strict legal positivism is not applied. The dogmatics applied to the analysis of the study is supported by practical analysis. The aim is not only to reach conclusions concerning the contents of legal norms and the requirements of law, but also to study the use and practical functioning of these norms, giving them a practical context.
Further, the study relies on a comparative method. A functionalist comparative method is employed and keys of comparison are found in the evidentiary standards of three selected national appellate asylum procedures. The functioning equivalences of German, Finnish and English evidentiary standards of appellate asylum procedures are compared, and they are positioned in a European and international legal setting.
Research Results
The thesis provides results regarding the use of evidence in national appellate asylum procedures. It is established that evidentiary solutions do indeed impact on the asylum procedure and that the results of the procedure are dependent on the evidentiary solutions made in the procedures. Variations in, amongst other things, the interpretation of the burden of proof, the applied standard of proof and the method for determining evidentiary value are analysed. It is established that national impacts play an important role in the adaptation of national appellate procedures to external requirements. Further, it is established that the impact of national procedures on the international framework as well as on EU law varies between the studied countries, partly depending on the position of the Member State in legislative advances at the EU level. In this comparative study it is further established that the impact of EU requirements concerning evidentiary issues may have positive as well as negative effects with regard to the desired harmonization. It is also concluded that harmonization using means of convergence that primarily target legal frameworks may not in all instances be optimal in relation to evidentiary standards, and that more varied and pragmatic means of convergence must be introduced in order to secure harmonization also in terms of evidence. To date, legal culture and traditions seem to prevail over direct efforts at procedural harmonization.