22 results for standard setting

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

Finnish education policy, educational legislation and the entire education system changed significantly during the 1990s as part of a general restructuring of public administration. There has been a clear divergence from the former tradition of a system of regulation, founded on detailed legislation and the principle of equality. The new governance, which is based more on individual choice, efficiency and evaluation, emphasizes that the development of a high standard of education is a necessity in the light of global competition. This study explores the legislative process regarding education policy in the Finnish Parliament during the 1990s, and highlights in particular how the international discourse on education policies was restructured in the context of Finnish legislation. The research material consists of all the public parliamentary documents relating to education, including government proposals, minutes from the discussions in the chamber and archive material (final protocols, reports and statements) for the Committee for Education and Culture. The discourse on the process of drafting and passing education legislation is modelled on three interrelated policy technologies (market, management and performance), which are understood here as mechanisms connecting general political ideas to normative legislation. The changes in the regulation of education were part of a general public administration reform instigated during the mid-1980s. The research results show that during the left-right coalition cabinet of PM Harri Holkeri, new policy technologies affected the parliamentary discourse on education policy. In particular, the preconditions for the management of education changed as a result of numerous demands to deregulate and delegate decision-making authority to the local and school levels while rendering the whole education system more effective. 
At the turn of the decade, market-type mechanisms were more indirectly manifested in the forms of individuality and freedom of choice, which were reflected, for example, in proposals to “lower the hurdles” separating general from vocational secondary education with a view to encouraging students to select courses from other educational establishments, in addition to relaxing the requirements for establishing private schools and abolishing a hundred-year-old strict national catchment-area system. Later in the 1990s, after the subjects, players and methods of evaluation had been more precisely defined, evaluation based on performance would result in the active measurement of the attainment of set objectives. In the spring of 1991, from the outset of PM Esko Aho's right-centre coalition cabinet, the education budget suffered cutbacks as a result of a global recession, and this influenced the legislative work of, and discourses in, Parliament. Representatives of the parties in power regarded the recession solely as an external factor that was remote from the political arena. In their view, the education system should rise to the challenge by ensuring the efficient and innovative use of the resources available and by developing new forms of indicators for evaluating results. Representatives of the opposition rejected the cabinet's standpoint on the recession, criticized the measures taken by pointing out the harmful effects of constant budget cuts, and argued that the government had made political capital out of the recession by using it as an opportunity to give more room to market, management and performance technologies within the Finnish education system. Criticism of the new education policy became even stronger during PM Paavo Lipponen's first “rainbow” coalition cabinet, with critical views being expressed not only by the opposition but also by representatives within the government. 
Representatives from the left demanded legislative restrictions and measures to relieve the presumed negative effects of market, management and performance in the name of educational equality. The new management-by-results steering method within the university sector and the introduction of commercial education services in compulsory education were fiercely criticized. The argument over “setting outer limits”, including, for example, demands for more detailed legislation and earmarked state subsidies, was characteristic of Parliament's legislative discourse in the latter part of the 1990s.

Keywords: education policy, education legislation, Parliament of Finland

Relevance:

20.00%

Publisher:

Abstract:

Hereditary nonpolyposis colorectal cancer (HNPCC) is an inherited cancer predisposition syndrome characterized by early-onset colorectal cancer (CRC) and several other extra-colonic cancers, most commonly endometrial cancer (EC) and gastric cancer. Our aim was to evaluate the efficiency and results of the ongoing CRC and EC surveillance programs and to investigate the grounds for future gastric cancer screening by comparing the gastric biopsies of mutation-positive and mutation-negative siblings in search of premalignant lesions. We also compared a new surveillance method, computerized tomographic colonoscopy (CTC), with optic colonoscopy. The patient material consisted of 579 family members from 111 Finnish HNPCC families, almost all harboring a known mismatch repair gene mutation. The efficacy of CRC and EC surveillance programs on HNPCC patients was evaluated by comparing the stage and survival of cancer cases detected with surveillance versus without. The performance of the new technique, CTC, was explored using a same-day colonoscopy as a reference standard. The use of intrauterine aspiration biopsies for EC surveillance was introduced for the first time in an HNPCC setting. Upper GI endoscopies were performed and biopsies taken from mutation carriers and their mutation-negative siblings. The present surveillance program for CRC proved to be efficient. The CRC cases detected by surveillance were at a significantly more favorable stage than those in the non-surveilled group, and this advantage was reflected in a significantly higher CRC-specific survival in the surveilled group. The performance of CTC was found insufficient for polyp detection in this population, in which every polyp, no matter the size, should be detected and removed. Colonoscopy was confirmed as a better surveillance modality than CTC. 
We could not observe any of the assumed differences between the gastric mucosa of mutation carriers and that of their mutation-negative siblings, and no cases of gastric cancer were detected. The results gave no support for gastric surveillance. The EC surveillance program (transvaginal ultrasound and intrauterine biopsy every 2-3 years) seemed to be efficient. It yielded several asymptomatic cancer cases and premalignant lesions. The stage distribution of the endometrial cancers in the group under surveillance tended to be more favorable than that of the mutation-positive, symptomatic EC patients who had no surveillance. None of the surveilled EC patients died of EC, compared to six of the non-surveilled patients during the follow-up. The improvement was, however, not yet statistically significant. Another observation was the good performance of endometrial aspiration biopsies, used in this setting for the first time.

Relevance:

20.00%

Publisher:

Abstract:

Thrombin is a multifunctional protease which has a central role in the development and progression of coronary atherosclerotic lesions and is a possible mediator of myocardial ischemia-reperfusion injury. Its generation and procoagulant activity are greatly upregulated during cardiopulmonary bypass (CPB). On the other hand, activated protein C, a physiologic anticoagulant that is activated by thrombomodulin-bound thrombin, has been beneficial in various models of ischemia-reperfusion. Our aim in this study was therefore to test whether thrombin generation or protein C activation during coronary artery bypass grafting (CABG) associates with postoperative myocardial damage or hemodynamic changes. To further investigate the regulation of thrombin during CABG, we tested whether preoperative thrombophilic factors associate with increased CPB-related generation of thrombin or its procoagulant activity. We also measured the anticoagulant effects of heparin during CPB with a novel coagulation test, prothrombinase-induced clotting time (PiCT), and compared the performance of this test with the present standard of laboratory-based anticoagulation monitoring. One hundred patients undergoing elective on-pump CABG were studied prospectively. A progressive increase in markers of thrombin generation (F1+2), fibrinolysis (D-dimer) and fibrin formation (soluble fibrin monomer complexes) was observed during CPB; this increase was further propagated by reperfusion after myocardial ischemia and peaked after the neutralization of heparin with protamine. Thrombin generation during reperfusion after CABG associated with postoperative myocardial damage and increased pulmonary vascular resistance. Activated protein C levels increased only slightly during CPB before the release of the aortic clamp, but reperfusion, and more markedly heparin neutralization, caused a massive increase in activated protein C levels. 
Protein C activation was clearly delayed in relation to both thrombin generation and fibrin formation. Even though activated protein C associated dynamically with postoperative hemodynamic performance, it did not associate with postoperative myocardial damage. Preoperative thrombophilic variables did not associate with perioperative thrombin generation or its procoagulant activity. Therefore, our results do not favor routine thrombophilia screening before CABG. There was poor agreement between PiCT and other measurements of heparin effects in the setting of CPB. However, lower heparin levels during CPB associated with inferior thrombin control and high heparin levels during CPB associated with fewer perioperative transfusions of blood products. Overall, our results suggest that hypercoagulation after CABG, especially during reperfusion, might be clinically important.

Relevance:

20.00%

Publisher:

Abstract:

Arguments arising from quantum mechanics and gravitation theory, as well as from string theory, indicate that the description of space-time as a continuous manifold is not adequate at very short distances. An important candidate for the description of space-time at such scales is noncommutative space-time, where the coordinates are promoted to noncommuting operators. The study of quantum field theory in noncommutative space-time thus provides an interesting interface where ordinary field-theoretic tools can be used to study the properties of quantum space-time. The three original publications in this thesis encompass various aspects of the still-developing area of noncommutative quantum field theory, ranging from fundamental concepts to model building. One of the key features of noncommutative space-time is the apparent loss of Lorentz invariance, which has been addressed in different ways in the literature. One recently developed approach is to eliminate the Lorentz-violating effects by integrating over the parameter of noncommutativity. Fundamental properties of such theories are investigated in this thesis. Another issue addressed is model building, which is difficult in the noncommutative setting due to severe restrictions on the possible gauge symmetries imposed by the noncommutativity of the space-time. Possible ways to relieve these restrictions are investigated and applied, and a noncommutative version of the Minimal Supersymmetric Standard Model is presented. While putting the results obtained in the three original publications into their proper context, the introductory part of this thesis aims to provide an overview of the present situation in the field.
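The promotion of coordinates to noncommuting operators mentioned above is conventionally expressed by the commutation relation and the Moyal star product (a standard formulation in this field, not specific to this thesis):

```latex
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\theta^{\mu\nu},
\qquad
(f \star g)(x) = f(x)\,
\exp\!\left(\tfrac{i}{2}\,\theta^{\mu\nu}
\overleftarrow{\partial}_{\mu}\overrightarrow{\partial}_{\nu}\right) g(x),
```

where θ^{μν} is a constant antisymmetric matrix parameterizing the noncommutativity; replacing ordinary products of fields by the star product is what induces the restrictions on gauge symmetries discussed in the abstract.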

Relevance:

20.00%

Publisher:

Abstract:

The need for mutual recognition of accurate measurement results made by competent laboratories has been very widely accepted at the international level, e.g. at the World Trade Organization. A partial solution to the problem was provided by the International Committee for Weights and Measures (CIPM) in setting up the Mutual Recognition Arrangement (CIPM MRA), which has been signed by National Metrology Institutes (NMIs) around the world. The core idea of the CIPM MRA is a global arrangement for the mutual acceptance of the calibration certificates of National Metrology Institutes. The CIPM MRA covers all the fields of science and technology for which NMIs maintain national standards. The infrastructure for the metrology of the gaseous compounds carbon monoxide (CO), nitrogen monoxide (NO), nitrogen dioxide (NO2), sulphur dioxide (SO2) and ozone (O3) has been constructed at the national level at the Finnish Meteorological Institute (FMI). The calibration laboratory at the FMI was set up to provide calibration services for air quality measurements and to fulfil the requirements of a metrology laboratory. The laboratory participated, with good results, in the first comparison project aimed at defining the state of the art in the preparation and analysis of the gas standards used by European metrology institutes and calibration laboratories in the field of air quality. To confirm the competence of the laboratory, an international external surveillance study was conducted at the laboratory. Based on this evidence, the Centre for Metrology and Accreditation (MIKES) designated the calibration laboratory at the FMI as a National Standard Laboratory in the field of air quality. With this designation, the MIKES-FMI Standards Laboratory became a member of the CIPM MRA, and Finland was brought into the internationally accepted forum in the field of gas metrology. 
The concept of ‘once measured, everywhere accepted’ is the leading theme of the CIPM MRA. The calibration service of the MIKES-FMI Standards Laboratory realizes the SI traceability system for these gas components and was constructed to meet the requirements of the European air quality directives. In addition, all the relevant uncertainty sources that influence the measurement results have been evaluated, and uncertainty budgets for the measurement results have been created.
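Uncertainty budgets of this kind are conventionally combined under the GUM approach: standard uncertainties of independent sources add in quadrature, and an expanded uncertainty is reported with a coverage factor k = 2 (about 95 % coverage). A minimal sketch, with invented source names and values rather than the laboratory's actual budget:

```python
import math

# Illustrative uncertainty budget for a gas-standard calibration.
# Each entry: (source, relative standard uncertainty). Values invented.
budget = [
    ("reference gas standard", 0.005),
    ("analyser repeatability", 0.003),
    ("dilution system",        0.002),
]

# Combined standard uncertainty: root sum of squares (GUM, uncorrelated inputs).
u_c = math.sqrt(sum(u**2 for _, u in budget))

# Expanded uncertainty with coverage factor k = 2.
U = 2 * u_c
print(f"combined u_c = {u_c:.4f}, expanded U (k=2) = {U:.4f}")
```

The root-sum-of-squares form assumes the sources are uncorrelated; correlated sources would need covariance terms.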

Relevance:

20.00%

Publisher:

Abstract:

We report on a search for the standard-model Higgs boson in ppbar collisions at sqrt(s) = 1.96 TeV using an integrated luminosity of 2.0 fb^-1. We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(->qq/qq')H(->bb) decay for Higgs boson masses of 100-150 GeV/c^2 using data from Run II at the Fermilab Tevatron. For m_H = 120 GeV/c^2, we exclude cross sections larger than 38 times the standard-model prediction.

Relevance:

20.00%

Publisher:

Abstract:

We combine searches by the CDF and D0 collaborations for a Higgs boson decaying to W+W-. The data correspond to an integrated total luminosity of 4.8 (CDF) and 5.4 (D0) fb-1 of p-pbar collisions at sqrt{s}=1.96 TeV at the Fermilab Tevatron collider. No excess is observed above background expectation, and resulting limits on Higgs boson production exclude a standard-model Higgs boson in the mass range 162-166 GeV at the 95% C.L.

Relevance:

20.00%

Publisher:

Abstract:

We present a search for standard model (SM) Higgs boson production using ppbar collision data at sqrt(s) = 1.96 TeV, collected with the CDF II detector and corresponding to an integrated luminosity of 4.8 fb-1. We search for Higgs bosons produced in all processes with a significant production rate and decaying to two W bosons. We find no evidence for SM Higgs boson production and place upper limits at the 95% confidence level on the SM production cross section (sigma(H)) for values of the Higgs boson mass (m_H) in the range from 110 to 200 GeV. These limits are the most stringent for m_H > 130 GeV and are a factor of 1.29 above the predicted value of sigma(H) for m_H = 165 GeV.
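The 95% confidence-level limits quoted in these abstracts rest on far more elaborate statistical machinery (profiled systematic uncertainties, many combined channels), but the core idea can be sketched as a one-bin Poisson counting limit. The event counts below are invented for illustration:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95, step=1e-3):
    """Smallest signal s such that P(N <= n_obs | s + bkg) <= 1 - cl.

    Textbook counting-experiment limit; real analyses like those above
    also profile systematics and combine many search channels.
    """
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1 - cl:
        s += step
    return s

# Illustrative counts only: observe 3 events over an expected background of 1.0.
print(round(upper_limit(3, 1.0), 2))
```

Dividing such a limit on the signal yield by efficiency and luminosity turns it into a limit on the production cross section, which is then quoted relative to the standard-model prediction.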


Relevance:

20.00%

Publisher:

Abstract:

We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb or 7.5 to 102 times the standard model expectation for Higgs boson masses from 110 to 150 GeV/c^2, respectively.

Relevance:

20.00%

Publisher:

Abstract:

In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb-1 of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/-3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/-2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top quark production, we estimate the ttbar+photon cross section to be 0.15 +- 0.08 pb.
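The closing estimate follows the standard counting relation sigma = (N_obs - N_bkg) / (efficiency x luminosity). In the sketch below, only the observed count of 16 and the luminosity of 1.9 fb^-1 come from the abstract; the non-top background and signal efficiency are invented, since the abstract does not quote them:

```python
def cross_section(n_obs, n_bkg, eff, lumi_pb):
    """Counting-experiment cross section: (N_obs - N_bkg) / (eff * L)."""
    return (n_obs - n_bkg) / (eff * lumi_pb)

# 16 observed candidates and 1.9 fb^-1 = 1900 pb^-1 are from the abstract;
# the non-top expectation (8.0) and efficiency (2.8%) are hypothetical.
sigma = cross_section(n_obs=16, n_bkg=8.0, eff=0.028, lumi_pb=1900.0)
print(f"sigma = {sigma:.3f} pb")
```

A full measurement would also propagate the Poisson uncertainty on the counts and the systematic uncertainties on efficiency and luminosity into the quoted error.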

Relevance:

20.00%

Publisher:

Abstract:

Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality - the fact that many ways of classifying information are independent of each other and can be applied in combinations. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. 
Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
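The default fixed-position syntax described above can be illustrated by composing a layer name from the mandatory information categories (responsible party, building element, presentation). The field widths, codes and padding character below are assumptions for illustration, not the normative ISO 13567 values:

```python
# Sketch of composing a fixed-position layer name from the mandatory
# ISO 13567 fields. Field widths and codes here are assumptions for
# illustration only, not the normative ones.
MANDATORY_FIELDS = [("agent", 2), ("element", 6), ("presentation", 2)]

def layer_name(agent, element, presentation, pad="-"):
    """Concatenate fixed-width fields, padding short codes on the right."""
    values = {"agent": agent, "element": element, "presentation": presentation}
    parts = []
    for name, width in MANDATORY_FIELDS:
        code = values[name]
        if len(code) > width:
            raise ValueError(f"{name} code {code!r} exceeds width {width}")
        parts.append(code.ljust(width, pad))
    return "".join(parts)

print(layer_name("A", "WALL", "E"))  # -> 'A-WALL--E-'
```

Fixed positions are what make the "well-defined subsets" principle workable: a tool that understands only some fields can still parse every conforming name by character offset.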

Relevance:

20.00%

Publisher:

Abstract:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck, 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders, and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. 
I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the uses of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.
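The kind of tick-data measures such microstructure work builds on can be illustrated with a toy computation of two standard quantities: the quoted spread (ask minus bid) and the effective spread (twice the distance between the trade price and the prevailing midquote). The quotes and trades below are invented for illustration:

```python
# Toy tick data: each quote is (bid, ask); each trade is
# (price, index of the prevailing quote). Values are invented.
quotes = [
    (10.00, 10.04),
    (10.01, 10.05),
]
trades = [
    (10.04, 0),  # buyer-initiated at the ask
    (10.02, 1),  # executed inside the spread
]

for price, i in trades:
    bid, ask = quotes[i]
    mid = (bid + ask) / 2
    quoted_spread = ask - bid
    effective_spread = 2 * abs(price - mid)  # twice distance from midquote
    print(f"quoted={quoted_spread:.2f} effective={effective_spread:.2f}")
```

An effective spread below the quoted spread, as for the second trade, indicates execution inside the quotes; this is one of the simplest ways tick data reveal the "multitude of different prices" discussed above.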