41 results for analytic method
Abstract:
By detecting leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and of the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which, in addition to radiation hardness up to 5×10^15 neutrons/cm^2, offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but preserves the properties of the 3D detector design required at the LHC and in other imaging applications.
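For reference, the missing mass in the exclusive process p+p → p+X+p is the standard invariant built from the four-momenta of the incoming beam protons ($p_1$, $p_2$) and of the two measured leading protons ($p_1'$, $p_2'$); with $\xi_1$, $\xi_2$ the fractional momentum losses of the protons and $\sqrt{s}$ the collision energy, it obeys the familiar approximate relation

$$ M_X^2 = (p_1 + p_2 - p_1' - p_2')^2 \approx \xi_1 \xi_2 s , $$

which is what allows a missing-mass scan for new states from the leading-proton measurement alone. (This relation is standard for central exclusive production and is not spelled out in the abstract itself.)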
Abstract:
Controlled nuclear fusion is one of the most promising sources of energy for the future. Before this goal can be achieved, one must be able to control the enormous energy densities present in the core plasma of a fusion reactor. In order to predict the evolution, and thereby the lifetime, of different plasma-facing materials under reactor-relevant conditions, the interaction of atoms and molecules with plasma-facing first-wall surfaces has to be studied in detail. In this thesis, the fundamental sticking and erosion processes of carbon-based materials, the nature of hydrocarbon species released from plasma-facing surfaces, and the evolution of the components under cumulative bombardment by atoms and molecules have been investigated by means of molecular dynamics simulations using both analytic potentials and a semi-empirical tight-binding method. The sticking cross-section of CH3 radicals at unsaturated carbon sites on diamond (111) surfaces is observed to decrease with increasing angle of incidence, a dependence which can be described by a simple geometrical model. The simulations furthermore show the sticking cross-section of CH3 radicals to be strongly dependent on the local neighborhood of the unsaturated carbon site. The erosion of amorphous hydrogenated carbon surfaces by helium, neon, and argon ions in combination with hydrogen at energies ranging from 2 to 10 eV is studied using both non-cumulative and cumulative bombardment simulations. The results show no significant differences between the sputtering yields obtained from bombardment simulations with different noble gas ions. The final simulation cells from the 5 and 10 eV ion bombardment simulations, however, show marked differences in surface morphology. In further simulations, the behavior of amorphous hydrogenated carbon surfaces under bombardment with D^+, D_2^+, and D_3^+ ions in the energy range from 2 to 30 eV has been investigated. The total chemical sputtering yields indicate that molecular projectiles lead to larger sputtering yields than atomic projectiles. Finally, the effect of hydrogen ion bombardment of both crystalline and amorphous tungsten carbide surfaces is studied. Prolonged bombardment is found to lead to the formation of an amorphous tungsten carbide layer, regardless of the initial structure of the sample. In agreement with experiment, preferential sputtering of carbon is observed in both the cumulative and non-cumulative simulations.
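For reference, the (chemical) sputtering yield quoted in such bombardment simulations is the standard ratio of eroded target atoms to incident projectiles,

$$ Y = \frac{N_{\mathrm{eroded}}}{N_{\mathrm{incident}}} , $$

so that, for example, a yield of 0.01 means one carbon atom removed per hundred incident ions. (This is the standard definition in the field; the abstract itself does not spell it out.)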
Abstract:
Holistic physics education at the upper secondary level based on an optional physics course
Keywords: physics education, education, holistic, curriculum, world view, values
A physics teacher's task is to put into practice all the goals of the curriculum. In this research, holistic physics education means teaching in which both the school's common educational goals and the goals particular to the physics curriculum are taken into account. These involve knowledge, skills, and personal value and attitude goals. The research task was to clarify how educational goals involving students' values and attitudes can be pursued through the subject content of physics: how does the physics teacher communicate the modern world view through the content of the physics class? The goal of this research was to improve teaching, to find new points of view, and to widen the perspective on how physics is taught. The teacher, who also acted as the researcher, planned and delivered an optional course in which she could study the possibilities of holistic physics education. In 2001-2002, ten girls and two boys of the 9th grade participated in this elective course. Following the principles of action research, the teacher-researcher also reflected on her own teaching. The research method was content analysis, covering both the student feedback and the relevant aspects of the teacher's knowledge needed for planning and giving the physics lessons: subject matter knowledge, the curriculum, didactics, and the teacher's pedagogical content knowledge. Didactics here includes knowledge of the learning process, students' motivation, the specific features of physics didactics, and research on physics education. Among other things, the researcher organized the contents of the curriculum, abstracted sentences into keywords, and drew concept maps from them. The concept maps, for instance the map of educational goals and the map of the essence of physics, were tools for studying the contents included in holistic physics education. Conclusions were also reached concerning the physics content areas through which these goals can be achieved. According to this research, the content areas supporting holistic physics education are: perception, the essence of science, the development of science, new research topics, and interactions in physics. The starting point of teaching should be connected with the students' life experiences, and the approach to teaching should be broadly relevant to those experiences. The teacher-researcher observed and analyzed the effects of the experimental physics course through the lens of holistic physics education. The students reported that the goals of holistic physics education were achieved in the course. The students' discourse indicated that in the experimental course they could express their opinions and feelings and make proposals and evaluations. The students felt they had opportunities to influence the content of the course, and they considered the philosophical physics course interesting: it awakened questions, increased their self-esteem, and helped them become more aware of their world views. The students' analytic skills developed in the interactive learning environment. The physics teacher needs broad knowledge for planning his or her teaching, which is examined in this research through the content maps constructed as teaching tools.
In holistic physics education the teacher needs an open and curious mind and good interaction skills. This research indicates the importance of physics teaching in developing attitudes and values alongside the subject matter in the classroom. Different points of view on human life make it possible for students to construct a modern world view and to develop analytic skills and self-esteem, and thus help them in learning. Broad, overarching points of view also help to transfer knowledge into practice. Since such content is not covered by teaching only the physics included in the standard curriculum, supplementary teaching material that includes such topics is needed.
Abstract:
The research focuses on the client plan in health care and social work with families with children. The purpose of the plan is to set objectives for helping the client and to assist in coordinating the ever-increasing multi-professional work. In general, the plan is understood in terms of assignments and as a contract specifying what to do in client cases. On this view, the plan is externalized into a written document. Instead of understanding the plan as a tool that stabilizes the objectives of action, documents them and facilitates evaluation, the client plan is conceptualized in this study as a practice. Such a practice mediates client work while itself also being a process of action directed at an object whose gradual emergence and definition is the central question in multi-professional collaboration with a client. The plan is examined empirically in a non-stabilized state, which leads to a research methodology based on the dynamics between stabilization and emerging, non-stabilized entities: the co-creation and formulation of practice and context. The theoretical approach of the research is the micro-analytic approach of activity theory (Engeström R. 1999b). Building on this, the research develops a method of qualitative analysis which follows an emerging, multi-voiced object. The research data consist of videotaped client meetings with three families, interviews with the clients and the workers, and client documents used to follow the client processes for at least one year. The research questions are as follows: 1) How is the client plan constructed between the client and different professional agents? 2) How are meanings constructed in a client-centred plan? 3) What are the elements of client-employee relationships that support the co-configuration necessitated by changes in the client's everyday life? The study shows that the setting of objectives was limited by the palette of institutional services, which meant that the clients' interpretations, and the meanings they gave to the kind of help required, were left out of the plan. Conceptually, the distinctions between client-centred and client-specific ways of working, as well as an action-based working method, are addressed. Central to this action-based approach are construing the everyday life of the client, recognizing different meanings and analyzing them together with the client, and focusing attention on developing the prerequisites for the clients' social agency. The research outlines the elements for creating an action-based client plan. Key words: client plan, user perspective, multi-voiced meaning, multi-professional social work with children and families, agency
Abstract:
We present a search for associated production of the standard model (SM) Higgs boson and a $Z$ boson where the $Z$ boson decays to two leptons and the Higgs decays to a pair of $b$ quarks in $p\bar{p}$ collisions at the Fermilab Tevatron. We use event probabilities based on SM matrix elements to construct a likelihood function of the Higgs content of the data sample. In a CDF data sample corresponding to an integrated luminosity of 2.7 fb$^{-1}$ we see no evidence of a Higgs boson with a mass between 100 GeV$/c^2$ and 150 GeV$/c^2$. We set 95% confidence level (C.L.) upper limits on the cross-section for $ZH$ production as a function of the Higgs boson mass $m_H$; the limit is 8.2 times the SM prediction at $m_H = 115$ GeV$/c^2$.
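A schematic form of the likelihood referred to above (the exact CDF construction may differ in normalization and background treatment): with per-event probabilities $P_{ZH}$ and $P_{\mathrm{bkg}}$ built from the corresponding matrix elements, the likelihood as a function of the signal fraction $s$ of the $N$-event sample is

$$ \mathcal{L}(s) = \prod_{i=1}^{N} \left[ s\, P_{ZH}(x_i) + (1-s)\, P_{\mathrm{bkg}}(x_i) \right] , $$

and the 95% C.L. upper limits on the $ZH$ cross section follow from the values of $s$ that the data exclude.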
Abstract:
In this thesis I examine one commonly used class of methods for the analytic approximation of cellular automata, the so-called local cluster approximations. This class subsumes the well-known mean-field and pair approximations, as well as higher-order generalizations of these. While a straightforward method known as Bayesian extension exists for constructing cluster approximations of arbitrary order on one-dimensional lattices (and in certain other cases), for higher-dimensional systems the construction of approximations beyond the pair level becomes more complicated due to the presence of loops. In this thesis I describe the one-dimensional construction as well as a number of approximations suggested for higher-dimensional lattices, comparing them against a number of consistency criteria that such approximations could be expected to satisfy. I also outline a general variational principle for constructing consistent cluster approximations of arbitrary order with minimal bias, and show that the one-dimensional construction indeed satisfies this principle. Finally, I apply this variational principle to derive a novel consistent expression for symmetric three-cell cluster frequencies as estimated from pair frequencies, and use this expression to construct a quantitatively improved pair approximation of the well-known lattice contact process on a hexagonal lattice.
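For concreteness, the one-dimensional Bayesian extension mentioned above builds an $n$-cell cluster probability from overlapping pair probabilities (the familiar pair, or Markov, approximation):

$$ P(x_1, x_2, \ldots, x_n) \;\approx\; \frac{\prod_{i=1}^{n-1} P(x_i, x_{i+1})}{\prod_{i=2}^{n-1} P(x_i)} , $$

which reduces to the mean-field form $P(x_1,\ldots,x_n) \approx \prod_i P(x_i)$ when pair correlations are neglected. On higher-dimensional lattices the loops mentioned above prevent such a simple chain construction, which is what motivates the consistency criteria and the variational principle developed in the thesis.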
Abstract:
A precision measurement of the top quark mass m_t is obtained using a sample of ttbar events from ppbar collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m_t and a parameter DJES to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m_t = 173.0 +/- 1.2 GeV/c^2 is measured.
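Schematically (the precise CDF implementation differs in details such as acceptance and normalization), the per-event probability in a matrix element method convolves the differential cross section with transfer functions $W$ relating parton-level quantities $y$ to measured quantities $x$, and the mass is extracted from a joint likelihood in $m_t$ and the jet energy scale parameter (DJES above, written $\Delta_{\mathrm{JES}}$ here):

$$ P(x \mid m_t, \Delta_{\mathrm{JES}}) \propto \int d\Phi(y)\, |\mathcal{M}(y; m_t)|^2\, f(q_1)\, f(q_2)\, W(x \mid y; \Delta_{\mathrm{JES}}), \qquad \mathcal{L}(m_t, \Delta_{\mathrm{JES}}) = \prod_i P(x_i \mid m_t, \Delta_{\mathrm{JES}}) , $$

where $f(q_{1,2})$ denote the parton distribution functions.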
Abstract:
We report a measurement of the top quark mass, m_t, obtained from ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. We analyze a sample corresponding to an integrated luminosity of 1.9 fb^-1. We select events with an electron or muon, large missing transverse energy, and exactly four high-energy jets in the central region of the detector, at least one of which is tagged as coming from a b quark. We calculate a signal likelihood using a matrix element integration method, with effective propagators to take into account assumptions on event kinematics. Our event likelihood is a function of m_t and a parameter JES that determines in situ the calibration of the jet energies. We use a neural network discriminant to distinguish signal from background events. We also apply a cut on the peak value of each event likelihood curve to reduce the contribution of background and badly reconstructed events. Using the 318 events that pass all selection criteria, we find m_t = 172.7 +/- 1.8 (stat. + JES) +/- 1.2 (syst.) GeV/c^2.
Abstract:
We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron at $\sqrt{s}$=1.96 TeV and collected by the CDF II detector. A sample of 328 events with an electron or muon and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, is selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of the neutrinos and reconstruct the top quark mass for each $\phi_{\nu_1},\phi_{\nu_2}$ pair by minimizing a $\chi^2$ function under the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred mass distributions (templates) are built from simulated t-tbar and background events, and parameterized in order to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+3.4}_{-3.3}$(stat.) $\pm 3.1$(syst.) GeV/$c^2$.
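One common way to realize the $\chi^2$-dependent weighting described above (the exact functional form is not given in the abstract) is to weight each scanned $(\phi_{\nu_1},\phi_{\nu_2})$ solution by its fit quality and take the weighted mean as the per-event preferred mass:

$$ w_{ij} \propto e^{-\chi^2_{ij}/2}, \qquad m_t^{\mathrm{pref}} = \frac{\sum_{ij} w_{ij}\, m_t^{(ij)}}{\sum_{ij} w_{ij}} , $$

with the sums running over the grid of scanned neutrino azimuthal angles.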
New Method for Delexicalization and its Application to Prosodic Tagging for Text-to-Speech Synthesis
Abstract:
This paper describes a new, flexible delexicalization method based on a glottal excited parametric speech synthesis scheme. The system utilizes inverse-filtered glottal flow and all-pole modelling of the vocal tract. The method provides the possibility to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, the features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
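To illustrate the basic signal-processing building block involved, the short Python sketch below estimates an all-pole vocal-tract model for one voiced frame by the autocorrelation method and inverse-filters the frame to obtain a glottal-excitation-like residual. This is not the authors' implementation: real glottal inverse filtering schemes (e.g., IAIF) iterate and compensate for lip radiation, and the frame length, model order and test signal here are illustrative placeholders.

    import numpy as np
    from scipy.signal import lfilter

    def levinson_durbin(r, order):
        # Solve the autocorrelation normal equations and return the
        # prediction-error (inverse) filter A(z) = 1 + a1*z^-1 + ... + ap*z^-p.
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err
            a_new = a.copy()
            a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
            a_new[i] = k
            a = a_new
            err *= (1.0 - k * k)
        return a, err

    def inverse_filter_frame(frame, order=18):
        # Estimate an all-pole vocal-tract model for one voiced frame and return
        # the residual, a rough stand-in for the glottal excitation signal.
        w = frame * np.hamming(len(frame))                  # taper the analysis frame
        r = np.correlate(w, w, mode='full')[len(w) - 1:]    # autocorrelation, lags >= 0
        a, _ = levinson_durbin(r, order)                    # inverse filter A(z)
        return lfilter(a, [1.0], frame)                     # residual = A(z) applied to frame

    # Usage sketch with a crude synthetic 30 ms "voiced" frame at 16 kHz.
    fs = 16000
    t = np.arange(int(0.03 * fs)) / fs
    frame = np.sign(np.sin(2 * np.pi * 120 * t)) * np.exp(-5.0 * t)
    residual = inverse_filter_frame(frame)

In a delexicalization setting, such a residual (or a parametric glottal excitation fitted to it) can be used to drive the all-pole filter again after the spectral detail carrying lexical content has been smoothed away, while prosodic features such as pitch, energy and voice quality are preserved.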
Abstract:
In this study we explore the concurrent, combined use of three research methods, statistical corpus analysis and two psycholinguistic experiments (a forced-choice and an acceptability rating task), using verbal synonymy in Finnish as a case in point. In addition to supporting conclusions from earlier studies concerning the relationships between corpus-based and experimental data (e.g., Featherston 2005), we show that each method adds to our understanding of the studied phenomenon in a way that could not be achieved through any single method by itself. Most importantly, whereas relative rareness in a corpus is associated with dispreference in selection, such infrequency does not always entail substantially lower acceptability. Furthermore, we show that forced-choice and acceptability rating tasks pertain to distinct linguistic processes, with category-wise incommensurable scales of measurement, and should therefore be merged with caution, if at all.
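As a minimal illustration of how the corpus-based and experimental measures discussed above can be related (hypothetical numbers and variable names, not the authors' analysis pipeline), a rank correlation avoids assuming any particular functional relationship between the two scales:

    from scipy.stats import spearmanr

    # Per-verb corpus frequencies and mean acceptability ratings (illustrative numbers).
    corpus_freq = [1520, 430, 97, 12]     # occurrences of each near-synonym in the corpus
    mean_rating = [4.3, 4.1, 3.9, 3.8]    # mean acceptability on a 1-5 scale

    rho, p_value = spearmanr(corpus_freq, mean_rating)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

The deliberately chosen toy numbers mirror the qualitative finding above: large differences in corpus frequency need not correspond to large differences in rated acceptability.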
Abstract:
When authors of scholarly articles decide where to submit their manuscripts for peer review and eventual publication, they often base their choice of journal on very incomplete information about how well the journals serve the authors' purposes of informing about their research and advancing their academic careers. The purpose of this study was to develop and test a new method for benchmarking scientific journals that provides more information to prospective authors. The method estimates a number of journal parameters, including readership, scientific prestige, time from submission to publication, acceptance rate, and the service provided by the journal during the review and publication process. The method uses data directly obtainable from the web, data that can be calculated from such data, data obtained from publishers and editors, and data obtained through author surveys, and it has been tested on three sets of journals, each from a different discipline. We found a number of problems with the different data acquisition methods, which limit the extent to which the method can be used. Publishers and editors are reluctant to disclose important information they have at hand (e.g., journal circulation, web downloads, acceptance rate). The calculation of some important parameters (for instance, average time from submission to publication or regional spread of authorship) is possible but requires quite a lot of work. It can be difficult to get reasonable response rates to author surveys. All in all, we believe that the method we propose, which takes a “service to authors” perspective as the basis for benchmarking scientific journals, is useful and can provide information that is valuable to prospective authors in the selected scientific disciplines.
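For one of the parameters mentioned above, the average time from submission to publication, the calculation itself is simple once per-article dates have been collected (the laborious part is gathering them); a minimal sketch with hypothetical dates:

    from datetime import date

    # (submitted, published) date pairs for a sample of articles from one journal.
    articles = [
        (date(2008, 1, 15), date(2008, 9, 2)),
        (date(2008, 3, 4),  date(2009, 1, 20)),
        (date(2008, 6, 30), date(2009, 2, 11)),
    ]

    delays = [(published - submitted).days for submitted, published in articles]
    print(f"average submission-to-publication time: {sum(delays) / len(delays):.0f} days")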