978 results for Didactic laboratory of physics


Relevance: 100.00%

Abstract:

This three-phase design research describes the modelling processes for DC-circuit phenomena. The first phase presents an analysis of the development of the historical models of DC circuits in the context of the construction of Volta's pile at the turn of the 18th and 19th centuries. The second phase involves the design of a teaching experiment for comprehensive school third graders. Among other considerations, the design work utilises the results of the first phase and the research literature on pupils' mental models of DC-circuit phenomena. The third phase of the research was concerned with the realisation of the planned teaching experiment. The aim of this phase was to study the development of the external representations of DC-circuit phenomena in a small group of third graders.

The aim of the study has been to search for new ways to guide pupils to learn DC-circuit phenomena while emphasising understanding at the qualitative level. Thus electricity, which has been perceived as a difficult and abstract subject, could be learnt more comprehensively. The learning of electricity concepts by younger pupils has attracted little research interest at the international level, even though DC-circuit phenomena are also taught in the lower classes of comprehensive schools. The results of this study are important because the teaching of natural sciences in the lower classes of comprehensive schools has been increasing, and attempts are being made to develop this trend in Finland. In the theoretical part of the research an Experimental-centred representation approach, which emphasises the role of experimentalism in the development of pupils' representations, is created. According to this approach, learning at the qualitative level consists of empirical operations, such as experimenting, observation, perception and the prequantification of natural phenomena, and of modelling operations, such as explaining and reasoning. Besides planning teaching, the new approach can be used as an analysis tool for describing both historical modelling and the development of pupils' representations.

In the first phase of the study, the research question was: How did the historical models of DC-circuit phenomena develop in Volta's time? The analysis uncovered three qualitative historical models associated with the historical concept formation process. The models include conceptions of the electric circuit as the scene of DC-circuit phenomena, the comparative electric-current phenomenon as the cause of different observable effect phenomena, and the strength of the battery as the cause of the electric-current phenomenon. These models describe the concept formation process and its phases in Volta's time. The models are portrayed in the analysis using fragments of the models, in which observation-based fragments and theoretical fragments are distinguished from each other. The results emphasise the significance of qualitative concept formation and the role of language in the historical modelling of DC-circuit phenomena. For this reason these viewpoints are stressed in planning the teaching experiment in the second phase of the research. In addition, the design process utilised the experimentation behind the historical models of DC-circuit phenomena.

In the third phase of the study the research question is as follows: How will the small group's external representations of DC-circuit phenomena develop during the teaching experiment? The main question is divided into two sub-questions: What kind of talk exists in the small group's learning? What kinds of external representations of DC-circuit phenomena exist in the small group's discourse during the teaching experiment? The analysis revealed that the teaching experiment succeeded in its aim of activating talk in the small group. The designed connection cards proved especially successful in activating talk. The connection cards are cards that represent the components of the electric circuit; in the teaching experiment the pupils constructed different connections with the cards and discussed what kinds of DC-circuit phenomena would take place in the corresponding real connections. The talk of the small group was analysed by comparing two situations: first, when the small group discussed connections made with the connection cards, and second, the same connections made with real components. According to the results, the talk of the small group included more higher-order thinking when using the connection cards than with the corresponding real components. To answer the second sub-question, concerning the small group's external representations that appeared in the talk during the teaching experiment, student talk was visualised using fragment maps covering the electric circuit, the electric current and the source voltage. The fragment maps represent the gradual development of the external representations of DC-circuit phenomena in the small group during the teaching experiment. The results of the study challenge the results of previous research on the abstractness and difficulty of electricity concepts. According to this research, the external representations of DC-circuit phenomena clearly developed in the small group of third graders. Furthermore, the fragment maps show that although the theoretical explanations of DC-circuit phenomena, of the kind obtained in typical mental model studies, remain undeveloped, learning at the qualitative level of understanding does take place.

Relevance: 100.00%

Abstract:

The need for mutual recognition of accurate measurement results made by competent laboratories has been very widely accepted at the international level, e.g. at the World Trade Organization. A partial solution to the problem was provided by the International Committee for Weights and Measures (CIPM) in setting up the Mutual Recognition Arrangement (CIPM MRA), which has been signed by National Metrology Institutes (NMIs) around the world. The core idea of the CIPM MRA is a global arrangement for the mutual acceptance of the calibration certificates of National Metrology Institutes. The CIPM MRA covers all the fields of science and technology for which NMIs maintain national standards. The infrastructure for the metrology of the gaseous compounds carbon monoxide (CO), nitrogen monoxide (NO), nitrogen dioxide (NO2), sulphur dioxide (SO2) and ozone (O3) has been constructed at the national level at the Finnish Meteorological Institute (FMI). The calibration laboratory at the FMI was constructed to provide calibration services for air quality measurements and to fulfil the requirements of a metrology laboratory. The laboratory participated, with good results, in the first comparison project, which aimed to define the state of the art in the preparation and analysis of the gas standards used by European metrology institutes and calibration laboratories in the field of air quality. To confirm the competence of the laboratory, an international external surveillance study was conducted at the laboratory. Based on this evidence, the Centre for Metrology and Accreditation (MIKES) designated the calibration laboratory at the Finnish Meteorological Institute as a National Standard Laboratory in the field of air quality. With this designation, the MIKES-FMI Standards Laboratory became a member of the CIPM MRA, and Finland was brought into the internationally accepted forum in the field of gas metrology. The concept of 'once measured, everywhere accepted' is the leading theme of the CIPM MRA. The calibration service of the MIKES-FMI Standards Laboratory realizes the SI traceability system for these gas components and is constructed to meet the requirements of the European air quality directives. In addition, all the relevant uncertainty sources that influence the measurement results have been evaluated, and uncertainty budgets for the measurement results have been created.
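
The abstract mentions that uncertainty budgets were created but does not give them; as a minimal sketch of how such a budget is typically combined (the component names and values below are illustrative assumptions, not the laboratory's actual figures), independent standard uncertainties are added in quadrature and expanded with a coverage factor k = 2 in the usual GUM fashion:

    import math

    # Illustrative uncertainty budget (assumed values, not the MIKES-FMI figures):
    # relative standard uncertainties in percent for independent components.
    components = {
        "reference gas standard": 0.15,
        "dilution / flow calibration": 0.10,
        "analyser repeatability": 0.08,
        "temperature and pressure corrections": 0.05,
    }

    # Combined standard uncertainty: root sum of squares of the components.
    combined = math.sqrt(sum(u ** 2 for u in components.values()))
    # Expanded uncertainty with coverage factor k = 2 (approx. 95 % coverage).
    expanded = 2.0 * combined

    print(f"combined standard uncertainty: {combined:.2f} %")
    print(f"expanded uncertainty (k = 2):  {expanded:.2f} %")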

Relevance: 100.00%

Abstract:

A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.

Relevance: 100.00%

Abstract:

We present a measurement of the top quark mass and of the top-antitop pair production cross section using p-pbar data collected with the CDF II detector at the Tevatron Collider at the Fermi National Accelerator Laboratory, corresponding to an integrated luminosity of 2.9 fb^-1. We select events with six or more jets satisfying a number of kinematic requirements imposed by means of a neural network algorithm. At least one of these jets must originate from a b quark, as identified by the reconstruction of a secondary vertex inside the jet. The mass measurement is based on a likelihood fit incorporating reconstructed mass distributions representative of signal and background, where the absolute jet energy scale (JES) is measured simultaneously with the top quark mass. The measurement yields a value of 174.8 +- 2.4 (stat+JES) ^{+1.2}_{-1.0} (syst) GeV/c^2, where the uncertainty from the absolute jet energy scale is evaluated together with the statistical uncertainty. The procedure also measures the amount of signal, from which we derive a cross section, sigma_{ttbar} = 7.2 +- 0.5 (stat) +- 1.0 (syst) +- 0.4 (lum) pb, for the measured values of the top quark mass and JES.
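
The abstract quotes the derived cross section but not the counting relation behind it; schematically (a standard sketch, with the overall signal acceptance times efficiency written as \epsilon, a quantity not given in the abstract):

    \sigma_{t\bar{t}} = \frac{N_{\mathrm{signal}}}{\epsilon \int L \, dt}, \qquad \int L \, dt = 2.9\ \mathrm{fb}^{-1}.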

Relevance: 100.00%

Abstract:

We investigate the effects of new physics scenarios containing a high-mass vector resonance on top pair production at the LHC, using the polarization of the produced top. In particular, we use kinematic distributions of the secondary lepton coming from top decay, which depend on the top polarization; it has been shown that the angular distribution of the decay lepton is insensitive to an anomalous tbW vertex and hence is a pure probe of new physics in top quark production. Spin-sensitive variables involving the decay lepton are used to probe the top polarization. Some sensitivity is found to the new couplings of the top.
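
For orientation, the reason the decay lepton is such a clean probe is the standard tree-level angular distribution of the charged lepton from a polarised top (a textbook relation, not specific to the resonance models studied here):

    \frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_\ell} = \frac{1}{2}\left(1 + P_t\,\kappa_\ell\,\cos\theta_\ell\right), \qquad \kappa_\ell \simeq 1 \ \text{at tree level},

where \theta_\ell is the lepton angle with respect to the chosen spin axis and P_t is the degree of top polarization; to a good approximation \kappa_\ell is unaffected by anomalous tbW couplings, which is the insensitivity referred to above.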

Relevance: 100.00%

Abstract:

This research is connected with an education development project for the four-year officer education program at the National Defence University. In this curriculum physics was studied in two alternative course plans, namely scientific and general. Observations connected to the latter, e.g. student feedback and learning outcomes, indicated that action was needed to support the course. The reform work focused on the production of aligned, course-related instructional material. The learning material project produced a customized textbook set for the students of the general basic physics course. The research adapts phases typical of Design-Based Research (DBR). The research analyses the feature requirements for a physics textbook aimed at a specific sector and the frames supporting instructional material development, and summarizes the experiences gained in the learning material project when the selected frames were applied. The quality of instructional material is an essential part of qualified teaching. The goal of instructional material customization is to increase the product's customer-centric nature and to enhance its function as a support medium for the learning process. Textbooks are still one of the core elements in physics teaching. The idea of a textbook will remain, but its form and appearance may change according to the prevailing technology. The work deals with substance-connected frames (the demands of a physics textbook according to the PER viewpoint, quality thinking in educational material development), frames of university pedagogy, and instructional material production processes. A wide knowledge and understanding of different frames is useful in development work if the frames are utilized to aid inspiration without limiting new reasoning and new kinds of models. Applying customization even in the use of the frames supports creative and situation-aware design and diminishes the gap between theory and practice. Generally, physics teachers produce their own supplementary instructional material. Even though customization thinking is not unknown, the threshold for producing an entire textbook may be high. Although the observations here are from the general physics course at the NDU, the research also gives tools for development in other discipline-related educational contexts. This research is an example of instructional material development work, together with the questions it uncovers, and presents thoughts on when textbook customization is rewarding. At the same time, the research aims to further creative customization thinking in instruction and development. Key words: Physics textbook, PER (Physics Education Research), Instructional quality, Customization, Creativity

Relevance: 100.00%

Abstract:

A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
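
For reference, the liquid drop model mentioned here assigns a cluster of n molecules the work of formation (standard Classical Nucleation Theory expressions, written for supersaturation S, molecular volume v in the liquid and planar surface tension \sigma):

    \Delta G(n) = -n\,k_B T \ln S + \sigma A(n), \qquad A(n) = (36\pi)^{1/3} v^{2/3} n^{2/3},

whose maximum over n gives the nucleation barrier

    \Delta G^{*} = \frac{16\pi\,\sigma^{3} v^{2}}{3\,(k_B T \ln S)^{2}}.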

Relevance: 100.00%

Abstract:

Aerosol particles play an important role in the Earth's atmosphere and in the climate system: they scatter and absorb solar radiation, facilitate chemical processes, and serve as seeds for cloud formation. Secondary new particle formation (NPF) is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapors participating in this process are not fully understood. In order to fully explain atmospheric NPF and subsequent growth, we need to measure directly the very initial steps of the formation process. This thesis investigates the possibility of studying atmospheric particle formation using the recently developed Neutral cluster and Air Ion Spectrometer (NAIS). First, the NAIS was calibrated and intercompared, and found to be in good agreement with the reference instruments both in the laboratory and in the field. It was concluded that the NAIS can be reliably used to measure small atmospheric ions and particles directly at the sizes where NPF begins. Second, several NAIS systems were deployed simultaneously at 12 European measurement sites to quantify the spatial and temporal distribution of particle formation events. The sites represented a variety of geographical and atmospheric conditions. NPF events were detected with the NAIS systems at all of the sites during the year-long measurement period. Various particle formation characteristics, such as formation and growth rates, were used as indicators of the relevant processes and the participating compounds in the initial formation. Where parallel ion and neutral cluster measurements were available, we also estimated the relative contribution of ion-induced and neutral nucleation to the total particle formation. At most sites, the particle growth rate increased with increasing particle size, indicating that different condensing vapors participate in the growth of different-sized particles. The results suggest that, in addition to sulfuric acid, organic vapors contribute to the initial steps of NPF and to the subsequent growth, not just to the later steps of particle growth. As a significant new result, we found that the total particle formation rate varied much more between the different sites than the formation rate of charged particles. This implies that ion-induced nucleation makes only a minor contribution to particle formation in the boundary layer in most environments. These results give tools to better quantify the aerosol source provided by secondary NPF in various environments. The particle formation characteristics determined in this thesis can be used in global models to assess the climatic effects of NPF.
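
The formation and growth rates mentioned here are commonly extracted from measured size distributions with a balance equation of the following form (a sketch of the standard data-analysis approach, not necessarily the exact formulation used in the thesis): the growth rate is GR = \Delta d_p / \Delta t for a growing mode, and the formation rate of particles of diameter d_p is estimated as

    J_{d_p} \approx \frac{dN_{d_p}}{dt} + \mathrm{CoagS}_{d_p}\,N_{d_p} + \frac{\mathrm{GR}}{\Delta d_p}\,N_{d_p},

where N_{d_p} is the number concentration in the size range considered, CoagS_{d_p} is the coagulation sink, and the last term accounts for growth out of the size range.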

Relevance: 100.00%

Abstract:

Aerosol particles impair air quality, atmospheric visibility and our health. They affect the Earth's climate by absorbing and scattering sunlight, forming clouds, and also via several feedback mechanisms. The net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases. However, particles are one of the poorly known pieces in the climate puzzle. Some airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. The molecular-level understanding of new particle formation is still inadequate, mainly due to the lack of suitable measurement techniques to detect the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nanoparticles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested both in laboratory and field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and that, besides measuring concentrations, these counters can be used to obtain size information. In addition to atmospheric research, the particle counters could have various applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites representing two different kinds of environments: the boreal forest and the Atlantic coastline, both of which are known to be hot spots for new particle formation. The contribution of ions to the total concentrations in this size range was estimated, and it could be concluded that the fraction of ions was usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the amount of cosmic rays entering the atmosphere, the relative contribution of neutral and charged nucleation mechanisms extends beyond academic interest and links the research directly to the current climate debate.

Relevance: 100.00%

Abstract:

We investigate the effects of new physics scenarios containing a high-mass vector resonance on top pair production at the LHC, using the polarization of the produced top. In particular, we use kinematic distributions of the secondary lepton coming from top decay, which depend on the top polarization; it has been shown that the angular distribution of the decay lepton is insensitive to an anomalous tbW vertex and hence is a pure probe of new physics in top quark production. Spin-sensitive variables involving the decay lepton are used to reconstruct the top polarization. Some sensitivity is found to the new couplings of the top.

Relevance: 100.00%

Abstract:

In the thesis I study various quantum coherence phenomena and create some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics and the terminology related to them. It is worth noting that quantum coherence is a conserved quantity that can be exactly defined. I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence, since their entropy is small and their coherence is large, and thus they possess strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate in my thesis mainly on collective association from atoms to molecules, Rabi oscillations and decoherence. It appears that collective association and oscillations do not depend on the spin statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept also in systems that may experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: studying the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum-physical universe appears very classical to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem in logic. According to this principle, it is impossible to describe a closed system completely from within the system, and the inside and outside descriptions of the system can differ remarkably. By understanding this feature it may be possible to comprehend coarse-graining better and to define the mutual entanglement of quantum systems uniquely.
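
The abstract states that coherence can be defined from the density matrix but does not spell out the definition; one common basis-dependent measure, given here only as an illustration and not necessarily the definition proposed in the thesis, is the l1-norm of the off-diagonal elements:

    import numpy as np

    # l1-norm coherence: sum of absolute values of the off-diagonal elements of
    # the density matrix rho in a chosen basis (illustrative measure only).
    def l1_coherence(rho: np.ndarray) -> float:
        off_diag = rho - np.diag(np.diag(rho))
        return float(np.sum(np.abs(off_diag)))

    # Example: the equal superposition (|0> + |1>)/sqrt(2) has coherence 1,
    # while the maximally mixed qubit state has coherence 0.
    psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
    print(l1_coherence(np.outer(psi, psi.conj())))  # -> 1.0
    print(l1_coherence(0.5 * np.eye(2)))            # -> 0.0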

Relevance: 100.00%

Abstract:

We report large quadratic nonlinearity in a series of 1:1 molecular complexes between methyl-substituted benzene donors and quinone acceptors in solution. The first hyperpolarizability, beta_HRS, which is very small for the individual components, becomes large through intermolecular charge transfer (CT) interaction between the donor and the acceptor in the complex. In addition, we have investigated the geometry of these CT complexes in solution using polarization-resolved hyper-Rayleigh scattering (HRS). Using linearly (electric field vector along the X direction) and circularly polarized incident light, respectively, we have measured two macroscopic depolarization ratios, D = I_{2omega,X,X}/I_{2omega,Z,X} and D' = I_{2omega,X,C}/I_{2omega,Z,C}, in the laboratory-fixed XYZ frame by detecting the second harmonic scattered light in a polarization-resolved fashion. The experimentally obtained first hyperpolarizability, beta_HRS, and the values of the macroscopic depolarization ratios, D and D', are then matched with the values deduced theoretically from single and double configuration interaction calculations performed using Zerner's intermediate neglect of differential overlap self-consistent reaction field technique. Since several geometries are possible in solution, we have carried out calculations by rotating the acceptor moiety around three different axes while keeping the donor molecule fixed at an optimized geometry. These rotations give the theoretical beta_HRS, D and D' values as a function of the geometry of the complex. The calculated beta_HRS, D and D' values that most closely match the experimental values give the dominant equilibrium geometry in solution. All the CT complexes between methyl benzenes and chloranil or 2,3-dichloro-5,6-dicyano-p-benzoquinone investigated here are found to have a slipped parallel stacking of the donors and the acceptors. Furthermore, the geometries are staggered and, in some pairs, a twist angle as high as 30 degrees is observed. We have thus demonstrated in this paper that the polarization-resolved HRS technique, along with theoretical calculations, can unravel the geometry of CT complexes in solution. (C) 2011 American Institute of Physics. [doi:10.1063/1.3514922]

Relevance: 100.00%

Abstract:

We present the report of the B physics working group of the Workshop on High Energy Physics Phenomenology (WHEPP-XI), held at the Physical Research Laboratory, Ahmedabad, in January 2010.

Relevance: 100.00%

Abstract:

Recent advances in nonsilica fiber technology have prompted the development of suitable materials for devices operating beyond 1.55 µm. The III-V ternaries and quaternaries (AlGaIn)(AsSb) lattice matched to GaSb seem to be the obvious choice and have turned out to be promising candidates for high-speed electronic and long-wavelength photonic devices. Consequently, there has been a tremendous upthrust in research activity on GaSb-based systems. As a matter of fact, this compound has proved to be an interesting material for both basic and applied research. At present, GaSb technology is in its infancy, and considerable research has to be carried out before it can be employed for large-scale device fabrication. This article presents an up-to-date comprehensive account of the research carried out hitherto. It explores in detail the material aspects of GaSb, from crystal growth in bulk and epitaxial form, through post-growth material processing, to device feasibility. An overview of the lattice, electronic, transport, optical and device-related properties is presented. Some of the current areas of research and development are critically reviewed, and their significance both for understanding the basic physics and for device applications is addressed. These include the role of defects and impurities in the structural, optical and electrical properties of the material, the various techniques employed for surface and bulk defect passivation and their effect on device characteristics, the development of novel device structures, etc. Several avenues where further work is required in order to upgrade this III-V compound for optoelectronic devices are listed. It is concluded that present-day knowledge of this material system is sufficient to understand the basic properties, and that what should be pursued more vigorously is their implementation in device fabrication. (C) 1997 American Institute of Physics.

Relevance: 100.00%

Abstract:

The unique features of a macromolecule and of water as a solvent make the issue of solvation unconventional, raising questions about the static versus dynamic nature of hydration and the physics of orientational and translational diffusion at the boundary. For proteins, the hydration shell that covers the surface is critical to the stability of the structure and to function. Dynamically speaking, the residence time of water at the surface is a signature of its mobility and binding. With femtosecond time resolution it is possible to unravel the shortest residence times, which are key to the description of the hydration layer, static or dynamic. In this article we review these issues, guided by experimental studies from this laboratory of polar hydration dynamics at the surfaces of two proteins, Subtilisin Carlsberg (SC) and Monellin. The natural probe, the amino acid tryptophan, was used for the interrogation of the dynamics, and for direct comparison we also studied the behavior in bulk water, where hydration is complete in 1 ps. We develop a theoretical description of solvation and relate the theory to the experimental observations. In this theoretical approach, we consider the dynamical equilibrium in the hydration shell, defining the rate processes for breaking and making the transient hydrogen bonds, and the effective friction in the layer, which is defined by the translational and orientational motions of water molecules. The relationship between the residence time of water molecules and the observed slow component in the solvation dynamics is a direct one. For the two proteins studied, we observed a "bimodal decay" of the hydration correlation function, with two primary relaxation times: an ultrafast one, typically 1 ps or less, and a longer one, typically 15-40 ps; both are related to the residence time at the protein surface, depending on the binding energies. We end by making extensions to studies of the denatured state of the protein, random coils, and biomimetic micelles, and conclude with our thoughts on the relevance of the dynamics of native structures to their functions.
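
The hydration correlation function referred to here is conventionally constructed from the time-dependent fluorescence Stokes shift of the tryptophan probe, and the "bimodal decay" corresponds to a two-exponential fit (a sketch of the conventional definitions, with the time constants in the ranges quoted above):

    C(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)} \approx a_1 e^{-t/\tau_1} + a_2 e^{-t/\tau_2}, \qquad a_1 + a_2 = 1,

with \nu(t) the peak emission frequency at time t, \tau_1 of the order of 1 ps or less (bulk-like water) and \tau_2 in the 15-40 ps range (water coupled to the protein surface).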