968 results for High-precision Radiocarbon Dating
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, encoded in delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved within a fraction of the first second after the Big Bang. -- This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
Abstract:
The electroweak theory is the part of the standard model of particle physics that describes the weak and electromagnetic interactions between elementary particles. Since its formulation almost 40 years ago, it has been experimentally verified to high accuracy, and today it stands as one of the cornerstones of particle physics. The thermodynamics of electroweak physics has been studied ever since the theory was written down, and the features the theory exhibits under extreme conditions remain an interesting research topic even today. In this thesis, we consider some aspects of electroweak thermodynamics. Specifically, we compute the pressure of the standard model to high precision and study the structure of the electroweak phase diagram when finite chemical potentials are introduced for all the conserved particle numbers in the theory. In the first part of the thesis, the theory, methods and essential results from the computations are introduced. The original research publications are reprinted at the end.
Abstract:
A hybrid computer for structure factor calculations in X-ray crystallography is described. The computer can calculate three-dimensional structure factors of up to 24 atoms in a single run and can generate the scatter functions of well over 100 atoms using the Vand et al. or Forsyth and Wells approximations. The computer is essentially a digital computer with analog function generators, thus combining to advantage the economic data storage of digital systems and the simple computing circuitry of analog systems. The digital part serially selects the data, computes and feeds the arguments into specially developed high-precision digital-analog function generators; their outputs, being d.c. voltages, are further processed by analog circuits, and finally the sequential adder, which employs a novel digital voltmeter circuit, converts them back into digital form and accumulates them in a dekatron counter which displays the final result. The computer is also capable of carrying out 1-, 2-, or 3-dimensional Fourier summation, although in this case the lack of sufficient storage space for the large number of coefficients involved is a serious limitation at present.
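The structure-factor sum such a machine evaluates can be sketched in a few lines of modern Python. This is a minimal illustration of the standard formula F(hkl) = Σⱼ fⱼ exp[2πi(hxⱼ + kyⱼ + lzⱼ)], not a model of the hybrid hardware; the atom positions and scattering factors below are invented for the example.

```python
import numpy as np

def structure_factor(hkl, positions, scatter_factors):
    """F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)).

    hkl:             Miller indices (h, k, l)
    positions:       (N, 3) fractional atomic coordinates
    scatter_factors: (N,) scattering factors f_j for this reflection
    """
    phases = 2.0 * np.pi * (positions @ np.asarray(hkl, dtype=float))
    return np.sum(scatter_factors * np.exp(1j * phases))

# Two identical atoms related by a body-centring translation (1/2, 1/2, 1/2):
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
f = np.array([1.0, 1.0])

F_110 = structure_factor((1, 1, 0), pos, f)  # h+k+l even: atoms in phase
F_100 = structure_factor((1, 0, 0), pos, f)  # h+k+l odd: systematic absence
```

The two reflections show the familiar body-centred extinction rule: |F(110)| = 2 while F(100) vanishes.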
Abstract:
The International Large Detector (ILD) is a concept for a detector at the International Linear Collider, ILC. The ILC will collide electrons and positrons at energies of initially 500 GeV, upgradeable to 1 TeV. The ILC has an ambitious physics program, which will extend and complement that of the Large Hadron Collider (LHC). A hallmark of physics at the ILC is precision. The clean initial state and the comparatively benign environment of a lepton collider are ideally suited to high-precision measurements. Taking full advantage of the physics potential of the ILC places great demands on the detector performance. The design of ILD is driven by these requirements. Excellent calorimetry and tracking are combined to obtain the best possible overall event reconstruction, including the capability to reconstruct individual particles within jets for particle flow calorimetry. This requires excellent spatial resolution for all detector systems. A highly granular calorimeter system is combined with a central tracker which stresses redundancy and efficiency. In addition, efficient reconstruction of secondary vertices and excellent momentum resolution for charged particles are essential for an ILC detector. The interaction region of the ILC is designed to host two detectors, which can be moved into the beam position with a push-pull scheme. The mechanical design of ILD and the overall integration of subdetectors take these operational conditions into account.
Abstract:
By observing mergers of compact objects, future gravitational-wave experiments will measure the luminosity distance to a large number of sources to high precision, but not their redshifts. Given the directional sensitivity of an experiment, a fraction of such sources ("gold plated") can be identified optically as single objects in the direction of the source. We show that if an approximate distance-redshift relation is known, then it is possible to statistically resolve those sources that have multiple galaxies in the beam. We study the feasibility of using gold-plated sources to iteratively resolve the unresolved sources, obtain the self-calibrated best possible distance-redshift relation, and provide an analytical expression for the achievable accuracy. We derive the lower limit on the total number of sources needed to achieve this accuracy through self-calibration. We show that this limit depends exponentially on the beam width and give estimates for various experimental parameters representative of the future gravitational-wave experiments DECIGO and BBO.
Abstract:
The standard Gibbs energies of formation of RuO2 and OsO2 at high temperature have been determined with high precision, using a novel apparatus that incorporates a buffer electrode between the reference and working electrodes. The buffer electrode absorbs the electrochemical flux of oxygen through the solid electrolyte from the electrode with the higher oxygen chemical potential to the electrode with the lower oxygen potential. It thereby prevents polarization of the measuring electrode and ensures accurate data. The standard Gibbs energies of formation (ΔfG°) of RuO2, in the temperature range 900-1500 K, and OsO2, in the range 900-1200 K, can be represented by the equations ΔfG°(RuO2) (J/mol) = -324,720 + 354.21T - 23.490T ln T and ΔfG°(OsO2) (J/mol) = -304,740 + 318.80T - 18.444T ln T, where the temperature T is given in Kelvin and the deviation of the measurement is +/- 80 J/mol. The high-temperature heat capacities of RuO2 and OsO2 were measured using differential scanning calorimetry. The information on both the low- and high-temperature heat capacity of RuO2 is coupled with the ΔfG° data obtained in this study to evaluate the standard enthalpy of formation of RuO2 at 298.15 K (ΔfH°(298.15 K)). The low-temperature heat capacity of OsO2 has not been measured; therefore, the standard enthalpy and entropy of formation of OsO2 at 298.15 K (ΔfH°(298.15 K) and S°(298.15 K), respectively) are derived simultaneously through an optimization procedure from the high-temperature heat capacity and the Gibbs energy of formation. Both ΔfH°(298.15 K) and S°(298.15 K) are treated as variables in the optimization routine. For RuO2, the standard enthalpy of formation at 298.15 K is ΔfH°(298.15 K)(RuO2) = -313.52 +/- 0.08 kJ/mol, and that for OsO2 is ΔfH°(298.15 K)(OsO2) = -295.96 +/- 0.08 kJ/mol.
The standard entropy of OsO2 at 298.15 K obtained from the optimization is S°(298.15 K)(OsO2) = 49.8 +/- 0.2 J (mol K)^-1.
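The two fitted Gibbs-energy expressions quoted above can be evaluated directly; a minimal Python sketch follows, with 1000 K chosen only as an illustrative point inside both stated validity ranges.

```python
import math

def dGf_RuO2(T):
    """Standard Gibbs energy of formation of RuO2 in J/mol (valid 900-1500 K),
    from the fitted expression in the abstract (stated deviation +/- 80 J/mol)."""
    return -324720.0 + 354.21 * T - 23.490 * T * math.log(T)

def dGf_OsO2(T):
    """Standard Gibbs energy of formation of OsO2 in J/mol (valid 900-1200 K)."""
    return -304740.0 + 318.80 * T - 18.444 * T * math.log(T)

# At 1000 K both formation energies are still strongly negative,
# i.e. both oxides are stable with respect to metal + O2:
g_ru = dGf_RuO2(1000.0)   # about -132.8 kJ/mol
g_os = dGf_OsO2(1000.0)   # about -113.3 kJ/mol
```

Note that `math.log` is the natural logarithm, matching the T ln T term of the fit.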
Abstract:
High-precision measurement of the electrical resistance of nickel along its critical line, a first attempt of this kind, as a function of pressure to 47.5 kbar is reported. Our analysis yields the values of the critical exponents α=α′=-0.115±0.005 and the amplitude ratios |A/A′|=1.17±0.07 and |D/D′|=1.2±0.1. These values are in close agreement with those predicted by renormalization-group (RG) theory. Moreover, this investigation provides an unambiguous experimental verification of one of the key consequences of RG theory: that the critical exponents and amplitude ratios are insensitive to pressure variation in nickel, a Heisenberg ferromagnet.
Abstract:
A thermodynamic study of the Ti-O system at 1573 K has been conducted using a combination of thermogravimetric and emf techniques. The results indicate that the variation of oxygen potential with the nonstoichiometric parameter δ in the stability domain of TiO2-δ with rutile structure can be represented by the relation Δμ(O2) = -6RT ln δ - 711,970(±1600) J/mol. The corresponding relation between the nonstoichiometric parameter δ and the partial pressure of oxygen across the whole stability range of TiO2-δ at 1573 K is δ ∝ p(O2)^(-1/6). It is therefore evident that the oxygen-deficient behavior of nonstoichiometric TiO2-δ is dominated by the presence of doubly charged oxygen vacancies and free electrons. The high-precision measurements enabled the resolution of oxygen potential steps corresponding to the different Magneli phases (TinO2n-1) up to n = 15. Beyond this value of n, the oxygen potential steps were too small to be resolved. Based on the composition of the Magneli phase in equilibrium with TiO2-δ, the maximum value of n is estimated to be 28. The chemical potential of titanium was derived as a function of composition using the Gibbs-Duhem relation. Gibbs energies of formation of the Magneli phases were derived from the chemical potentials of oxygen and titanium. The values of -2441.8(±5.8) kJ/mol for Ti4O7 and -1775.4(±4.3) kJ/mol for Ti3O5 obtained in this study refine the values of -2436.2(±26.1) kJ/mol and -1771.3(±6.9) kJ/mol, respectively, given in the JANAF thermochemical tables.
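The quoted oxygen-potential relation and the -1/6 power law are two forms of the same statement, since Δμ(O2) = RT ln p(O2). A short Python check makes the equivalence explicit; the sample oxygen pressure is arbitrary and chosen only for the demonstration.

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 1573.0  # temperature of the study, K

def delta_from_pO2(p_O2):
    """Nonstoichiometry delta of TiO2-delta at 1573 K, inverted from
    R*T*ln(p_O2) = -6*R*T*ln(delta) - 711970 J/mol (relation in the abstract)."""
    mu_O2 = R * T * math.log(p_O2)
    return math.exp(-(mu_O2 + 711970.0) / (6.0 * R * T))

# Reducing p_O2 by a factor of 64 should double delta, since 64**(1/6) = 2:
d1 = delta_from_pO2(1e-10)
d2 = delta_from_pO2(1e-10 / 64.0)
ratio = d2 / d1   # equals 64**(1/6) = 2, i.e. the -1/6 power law
```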
Abstract:
The enzymes of the family of tRNA synthetases perform their functions with high precision by synchronously recognizing the anticodon region and the aminoacylation region, which are separated by ~70 Å in space. This precision in function is brought about by establishing good communication paths between the two regions. We have modeled the structure of the complex consisting of Escherichia coli methionyl-tRNA synthetase (MetRS), tRNA, and the activated methionine. Molecular dynamics simulations have been performed on the modeled structure to obtain the equilibrated structure of the complex, and the cross-correlations between the residues in MetRS have been evaluated. Furthermore, network analysis on these simulated structures has been carried out to elucidate the paths of communication between the activation site and the anticodon recognition site. This study has provided the detailed paths of communication, which are consistent with experimental results. Similar studies have also been carried out on the complexes (MetRS + activated methionine) and (MetRS + tRNA) along with the ligand-free native enzyme. A comparison of the paths derived from the four simulations has clearly shown that the communication path is strongly correlated and unique to the enzyme complex that is bound to both the tRNA and the activated methionine. The details of the method of our investigation and the biological implications of the results are presented in this article. The method developed here could also be used to investigate any protein system where the function takes place through long-distance communication.
Abstract:
Low interlaminar strength and the consequent possibility of interlaminar failures in composite laminates demand an examination of interlaminar stresses and/or strains to ensure their satisfactory performance. As a first approximation, these stresses can be obtained from thickness-wise integration of ply equilibrium equations using in-plane stresses from the classical laminated plate theory. Implementation of this approach in finite element form requires evaluation of third- and fourth-order derivatives of the displacement functions in an element. Hence, a high-precision element developed by Jayachandrabose and Kirkhope (1985) is used here, and the required derivatives are obtained in two ways: (i) from direct differentiation of the element shape functions; and (ii) by adapting a finite difference technique applied to the nodal strains and curvatures obtained from the finite element analysis. Numerical results obtained for a three-layered symmetric and a two-layered asymmetric laminate show that the second scheme is quite effective compared to the first, particularly for the case of asymmetric laminates.
Abstract:
The ~2500 km long Himalayan arc has experienced three large to great earthquakes of Mw 7.8 to 8.4 during the past century, but none produced surface rupture. Paleoseismic studies have been conducted during the last decade to begin understanding the timing, size, rupture extent, return period, and mechanics of the faulting associated with the occurrence of large surface-rupturing earthquakes along the Himalayan Frontal Thrust (HFT) system of India and Nepal. Previous studies have been limited to about nine sites along the western two-thirds of the HFT extending through northwest India and along the southern border of Nepal. We present here the results of paleoseismic investigations at three additional sites further to the northeast along the HFT within the Indian states of West Bengal and Assam. The three sites reside between the meizoseismal areas of the 1934 Bihar-Nepal and 1950 Assam earthquakes. The two westernmost of the sites, near the village of Chalsa and near the Nameri Tiger Preserve, show that offsets during the last surface rupture event were at minimum about 14 m and 12 m, respectively. Limits on the ages of surface rupture at Chalsa (site A) and Nameri (site B), though broad, allow the possibility that the two sites record the same great historical rupture reported in Nepal around A.D. 1100. The correlation between the two sites is supported by the observation that the large displacements recorded at Chalsa and Nameri would most likely be associated with rupture lengths of hundreds of kilometers or more, on the same order as reported for a surface rupture earthquake in Nepal around A.D. 1100. Assuming the offsets observed at Chalsa and Nameri occurred synchronously with the reported offsets in Nepal, the rupture length of the event would approach 700 to 800 km. The easternmost site is located within the Harmutty Tea Estate (site C) at the edge of the 1950 Assam earthquake meizoseismal area.
Here the most recent event offset is relatively much smaller (<2.5 m), and radiocarbon dating shows it to have occurred after A.D. 1100 (after about A.D. 1270). The location of the site near the edge of the meizoseismal region of the 1950 Assam earthquake and the relatively lesser offset allow speculation that the displacement records the 1950 Mw 8.4 Assam earthquake. Scatter in radiocarbon ages on detrital charcoal has not resulted in a firm bracket on the timing of events observed in the trenches. Nonetheless, the observations collected here, when taken together, suggest that the largest thrust earthquakes along the Himalayan arc have rupture lengths and displacements of similar scale to the largest that have occurred historically along the world's subduction zones.
Abstract:
The physics potential of e(+)e(-) linear colliders is summarized in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model: the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and the matter particles. High-precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, i.e. compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e(+)e(-) linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics program complementary to hadron machines. (C) 1998 Elsevier Science B.V. All rights reserved.
Abstract:
We demonstrate a technique for precisely measuring hyperfine intervals in alkali atoms. The atoms form a three-level system in the presence of a strong control laser and a weak probe laser. The dressed states created by the control laser show significant linewidth reduction. We have developed a technique for Doppler-free spectroscopy that enables the separation between the dressed states to be measured with high accuracy even in room-temperature atoms. The states go through an avoided crossing as the detuning of the control laser is changed from positive to negative. By studying the separation as a function of detuning, the center of the level-crossing diagram is determined with high precision, which yields the hyperfine interval. Using room-temperature Rb vapor, we obtain a precision of 44 kHz. This is a significant improvement over the current precision of ~1 MHz.
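The extraction of the level-crossing center can be sketched as a fit to the dressed-state separation, which near an avoided crossing behaves as sep(d) = sqrt((d - d0)^2 + Ω^2) in the control-laser detuning d. Since sep^2 is a parabola in d, a quadratic fit recovers the center d0. This is a generic illustration of the idea, not the paper's analysis; the Rabi frequency and center values below are assumed.

```python
import numpy as np

Omega = 5.0   # assumed Rabi frequency of the control laser, MHz
d0 = 1.7      # assumed crossing centre (sets the hyperfine interval), MHz

# Separation of the two dressed states versus control-laser detuning d:
d = np.linspace(-20.0, 20.0, 81)
sep = np.sqrt((d - d0) ** 2 + Omega ** 2)

# sep**2 = d**2 - 2*d0*d + (d0**2 + Omega**2) is exactly quadratic in d,
# so the vertex of a degree-2 fit gives the crossing centre d0:
a, b, c = np.polyfit(d, sep ** 2, 2)
centre = -b / (2.0 * a)
```

In an experiment the fit would be applied to measured separations, and the scatter of the fitted vertex sets the precision of the hyperfine interval.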
Abstract:
Thanks to advances in sensor technology, today we have many applications (space-borne imaging, medical imaging, etc.) where images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also require storing in high precision, the buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coding scheme which uses a single-pass algorithm, to handle high input data rates. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The standard Gibbs energy of formation of Rh2O3 at high temperature has been determined recently with high precision. The new data are significantly different from those given in thermodynamic compilations. Accurate values for the enthalpy and entropy of formation at 298.15 K could not be evaluated from the new data, because reliable values for the heat capacity of Rh2O3 were not available. In this article, a new measurement of the high-temperature heat capacity of Rh2O3 using differential scanning calorimetry (DSC) is presented. The new values for heat capacity also differ significantly from those given in compilations. The information on heat capacity is coupled with the standard Gibbs energy of formation to evaluate values for the standard enthalpy and entropy of formation at 298.15 K using a multivariate analysis. The results suggest a major revision in the thermodynamic data for Rh2O3. For example, it is recommended that the standard entropy of Rh2O3 at 298.15 K be changed from 106.27 J mol^-1 K^-1, given in the compilations of Barin and Knacke et al., to 75.69 J mol^-1 K^-1. The recommended revision in the standard enthalpy of formation is from -355.64 kJ mol^-1 to -405.53 kJ mol^-1.