13 results for mixture distribution
in CaltechTHESIS
Abstract:
Data were taken in 1979-80 by the CCFRR high-energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged-current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about σ_L/σ_T: R = 0, R = 0.1, and R = a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.
We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences, which are calculable from cross-sections.
The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield
Λ_LO = 88^(+163)_(−78) ^(+113)_(−70) MeV
The systematic errors are smaller than the statistical errors. Second-order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.
A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming the antiquark distribution q̄ = 0 in that region, gives
Λ_LO = 266^(+114)_(−104) ^(+85)_(−79) MeV
Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q̄(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm, called Relaxation Expectation Maximization (REM), is proposed that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
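REM builds on the standard Expectation-Maximization iteration for latent variable models. As background, here is a minimal sketch of plain EM for a two-component one-dimensional Gaussian mixture (the textbook algorithm, not the thesis's REM variant; the initialization and iteration count are illustrative choices):

```python
import math
import random

def em_gmm_1d(data, iters=60):
    """Plain EM for a two-component 1-D Gaussian mixture.
    E-step: posterior responsibilities of each component for each point;
    M-step: re-estimate weights, means, and variances from weighted moments."""
    mu = [min(data), max(data)]  # crude initialization from the data range
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: maximize the expected complete-data log-likelihood
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-9
    return pi, mu, var
```

EM of this kind is only guaranteed to reach a local likelihood maximum, which is exactly the deficiency the dissertation's REM algorithm is designed to alleviate.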
The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, which is formalized by the concept of Nash equilibrium, e.g., Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any fixed local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget-balance against computational tractability in deciding which rule to implement.
Abstract:
A research program was designed (1) to map regional lithological units of the lunar surface based on measurements of spatial variations in spectral reflectance, and (2) to establish the sequence of formation of such lithological units from measurements of the accumulated effects of impacting bodies.
Spectral reflectance data were obtained by scanning luminance variations over the lunar surface at three wavelengths (0.4µ, 0.52µ, and 0.7µ). These luminance measurements were reduced to normalized spectral reflectance values relative to a standard area in Mare Serenitatis. The spectral type of each lunar area was identified from the shape of its reflectance spectrum. From these data, lithological units or regions of constant color were identified. The maria fall into two major spectral classes: circular maria like Mare Serenitatis contain S-type or red material, and thin, irregular, expansive maria like Mare Tranquillitatis contain T-type or blue material. Four distinct subtypes of S-type reflectances and two of T-type reflectances exist. As these six subtypes occur in a number of lunar regions, it is concluded that they represent specific types of material rather than some homologous set of a few end members.
The relative ages or sequence of formation of these mare units were established from measurements of the accumulated impacts which have occurred since mare formation. A model was developed which relates the integrated flux of particles which have impacted a surface to the distribution of craters as functions of size and shape. Erosion of craters is caused chiefly by small bodies which produce negligible individual changes in crater shape. Hence the shape of a crater can be used to estimate the total number of small impacts that have occurred since the crater was formed. Relative ages of a surface can then be obtained from measurements of the slopes of the walls of the oldest craters formed on the surface. The results show that different maria and regions within them were emplaced at different times. An approximate absolute time scale was derived from Apollo 11 crystallization ages under an assumption of a constant rate of impacting for the last 4 x 10^9 yrs. Assuming constant flux, the period of mare formation lasted from over 4 x 10^9 yrs to about 1.5 x 10^9 yrs ago.
A synthesis of the results of relative age measurements and of spectral reflectance mapping shows that (1) the formation of the lunar maria occurred in three stages; material of only one spectral type was deposited in each stage, (2) two distinct kinds of maria exist, each type distinguished by morphology, structure, gravity anomalies, time of formation, and spectral reflectance type, and (3) individual maria have complicated histories; they contain a variety of lithic units emplaced at different times.
Abstract:
Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.
Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.
The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the point discharge.
Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.
The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.
Abstract:
Analysis of the data from the Heavy Nuclei Experiment on the HEAO-3 spacecraft has yielded the cosmic ray abundances of odd-even element pairs with atomic number, Z, in the range 33 ≤ Z ≤ 60, and the abundances of broad element groups in the range 62 ≤ Z ≤ 83, relative to iron. These data show that the cosmic ray source composition in this charge range is quite similar to that of the solar system, provided an allowance is made for a source fractionation based on first ionization potential. The observations are inconsistent with a source composition which is dominated by either r-process or s-process material, whether or not an allowance is made for first ionization potential. Although the observations do not exclude a source containing the same mixture of r- and s-process material as in the solar system, the data are best fit by a source having an r- to s-process ratio of 1.22^(+0.25)_(−0.21), relative to the solar system. The abundances of secondary elements are consistent with the leaky box model of galactic propagation, implying a pathlength distribution similar to that which explains the abundances of nuclei with Z < 29.
The energy spectra of the even elements in the range 38 ≤ Z ≤ 60 are found to have a deficiency of particles in the range ~1.5 to 3 GeV/amu, compared to iron. This deficiency may result from ionization energy loss in the interstellar medium, and is not predicted by propagation models which ignore such losses. In addition, the energy spectra of secondary elements are found to be different to those of the primary elements. Such effects are consistent with observations of lighter nuclei, and are in qualitative agreement with galactic propagation models using a rigidity-dependent escape length. The energy spectra of secondaries arising from the platinum group are found to be much steeper than those of lower Z. This effect may result from energy-dependent fragmentation cross sections.
Abstract:
Although numerous theoretical efforts have been put forth, a systematic, unified, and predictive theoretical framework that is able to capture all the essential physics of the interfacial behavior of ions, such as the Hofmeister series effect, the Jones-Ray effect, and the salt effect on bubble coalescence, remains an outstanding challenge. The most common approach to treating electrostatic interactions in the presence of salt ions is the Poisson-Boltzmann (PB) theory. However, there are many systems for which the PB theory fails to offer even a qualitative explanation of the behavior, especially for ions distributed in the vicinity of an interface with dielectric contrast between the two media (like the water-vapor/oil interface). A key factor missing in the PB theory is the self energy of the ion.
In this thesis, we develop a self-consistent theory that treats the electrostatic self energy (including both the short-range Born solvation energy and the long-range image charge interactions), the nonelectrostatic contribution of the self energy, the ion-ion correlation, and the screening effect systematically in a single framework. By assuming a finite charge spread of the ion instead of using the point-charge model, the self energy obtained by our theory is free of divergence problems and is continuous across the interface. This continuity allows ions on the water side and the vapor/oil side of the interface to be treated in a unified framework. The theory involves a minimum set of parameters, such as the valency, radius, and polarizability of the ion, and the dielectric constants of the media, that are both intrinsic and readily available. The general theory is first applied to study the thermodynamic properties of the bulk electrolyte solution, showing good agreement with experimental results for the activity coefficient and osmotic coefficient.
Next, we address the effect of the local Born solvation energy on the bulk thermodynamics and interfacial properties of electrolyte solution mixtures. We show that the difference in solvation energy between the cations and anions naturally gives rise to local charge separation near the interface, and to a finite Galvani potential between two coexisting solutions. The miscibility of the mixture can either increase or decrease depending on the competition between the solvation energy and the translational entropy of the ions. The interfacial tension shows a non-monotonic dependence on the salt concentration: it increases linearly with the salt concentration at higher concentrations, and decreases approximately as the square root of the salt concentration for dilute solutions, in agreement with the Jones-Ray effect observed in experiment.
Next, we investigate image effects on the double layer structure and interfacial properties near a single charged plate. We show that the image charge repulsion creates a depletion boundary layer that cannot be captured by a regular perturbation approach. The correct weak-coupling theory must include the self energy of the ion due to the image charge interaction. The image force qualitatively alters the double layer structure and properties, and gives rise to many non-PB effects, such as a nonmonotonic dependence of the surface energy on concentration and charge inversion. The image charge effect is then studied for electrolyte solutions between two plates. For two neutral plates, we show that depletion of the salt ions by the image charge repulsion results in short-range attractive and long-range repulsive forces. If cations and anions are of different valency, the asymmetric depletion leads to the formation of an induced electrical double layer. For two charged plates, the competition between the surface charge and the image charge effect can give rise to like-charge attraction.
Then, we study the inhomogeneous screening effect near the dielectric interface due to the anisotropic and nonuniform ion distribution. We show that the double layer structure and interfacial properties are drastically affected by inhomogeneous screening if the bulk Debye screening length is comparable to or smaller than the Bjerrum length. The width of the depletion layer is characterized by the Bjerrum length, independent of the salt concentration. We predict that the negative adsorption of ions at the interface increases linearly with the salt concentration, which cannot be captured by either the bulk screening approximation or the WKB approximation. For asymmetric salts, the inhomogeneous screening enhances the charge separation in the induced double layer and significantly increases the value of the surface potential.
Finally, to account for the ion specificity, we study the self energy of a single ion across the dielectric interface. The ion is considered to be polarizable: its charge distribution can be self-adjusted to the local dielectric environment to minimize the self energy. Using intrinsic parameters of the ions, such as the valency, radius, and polarizability, we predict the specific ion effect on the interfacial affinity of halogen anions at the water/air interface, and the strong adsorption of hydrophobic ions at the water/oil interface, in agreement with experiments and atomistic simulations.
The theory developed in this work represents a systematic theoretical framework for weak-coupling electrolytes. We expect it to be useful for studying a wide range of structural and dynamic properties in physicochemical, colloidal, soft-matter, and biophysical systems.
Abstract:
Stable isotope geochemistry is a valuable toolkit for addressing a broad range of problems in the geosciences. Recent technical advances provide information that was previously unattainable or provide unprecedented precision and accuracy. Two such techniques are site-specific stable isotope mass spectrometry and clumped isotope thermometry. In this thesis, I use site-specific isotope and clumped isotope data to explore natural gas development and carbonate reaction kinetics. In the first chapter, I develop an equilibrium thermodynamics model to calculate equilibrium constants for isotope exchange reactions in small organic molecules. These equilibrium data provide a framework for interpreting the more complex data in the later chapters. In the second chapter, I demonstrate a method for measuring site-specific carbon isotopes in propane using high-resolution gas source mass spectrometry. This method relies on the characteristic fragments created during electron ionization, in which I measure the relative isotopic enrichment of separate parts of the molecule. My technique will be applied to a range of organic compounds in the future. For the third chapter, I use this technique to explore diffusion, mixing, and other natural processes in natural gas basins. As time progresses and the mixture matures, different components like kerogen and oil contribute to the propane in a natural gas sample. Each component imparts a distinct fingerprint on the site-specific isotope distribution within propane that I can observe to understand the source composition and maturation of the basin. Finally, in Chapter Four, I study the reaction kinetics of clumped isotopes in aragonite. Despite its frequent use as a clumped isotope thermometer, the aragonite blocking temperature is not known. Using laboratory heating experiments, I determine that the aragonite clumped isotope thermometer has a blocking temperature of 50-100°C.
I compare this result to natural samples from the San Juan Islands that exhibit a maximum clumped isotope temperature that matches this blocking temperature. This thesis presents a framework for measuring site-specific carbon isotopes in organic molecules and new constraints on aragonite reaction kinetics. This study represents the foundation of a future generation of geochemical tools for the study of complex geologic systems.
Abstract:
The equations of motion for the flow of a mixture of liquid droplets, their vapor, and an inert gas through a normal shock wave are derived. A set of equations is obtained which is solved numerically for the equilibrium conditions far downstream of the shock. The equations describing the process of reaching equilibrium are also obtained. This is a set of first-order nonlinear differential equations and must also be solved numerically. The detailed equilibration process is obtained for several cases and the results are discussed.
Abstract:
I. The attenuation of sound due to particles suspended in a gas was first calculated by Sewell and later by Epstein in their classical works on the propagation of sound in a two-phase medium. In their work, and in more recent works which include calculations of sound dispersion, the calculations were made for systems in which there was no mass transfer between the two phases. In the present work, mass transfer between phases is included in the calculations.
The attenuation and dispersion of sound in a two-phase condensing medium are calculated as functions of frequency. The medium in which the sound propagates consists of a gaseous phase, a mixture of inert gas and condensable vapor, which contains condensable liquid droplets. The droplets, which interact with the gaseous phase through the interchange of momentum, energy, and mass (through evaporation and condensation), are treated from the continuum viewpoint. Limiting cases, for flow either frozen or in equilibrium with respect to the various exchange processes, help demonstrate the effects of mass transfer between phases. Included in the calculation is the effect of thermal relaxation within droplets. Pressure relaxation between the two phases is examined, but is not included as a contributing factor because it is of interest only at much higher frequencies than the other relaxation processes. The results for a system typical of sodium droplets in sodium vapor are compared to calculations in which there is no mass exchange between phases. It is found that the maximum attenuation is about 25 per cent greater and occurs at about one-half the frequency for the case which includes mass transfer, and that the dispersion at low frequencies is about 35 per cent greater. Results for different values of latent heat are compared.
II. In the flow of a gas-particle mixture through a nozzle, a normal shock may exist in the diverging section of the nozzle. In Marble’s calculation for a shock in a constant area duct, the shock was described as a usual gas-dynamic shock followed by a relaxation zone in which the gas and particles return to equilibrium. The thickness of this zone, which is the total shock thickness in the gas-particle mixture, is of the order of the relaxation distance for a particle in the gas. In a nozzle, the area may change significantly over this relaxation zone so that the solution for a constant area duct is no longer adequate to describe the flow. In the present work, an asymptotic solution, which accounts for the area change, is obtained for the flow of a gas-particle mixture downstream of the shock in a nozzle, under the assumption of small slip between the particles and gas. This amounts to the assumption that the shock thickness is small compared with the length of the nozzle. The shock solution, valid in the region near the shock, is matched to the well known small-slip solution, which is valid in the flow downstream of the shock, to obtain a composite solution valid for the entire flow region. The solution is applied to a conical nozzle. A discussion of methods of finding the location of a shock in a nozzle is included.
Abstract:
Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly-varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.
Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we focus on the two ends of this spectrum. In the first half of the thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly-meshed distribution networks. We then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
Abstract:
Part I
The mechanism of the hydroformylation reaction was studied. Using cobalt deuterotetracarbonyl and 1-pentene as substrates, the first step in the reaction, addition of cobalt tetracarbonyl to an olefin, was shown to be reversible.
Part II
The role of coenzyme B12 in the isomerization of methylmalonyl coenzyme A to succinyl coenzyme A by methylmalonyl coenzyme A mutase was studied. The reaction was allowed to proceed to partial completion using a mixture of methylmalonyl coenzyme A and 4,4,4-tri-2H-methylmalonyl coenzyme A as substrate. The deuterium distribution in the product, succinyl coenzyme A, was shown to best fit a model in which hydrogen is transferred from C-4 of methylmalonyl coenzyme A to C-5' of the adenosyl moiety of coenzyme B12 in the rate-determining step. The three hydrogens at the 5'-adenosyl position of the coenzyme B12 intermediate are then able to become enzymatically equivalent before hydrogen is transferred from the coenzyme B12 intermediate to form succinyl coenzyme A.
Abstract:
Let F = Q(ζ + ζ^−1) be the maximal real subfield of the cyclotomic field Q(ζ), where ζ is a primitive qth root of unity and q is an odd rational prime. The numbers u_1 = −1, u_k = (ζ^k − ζ^−k)/(ζ − ζ^−1), k = 2, …, p, p = (q−1)/2, are units in F and are called the cyclotomic units. In this thesis the sign distribution of the conjugates in F of the cyclotomic units is studied.
Let G(F/Q) denote the Galois group of F over Q, and let V denote the units in F. For each σ ∈ G(F/Q) and μ ∈ V, define a mapping sgn_σ: V → GF(2) by sgn_σ(μ) = 1 iff σ(μ) < 0 and sgn_σ(μ) = 0 iff σ(μ) > 0. Let {σ_1, …, σ_p} be a fixed ordering of G(F/Q). The matrix M_q = (sgn_σj(v_i)), i, j = 1, …, p, is called the matrix of cyclotomic signatures. The rank of this matrix determines the sign distribution of the conjugates of the cyclotomic units. The matrix of cyclotomic signatures is associated with an ideal in the ring GF(2)[x]/(x^p + 1) in such a way that the rank of the matrix equals the GF(2)-dimension of the ideal. It is shown that if p = (q−1)/2 is a prime and if 2 is a primitive root mod p, then M_q is non-singular. Also, let p be arbitrary, let ℓ be a primitive root mod q, and let L = {i | 0 ≤ i ≤ p−1, the least positive residue of ℓ^i mod q is greater than p}. Let H_q(x) ∈ GF(2)[x] be defined by H_q(x) = g.c.d.((Σ_{i∈L} x^i)(x+1) + 1, x^p + 1). It is shown that the rank of M_q equals the difference p − degree H_q(x).
Further results are obtained by using the reciprocity theorem of class field theory. The reciprocity maps for a certain abelian extension of F and for the infinite primes in F are associated with the signs of conjugates. The product formula for the reciprocity maps is used to associate the signs of conjugates with the reciprocity maps at the primes which lie above (2). The case when (2) is a prime in F is studied in detail. Let T denote the group of totally positive units in F. Let U be the group generated by the cyclotomic units. Assume that (2) is a prime in F and that p is odd. Let F(2) denote the completion of F at (2) and let V(2) denote the units in F(2). The following statements are shown to be equivalent. 1) The matrix of cyclotomic signatures is non-singular. 2) U ∩ T = U². 3) U ∩ F(2)² = U². 4) V(2)/V(2)² = ⟨v_1 V(2)²⟩ ⊕ … ⊕ ⟨v_p V(2)²⟩ ⊕ ⟨3 V(2)²⟩.
The rank of M_q was computed for 5 ≤ q ≤ 929 and the results appear in tables. On the basis of these results and additional calculations, the following conjecture is made: if q and p = (q−1)/2 are both primes, then M_q is non-singular.
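The matrix of cyclotomic signatures is directly computable: taking ζ = e^(2πi/q), the conjugate of u_k under σ_a (ζ → ζ^a) is sin(2πak/q)/sin(2πa/q), so each sign can be evaluated numerically and the rank taken over GF(2). A sketch that reproduces the non-singularity for small q (floating-point sign evaluation is an assumption, safe well away from zeros for small q):

```python
import math

def sign_matrix(q):
    """Matrix of cyclotomic signatures M_q over GF(2) for odd prime q.
    Row k, column a: 1 iff the conjugate of u_k under sigma_a is negative,
    where u_1 = -1 and sigma_a(u_k) = sin(2*pi*a*k/q) / sin(2*pi*a/q)."""
    p = (q - 1) // 2
    M = []
    for k in range(1, p + 1):
        row = []
        for a in range(1, p + 1):
            if k == 1:
                val = -1.0  # u_1 = -1 is negative in every embedding
            else:
                val = math.sin(2 * math.pi * a * k / q) / math.sin(2 * math.pi * a / q)
            row.append(1 if val < 0 else 0)
        M.append(row)
    return M

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination (XOR rows)."""
    M = [row[:] for row in M]
    rank = 0
    ncols = len(M[0]) if M else 0
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank
```

For q = 5, 7, and 11, both q and p = (q−1)/2 are prime, and the computed rank equals p, consistent with the non-singularity theorem and the conjecture stated above.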