34 results for INVERSE
Abstract:
Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.
Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.
Abstract:
The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?
We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean--sea ice--ice shelf cavity model we test whether the deep ocean density structure at the LGM can be explained by ice--ocean interactions over the Antarctic continental shelves, and show that a large contribution of the LGM salinity stratification can be explained through lower ocean temperature. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM, but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison to traditional squeezing methods and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
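The thesis recovers full bottom-water histories; as a minimal illustration of the Metropolis-style Bayesian parameter estimation described above, the sketch below recovers a single LGM bottom-water value from a synthetic pore fluid profile, assuming a hypothetical one-parameter diffusion forward model (the diffusivity, depths, and noise level are illustrative values, not the thesis's).

```python
import math
import random

# Hypothetical forward model: a step change in bottom-water composition
# (from c_lgm to c_modern) at the sea floor t years ago diffuses into the
# sediment column, giving an error-function profile with depth z (meters).
D = 3.15e-3  # assumed pore fluid diffusivity, m^2/yr (illustrative)

def forward(c_lgm, depths, c_modern=0.0, t=20000.0):
    s = 2.0 * math.sqrt(D * t)
    return [c_modern + (c_lgm - c_modern) * math.erf(z / s) for z in depths]

# Synthetic "measured" profile with noise; true LGM value = 1.0 permil.
random.seed(42)
depths = [float(z) for z in range(0, 65, 5)]
sigma = 0.05
data = [m + random.gauss(0.0, sigma) for m in forward(1.0, depths)]

def log_like(c_lgm):
    model = forward(c_lgm, depths)
    return -sum((d - m) ** 2 for d, m in zip(data, model)) / (2.0 * sigma**2)

# Metropolis sampler over the single parameter c_lgm.
samples = []
c, ll = 0.5, log_like(0.5)
for step in range(6000):
    cand = c + random.gauss(0.0, 0.1)
    ll_cand = log_like(cand)
    if math.log(random.random()) < ll_cand - ll:
        c, ll = cand, ll_cand
    if step >= 1000:          # discard burn-in
        samples.append(c)

posterior_mean = sum(samples) / len(samples)
```

The point of the sampler, as in the thesis, is that it maps out the whole solution space (here, the spread of `samples`) rather than returning a single regularized estimate.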
Abstract:
A novel method for gene enrichment has been developed and applied to mapping the rRNA genes of two eucaryotic organisms. The method makes use of antibodies to DNA/RNA hybrids prepared by injecting rabbits with the synthetic hybrid poly(rA)•poly(dT). Antibodies which cross-react with non-hybrid nucleic acids were removed from the purified IgG fraction by adsorption on columns of DNA-Sepharose, oligo(dT)-cellulose, and poly(rA)-Sepharose. Subsequent purification of the specific DNA/RNA hybrid antibody was carried out on a column of oligo(dT)-cellulose to which poly(rA) was hybridized. Attachment of these antibodies to CNBr-activated Sepharose produced an affinity resin which specifically binds DNA/RNA hybrids.
In order to map the rDNA of the slime mold Dictyostelium discoideum, R-loops were formed using unsheared nuclear DNA and the 17S and 26S rRNAs of this organism. This mixture was passed through a column containing the affinity resin, and bound molecules containing R-loops were eluted by high salt. This purified rDNA was observed directly in the electron microscope. Evidence was obtained that there is a physical end to Dictyostelium rDNA molecules approximately 10 kilobase pairs (kbp) from the region which codes for the 26S rRNA. This finding is consistent with reports of other investigators that the rRNA genes exist as inverse repeats on extra-chromosomal molecules of DNA unattached to the remainder of the nuclear DNA in this organism.
The same general procedure was used to map the rRNA genes of the rat. Molecules of DNA which contained R-loops formed with the 18S and 28S rRNAs were enriched approximately 150-fold from total genomic rat DNA by two cycles of purification on the affinity column. Electron microscopic measurements of these molecules enabled the construction of an R-loop map of rat rDNA. Eleven of the observed molecules contained three or four R-loops or else two R-loops separated by a long spacer. These observations indicated that the rat rRNA genes are arranged as tandem repeats. The mean length of the repeating units was 37.2 kbp with a standard deviation of 1.3 kbp. These eleven molecules may represent repeating units of exactly the same length within the errors of the measurements, although a certain degree of length heterogeneity cannot be ruled out. If significantly shorter or longer repeating units exist, they are probably much less common than the 37.2 kbp unit.
The last section of the thesis describes the production of antibodies to non-histone chromosomal proteins which have been exposed to the ionic detergent sodium dodecyl sulfate (SDS). The presence of low concentrations of SDS did not seem to affect either production of antibodies or their general specificity. Also, a technique is described for the in situ immunofluorescent detection of protein antigens in polyacrylamide gels.
Abstract:
Isotope dilution thorium and uranium analyses of the Harleton chondrite show a larger scatter than previously observed in equilibrated ordinary chondrites (EOC). The linear correlation of Th/U with 1/U in Harleton (and all EOC data) is produced by variation in the chlorapatite to merrillite mixing ratio. Apatite variations control the U concentrations. Phosphorus variations are compensated by inverse variations in U to preserve the Th/U vs. 1/U correlation. Because the Th/U variations reflect phosphate sampling, a weighted Th/U average should converge to an improved solar system Th/U. We obtain Th/U = 3.53 (1σ mean = 0.10), significantly lower and more precise than previous estimates.
To test whether apatite also produces Th/U variation in CI and CM chondrites, we performed P analyses on the solutions from leaching experiments of Orgueil and Murchison meteorites.
A linear Th/U vs. 1/U correlation in CI can be explained by redistribution of hexavalent U by aqueous fluids into carbonates and sulfates.
Unlike CI and EOC, whole rock Th/U variations in CMs are mostly due to Th variations. A Th/U vs. 1/U linear correlation suggested by previous data for CMs is not real. We distinguish four components responsible for the whole rock Th/U variations: (1) P- and actinide-depleted matrix containing small amounts of U-rich carbonate/sulfate phases (similar to CIs); (2) CAIs and (3) chondrules, which are major reservoirs for actinides; and (4) an easily leachable phase of high Th/U, likely carbonate produced by CAI alteration. Phosphates play a minor role as actinide and P carrier phases in CM chondrites.
Using our Th/U and minimum galactic ages from halo globular clusters, we calculate relative supernova production rates for 232Th/238U and 235U/238U for different models of r-process nucleosynthesis. For uniform galactic production, the beginning of r-process nucleosynthesis must be less than 13 Gyr ago. Exponentially decreasing production is also consistent with a 13 Gyr age, but very slow decay is required (time constant greater than 35 Gyr), approaching uniform production. The 15 Gyr Galaxy requires either a fast initial production growth (infall time constant less than 0.5 Gyr) followed by a very slow decrease (decay time constant greater than 100 Gyr), or the fastest possible decrease (≈8 Gyr) preceded by slow infall (≈7.5 Gyr).
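The uniform-production case above has a simple closed form: for a constant production rate P over duration T, the abundance today is N = (P/λ)(1 − e^(−λT)), so the present 232Th/238U ratio depends only on the production ratio and the two decay constants. The sketch below uses the accepted half-lives; the production ratio of 1.7 is an illustrative assumption, not a value from the thesis.

```python
import math

# Decay constants (1/Gyr) from the accepted half-lives.
LAM_TH232 = math.log(2) / 14.05   # t_half(232Th) = 14.05 Gyr
LAM_U238 = math.log(2) / 4.468    # t_half(238U)  = 4.468 Gyr

def present_th_u(prod_ratio, duration_gyr):
    """Present-day 232Th/238U for uniform r-process production over
    duration_gyr, given an r-process production ratio prod_ratio."""
    def n(lam):
        # abundance today per unit production rate: (1 - e^{-lam T}) / lam
        return (1.0 - math.exp(-lam * duration_gyr)) / lam
    return prod_ratio * n(LAM_TH232) / n(LAM_U238)

present = present_th_u(1.7, 13.0)   # assumed production ratio, illustrative
```

Because 238U decays roughly three times faster than 232Th, the present ratio always exceeds the production ratio and grows with the assumed duration, which is what turns a measured solar system Th/U into an age constraint.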
Abstract:
Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variations in promoter sequence affect gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high precision model of RNAP-DNA sequence-dependent binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
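For the simple-repression architecture, the thermodynamic framework yields a compact closed form for the fold-change in expression; the sketch below implements that standard expression (the parameter values used are illustrative, not the thesis's fitted numbers).

```python
import math

def fold_change(num_repressors, delta_eps_kbt, n_ns=4.6e6):
    """Standard simple-repression thermodynamic model: ratio of gene
    expression with repressor present to expression with it absent.

    num_repressors : repressor copy number R per cell
    delta_eps_kbt  : repressor-operator binding energy in units of kT
                     (more negative = stronger binding)
    n_ns           : number of non-specific genomic binding sites
                     (~ length of the E. coli genome in bp)
    """
    return 1.0 / (1.0 + (num_repressors / n_ns) * math.exp(-delta_eps_kbt))

# Stronger binding (more negative energy) gives lower expression,
# i.e. stronger repression; illustrative numbers:
weak = fold_change(10, -10.0)
strong = fold_change(10, -15.0)
```

The same functional form is what lets binding-energy models of sequence feed directly into expression predictions: a sequence-dependent estimate of delta_eps_kbt plugs straight into this map.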
However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability or "noise" in gene expression can also play a biologically important role. In order to address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with model predictions.
Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well-characterized, such as the lac promoter. Motivated by this fact, we used a high throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.
Abstract:
Bulk n-InSb is investigated as a heterodyne detector for the submillimeter wavelength region. Two modes of operation are investigated: (1) the Rollin or hot electron bolometer mode (zero magnetic field), and (2) the Putley mode (quantizing magnetic field). The highlight of the thesis work is the pioneering demonstration of the Putley mode mixer at several frequencies. For example, a double-sideband system noise temperature of about 510K was obtained using an 812 GHz methanol laser for the local oscillator. This performance is at least a factor of 10 more sensitive than any other performance reported to date at the same frequency. In addition, the Putley mode mixer achieved system noise temperatures of 250K at 492 GHz and 350K at 625 GHz. The 492 GHz performance is about 50% better and the 625 GHz about 100% better than previous best performances established by the Rollin-mode mixer. To achieve these results, it was necessary to design a totally new ultra-low noise, room-temperature preamp to handle the higher source impedance imposed by Putley mode operation. This preamp has considerably less input capacitance than comparably noisy ambient designs.
In addition to advancing receiver technology, this thesis also presents several novel results regarding the physics of n-InSb at low temperatures. A Fourier transform spectrometer was constructed and used to measure the submillimeter wave absorption coefficient of relatively pure material at liquid helium temperatures and in zero magnetic field. Below 4.2K, the absorption coefficient was found to decrease with frequency much faster than predicted by Drudian theory. Much better agreement with experiment was obtained using a quantum theory based on inverse bremsstrahlung in a solid. Also, the noise of the Rollin-mode detector at 4.2K was accurately measured and compared with theory. The power spectrum is found to be well fit by a recent theory of non-equilibrium noise due to Mather. Surprisingly, when biased for optimum detector performance, high purity InSb cooled to liquid helium temperatures generates less noise than that predicted by simple non-equilibrium Johnson noise theory alone. This explains in part the excellent performance of the Rollin-mode detector in the millimeter wavelength region.
Again using the Fourier transform spectrometer, spectra are obtained of the responsivity and direct detection NEP as a function of magnetic field in the range 20-110 cm⁻¹. The results show a discernible peak in the detector response at the conduction electron cyclotron resonance frequency for magnetic fields as low as 3 kG at bath temperatures of 2.0K. The spectra also display the well-known peak due to the cyclotron resonance of electrons bound to impurity states. The magnitude of responsivity at both peaks is roughly constant with magnetic field and is comparable to the low frequency Rollin-mode response. The NEP at the peaks is found to be much better than previous values at the same frequency and comparable to the best long wavelength results previously reported. For example, a value NEP = 4.5×10⁻¹³ W/Hz^1/2 is measured at 4.2K, 6 kG and 40 cm⁻¹. Study of the responsivity under conditions of impact ionization showed a dramatic disappearance of the impurity electron resonance while the conduction electron resonance remained constant. This observation offers the first concrete evidence that the mobility of an electron in the N=0 and N=1 Landau levels is different. Finally, these direct detection experiments indicate that the excellent heterodyne performance achieved at 812 GHz should be attainable up to frequencies of at least 1200 GHz.
Abstract:
This work is concerned with a general analysis of wave interactions in periodic structures and particularly periodic thin film dielectric waveguides.
The electromagnetic wave propagation in an asymmetric dielectric waveguide with a periodically perturbed surface is analyzed in terms of a Floquet mode solution. First order approximate analytical expressions for the space harmonics are obtained. The solution is used to analyze various applications: (1) phase matched second harmonic generation in periodically perturbed optical waveguides; (2) grating couplers and thin film filters; (3) Bragg reflection devices; (4) the calculation of the traveling wave interaction impedance for solid state and vacuum tube optical traveling wave amplifiers which utilize periodic dielectric waveguides. Some of these applications are of interest in the field of integrated optics.
A special emphasis is put on the analysis of traveling wave interaction between electrons and electromagnetic waves in various operation regimes. Interactions with a finite temperature electron beam at the collision-dominated, collisionless, and quantum regimes are analyzed in detail assuming a one-dimensional model and longitudinal coupling.
The analysis is used to examine the possibility of solid state traveling wave devices (amplifiers, modulators), and some monolithic structures of these devices are suggested, designed to operate at the submillimeter-far infrared frequency regime. The estimates of attainable traveling wave interaction gain are quite low (on the order of a few inverse centimeters). However, the possibility of attaining net gain with different materials, structures and operation condition is not ruled out.
The developed model is used to discuss the possibility and the theoretical limitations of high frequency (optical) operation of vacuum electron beam tubes, and the relations to other electron-electromagnetic wave interaction effects (Smith-Purcell and Cerenkov radiation and the free electron laser) are pointed out. Finally, the case where the periodic structure is the natural crystal lattice is briefly discussed. The longitudinal component of optical space harmonics in the crystal is calculated and found to be of the order of magnitude of the macroscopic wave, and some comments are made on the possibility of coherent bremsstrahlung and distributed feedback lasers in single crystals.
Abstract:
The influence of composition on the structure and on the electric and magnetic properties of amorphous Pd-Mn-P and Pd-Co-P prepared by rapid quenching techniques was investigated in terms of (1) the 3d band filling of the first transition metal group, (2) the effect of the phosphorus concentration, which acts as an electron donor, and (3) the transition metal concentration.
The structure is characterized by a set of polyhedral subunits, essentially the inverse of the packing of hard spheres in real space. Examination of computer-generated distribution functions using a Monte Carlo random statistical distribution of these polyhedral entities demonstrated the reproducibility of the experimentally calculated atomic distribution function. As a result, several possible "structural parameters" are proposed, such as: the number of nearest neighbors, the metal-to-metal distance, the degree of short-range order, and the affinity between metal-metal and metal-metalloid. It is shown that the degree of disorder increases from Ni to Mn. Similar behavior is observed with increase in the phosphorus concentration.
The magnetic properties of Pd-Co-P alloys show that they are ferromagnetic, with a Curie temperature between 272 and 399°K as the cobalt concentration increases from 15 to 50 at.%. Below 20 at.% Co the short-range exchange interactions which produce the ferromagnetism are unable to establish a long-range magnetic order, and a peak in the magnetization shows up at the lowest temperatures. The electric resistivity measurements were performed from liquid helium temperatures up to the vicinity of the melting point (900°K). The thermomagnetic analysis was carried out under an applied field of 6.0 kOe. The electrical resistivity of Pd-Co-P shows the coexistence of a Kondo-like minimum with ferromagnetism. The minimum becomes less important as the transition metal concentration increases, and the coefficients of ℓn T and T^2 become smaller and strongly temperature dependent. The negative magnetoresistivity is a strong indication of the existence of localized moments.
The temperature coefficient of resistivity, which is positive for Pd-Fe-P, Pd-Ni-P, and Pd-Co-P, becomes negative for Pd-Mn-P. It is possible to account for the negative temperature dependence by the localized spin fluctuation model and the high density of states at the Fermi energy, which becomes maximum between Mn and Cr. The magnetization curves for Pd-Mn-P are typical of those resulting from the interplay of different exchange forces. The established relationship between susceptibility and resistivity confirms the localized spin fluctuation model. The magnetoresistivity of Pd-Mn-P could be interpreted in terms of a short-range magnetic ordering that could arise from Ruderman-Kittel type interactions.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
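As a concrete instance of case (i), the sketch below solves a small noiseless lasso problem with the iterative soft-thresholding algorithm (ISTA) and recovers a sparse signal from an underdetermined Gaussian measurement matrix. The dimensions, support, and regularization weight are arbitrary illustrative choices, and ISTA is just one standard solver for the lasso formulation, not a method claimed by the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined linear inverse problem: y = A x0, with x0 sparse.
n, p, k = 30, 60, 3
A = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p)
x0[[5, 17, 41]] = 1.0        # arbitrary 3-sparse support
y = A @ x0

# Lasso via ISTA: x <- soft_threshold(x - (1/L) A^T (A x - y), lam / L),
# where L bounds the largest eigenvalue of A^T A (squared spectral norm).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(p)
for _ in range(3000):
    g = x - (A.T @ (A @ x - y)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

recovered_support = set(np.flatnonzero(np.abs(x) > 0.5))
```

Ordinary least squares has no unique solution here (n < p); the ℓ1 penalty is what incorporates the sparse signal model and makes the reconstruction well posed, which is exactly the regime the generalized error bounds address.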
Abstract:
How animals use sensory information to weigh the risks vs. benefits of behavioral decisions remains poorly understood. Inter-male aggression is triggered when animals perceive both the presence of an appetitive resource, such as food or females, and of competing conspecific males. How such signals are detected and integrated to control the decision to fight is not clear. Here we use the vinegar fly, Drosophila melanogaster, to investigate the manner in which food and females promote aggression.
In the first chapter, we explore how food controls aggression. As in many other species, food promotes aggression in flies, but it is not clear whether food increases aggression per se, or whether aggression is a secondary consequence of increased social interactions caused by aggregation of flies on food. Furthermore, nothing is known about how animals evaluate the quality and quantity of food in the context of competition. We show that food promotes aggression independently of any effect to increase the frequency of contact between males. Food increases aggression but not courtship between males, suggesting that the effect of food on aggression is specific. Next, we show that flies tune the level of aggression according to the absolute amount of food rather than other parameters, such as area or concentration of food. Sucrose, a sugar molecule present in many fruits, is sufficient to promote aggression, and detection of sugar via gustatory receptor neurons is necessary for food-promoted aggression. Furthermore, we show that while food is necessary for aggression, too much food decreases aggression. Finally, we show that flies exhibit behavior consistent with a territorial strategy. These data suggest that flies use sweet-sensing gustatory information to guide their decision to fight over a limited quantity of a food resource.
Following up on the findings of the first chapter, we asked how the presence of a conspecific female resource promotes male-male aggression. In the absence of food, group-housed male flies, who normally do not fight even in the presence of food, fight in the presence of females. Unlike food, the presence of females strongly influences proximity between flies. Nevertheless, as group-housed flies do not fight even when they are in small chambers, it is unlikely that the presence of females indirectly increases aggression by first increasing proximity. Unlike food, the presence of females also leads to large increases in locomotion and in male-female courtship behaviors, suggesting that females may influence aggression as well as general arousal. Female cuticular hydrocarbons are required for this effect, as females that do not produce CH pheromones are unable to promote male-male aggression. In particular, 7,11-HD––a female-specific cuticular hydrocarbon pheromone critical for male-female courtship––is sufficient to mediate this effect when it is perfumed onto pheromone-deficient females or males. Recent studies showed that ppk23 labels two populations of GRNs: one that detects male cuticular hydrocarbons, and another, co-labeled by ppk23 and ppk25, that detects female cuticular hydrocarbons. I show that both of these GRN classes control aggression, presumably via detection of male or female pheromones. To further investigate the ways in which these two classes of GRNs control aggression, I developed new genetic tools to independently test the male- and female-sensing GRNs. I show that ppk25-LexA and ppk25-GAL80 faithfully recapitulate the expression pattern of ppk25-GAL4 and label a subset of ppk23+ GRNs. These tools can be used in future studies to dissect the respective functions of male-sensing and female-sensing GRNs in male social behaviors.
Finally, in the last chapter, I discuss quantitative approaches to describe how varying quantities of food and females could control the level of aggression. Flies show an inverse-U shaped aggressive response to varying quantities of food and a flat aggressive response to varying quantities of females. I show how two simple game theoretic models, “prisoner’s dilemma” and “coordination game” could be used to describe the level of aggression we observe. These results suggest that flies may use strategic decision-making, using simple comparisons of costs and benefits.
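The two games named above differ precisely in their pure-strategy Nash equilibria; the sketch below computes those equilibria for a generic symmetric 2x2 game, with illustrative payoff numbers that are not fitted to the fly data.

```python
def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game.

    payoff[a][b] is the row player's payoff for playing action a
    against action b; by symmetry the column player gets payoff[b][a].
    A profile (a, b) is an equilibrium when neither player gains by
    unilaterally switching to the other action.
    """
    equilibria = []
    for a in (0, 1):
        for b in (0, 1):
            row_best = payoff[a][b] >= payoff[1 - a][b]
            col_best = payoff[b][a] >= payoff[1 - b][a]
            if row_best and col_best:
                equilibria.append((a, b))
    return equilibria

# Action 0 = share, action 1 = fight (illustrative payoffs).
prisoners_dilemma = [[3, 0],   # mutual sharing beats mutual fighting,
                     [5, 1]]   # but fighting is always the best reply
coordination_game = [[4, 0],   # matching the opponent's action pays,
                     [0, 2]]   # so two self-reinforcing outcomes exist
```

In the prisoner's dilemma the unique equilibrium is mutual fighting even though mutual sharing pays more, whereas the coordination game supports both matched outcomes; which game better describes the flies depends on how resource value and fighting cost map onto the payoffs.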
In conclusion, male-male aggression in Drosophila is controlled by simple gustatory cues from food and females, which are detected by gustatory receptor neurons. Different quantities of resource cues lead to different levels of aggression, and flies show putative territorial behavior, suggesting that fly aggression is a highly strategic adaptive behavior. How these resource cues are integrated with male pheromone cues and give rise to this complex behavior is an interesting subject, which should keep researchers busy in the coming years.
Abstract:
We develop a logarithmic potential theory on Riemann surfaces which generalizes logarithmic potential theory on the complex plane. We show the existence of an equilibrium measure and examine its structure. This leads to a formula for the structure of the equilibrium measure which is new even in the plane. We then use our results to study quadrature domains, Laplacian growth, and Coulomb gas ensembles on Riemann surfaces. We prove that the complement of the support of the equilibrium measure satisfies a quadrature identity. Furthermore, our setup allows us to naturally realize weak solutions of Laplacian growth (for a general time-dependent source) as an evolution of the support of equilibrium measures. When applied to the Riemann sphere this approach unifies the known methods for generating interior and exterior Laplacian growth. We later narrow our focus to a special class of quadrature domains which we call Algebraic Quadrature Domains. We show that many of the properties of quadrature domains generalize to this setting. In particular, the boundary of an Algebraic Quadrature Domain is the inverse image of a planar algebraic curve under a meromorphic function. This makes the study of the topology of Algebraic Quadrature Domains an interesting problem. We briefly investigate this problem and then narrow our focus to the study of the topology of classical quadrature domains. We extend the results of Lee and Makarov and prove (for n ≥ 3) c ≤ 5n-5, where c and n denote the connectivity and degree of a (classical) quadrature domain. At the same time we obtain a new upper bound on the number of isolated points of the algebraic curve corresponding to the boundary and thus a new upper bound on the number of special points. In the final chapter we study Coulomb gas ensembles on Riemann surfaces.
Abstract:
Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n D matrices is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dIₙ whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.
Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n element basis over D.
The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).
The case D = Z and n = 2 is studied in detail. Let Δ′ be an integral domain also having quotient field F(θ) and such that Δ′ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ′/Z symrep ψ such that the Δ′ ideal class corresponding to the class of ψ is an extension to Δ′ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.
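For concreteness, a minimal numerical sketch of the D = Z, n = 2 case (not taken from the thesis; the choices θ = √2, Δ = Z[√2], and the particular matrix S are illustrative): the symmetric integer matrix S below has characteristic polynomial x² − 2, so θ ↦ S extends to a Δ/Z symrep a + bθ ↦ aI + bS, whose images are all symmetric integer matrices.

```python
import numpy as np

# Illustrative Delta/Z symrep for Delta = Z[sqrt(2)]: S is symmetric with
# characteristic polynomial x^2 - 2 (trace 0, determinant -2).
S = np.array([[1, 1],
              [1, -1]])

def symrep(a, b):
    """Image of a + b*sqrt(2) under the map a + b*theta -> a*I + b*S."""
    return a * np.eye(2, dtype=int) + b * S

# (i) the relation theta^2 = 2 is respected:
assert np.array_equal(S @ S, 2 * np.eye(2, dtype=int))

# (ii) the map is multiplicative:
# (a1 + b1*theta)(a2 + b2*theta) = (a1*a2 + 2*b1*b2) + (a1*b2 + a2*b1)*theta
a1, b1, a2, b2 = 3, 2, -1, 4
lhs = symrep(a1, b1) @ symrep(a2, b2)
rhs = symrep(a1 * a2 + 2 * b1 * b2, a1 * b2 + a2 * b1)
assert np.array_equal(lhs, rhs)

# (iii) every image is symmetric, so this rep is a symrep:
M = symrep(a1, b1)
assert np.array_equal(M, M.T)
print("Z[sqrt(2)]/Z symrep checks passed")
```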
Abstract:
Two new phenomena have been observed in Mössbauer spectra: a temperature-dependent shift of the center of gravity of the spectrum, and an asymmetric broadening of the spectrum peaks. Both phenomena were observed in thulium salts. In the temperature range 1 °K ≤ T ≤ 5 °K the observed shift has an approximate inverse temperature dependence. We explain this on the basis of a Van Vleck type of interaction between the magnetic moment of two nearly degenerate electronic levels and the magnetic moment of the nucleus. From the size of the shift we are able to deduce an “effective magnetic field” H = (6.0 ± 0.1) × 10⁶ Gauss, which is proportional to ⟨r⁻³⟩ₘ⟨G|J|E⟩, where ⟨r⁻³⟩ₘ is an effective magnetic radial integral for the 4f electrons and |G⟩ and |E⟩ are the lowest 4f electronic states in TmCl₃·6H₂O. From the temperature dependence of the shift we have derived a preliminary value of 1 cm⁻¹ for the splitting of these two states. The observed asymmetric line broadening is independent of temperature in the range 1 °K ≤ T ≤ 5 °K, but is dependent on the concentration of thulium ions in the crystal. We explain this broadening on the basis of spin-spin interactions between thulium ions. From the size and concentration dependence of the broadening we are able to deduce a spin-spin relaxation time for TmCl₃·6H₂O of the order of 10⁻¹¹ sec.
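The approximate 1/T behavior can be illustrated with a schematic two-level sketch (this is not the analysis in the thesis; the tanh population-difference form and the use of δ = 1 cm⁻¹ are assumptions made purely for illustration): a shift carried by the Boltzmann population difference of two levels split by δ scales as tanh(δ/2k_BT), which is close to (δ/2k_B)/T over 1 K ≤ T ≤ 5 K when δ ≈ 1 cm⁻¹.

```python
import math

kB = 0.695      # Boltzmann constant in cm^-1 per Kelvin
delta = 1.0     # assumed splitting of the two lowest 4f levels, cm^-1

def population_difference(T):
    """Fractional Boltzmann population difference of two levels split by delta."""
    return math.tanh(delta / (2.0 * kB * T))

# Compare the exact two-level expression with the pure 1/T approximation:
for T in (1.0, 2.0, 5.0):
    print(T, population_difference(T), (delta / (2.0 * kB)) / T)
```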
Abstract:
The Maxwell integral equations of transfer are applied to a series of problems involving flows of arbitrary-density gases about spheres. As suggested by Lees, a two-sided Maxwellian-like weighting function containing a number of free parameters is utilized, and a sufficient number of partial differential moment equations is used to determine these parameters. Maxwell's inverse fifth-power force law is used to simplify the evaluation of the collision integrals appearing in the moment equations. All flow quantities are then determined by integration of the weighting function which results from the solution of the differential moment system. Three problems are treated: the heat flux from a slightly heated sphere at rest in an infinite gas; the velocity field and drag of a slowly moving sphere in an unbounded space; and the velocity field and drag torque on a slowly rotating sphere. Solutions to the third problem are found to both first and second order in surface Mach number, with the secondary centrifugal-fan motion being of particular interest. Singular aspects of the moment method are encountered in the last two problems, and an asymptotic study of these difficulties leads to a formal criterion for a "well posed" moment system. The previously unanswered question of just how many moments must be used in a specific problem is now clarified to a great extent.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify that structure at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool is a thorough understanding of the signal formation process, a careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
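The importance-sampling idea can be sketched in a toy one-dimensional setting (illustrative only; the geometry, coefficients, and function names below are not from the thesis): free paths are drawn from a biased exponential that favors deep penetration, and each sample carries a weight equal to the ratio of the true to the biased pdf, so the estimator stays unbiased while rare deep-reaching photons are sampled far more often.

```python
import math
import random

def estimate_transmission(mu_t, L, mu_b, n, seed=0):
    """Estimate exp(-mu_t * L), the chance a photon crosses a slab of depth L,
    by sampling free paths from a biased exponential with rate mu_b and
    reweighting each sample by the pdf ratio p_true / p_biased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(mu_b)                           # biased free path
        w = (mu_t / mu_b) * math.exp(-(mu_t - mu_b) * s)    # pdf ratio weight
        if s > L:                                           # crossed the slab
            total += w
    return total / n

mu_t, L = 10.0, 1.0                   # exact answer: exp(-10) ~ 4.5e-5
naive  = estimate_transmission(mu_t, L, mu_t, 100_000)   # no biasing: few hits
biased = estimate_transmission(mu_t, L, 1.0,  100_000)   # favor long paths
print(naive, biased, math.exp(-mu_t * L))
```

With the unbiased rate, only a handful of the 100,000 photons ever cross the slab, so the estimate is noisy; the biased version hits the deep region about a third of the time and the weights keep the mean correct with far lower variance. Photon splitting is complementary: a photon that reaches an important region is cloned into k copies, each carrying 1/k of its weight.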
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better than that. For simple structures we are able to reconstruct the ground truth of an OCT image with more than 98% accuracy, and for more complicated structures (e.g., a multi-layered brain structure) with about 93%. We achieve this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
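The two-stage committee-of-experts idea can be sketched with synthetic data (a minimal stand-in under stated assumptions: toy features, a nearest-class-mean classifier, and linear least-squares experts; none of the data or models below are from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two structure types; an image is summarized by two
# features whose mean encodes the type, and the layer size is a linear
# function of the features plus small noise.
def make_data(n, structure):
    X = rng.normal(loc=2.0 * structure, scale=0.3, size=(n, 2))
    y = X @ np.array([1.5, -0.5]) + structure + rng.normal(0, 0.01, n)
    return X, y

X0, y0 = make_data(200, 0)
X1, y1 = make_data(200, 1)

# Stage 1 "classifier": nearest class mean on the features.
means = np.array([X0.mean(axis=0), X1.mean(axis=0)])
def classify(x):
    return int(np.argmin(((means - x) ** 2).sum(axis=1)))

# Stage 2: one "expert" regressor per structure, fit by least squares (+ bias).
def fit(X, y):
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

experts = [fit(X0, y0), fit(X1, y1)]

def predict(x):
    k = classify(x)                       # step 1: which structure?
    return k, np.append(x, 1.0) @ experts[k]  # step 2: that structure's expert

Xt, yt = make_data(50, 1)
preds = np.array([predict(x)[1] for x in Xt])
print("mean abs error:", np.abs(preds - yt).mean())
```

The design point this illustrates is the hand-off: the classifier only has to get the discrete structure right, after which a specialist model, trained on data of that structure alone, does the fine-grained regression.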
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: previously the lower half of an OCT image (i.e., the greater depths) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt at reconstructing OCT images at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.