Abstract:
The studies reported were undertaken as part of a wide environmental feasibility study for the establishment of a modern sewage system in Freetown. The aim of this part of the study was to determine whether the hydrological regime of the Sierra Leone River Estuary would permit the large-scale introduction of sewage into the estuary without damaging the environment. The important factors were whether: 1) there would be sufficient dilution of the sewage; 2) floatable particles or other substances would create significant adverse effects in the estuarine ecosystem. The outfall sites are described together with the sampling stations, methods and analyses. Results include: 1) T/S profiles; 2) chemical analysis of the water. A review of literature on the Sierra Leone River Estuary is included which provides information on the plankton, benthos and fisheries. Results suggest that at certain points where local circulations occur it would be inadvisable to locate untreated sewage outfalls. Such points are frequently observed in small embayments. These studies have been of short duration but the data can serve as a baseline for more extended investigations which would give a more complete picture of the seasonal patterns in the estuary.
Abstract:
This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical works presented here are carried out within the Spectral Einstein Code (SpEC) infrastructure, while analytical calculations extensively utilize Wolfram's Mathematica program.
We begin by examining highly dynamical spacetimes such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad, against which quantities such as Newman-Penrose Psi_4 (usually interpreted as the gravitational wave part of curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, as well as a quasi-Kinnersley tetrad, also invariant under gauge changes in addition to being optimally suited to the task of gravitational wave extraction.
Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. Introduction, investigation and utilization of these quantities will constitute the main content in Chapters 3 through 6.
For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. We will introduce in Chapter 7 a new analytical approximation to the quasi-normal mode (QNM) frequencies, and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the spin a of the black hole approaches extremality.
Abstract:
Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.
Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.
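The multi-offset Kirchhoff migration described above is, at its core, a diffraction stack: each image point accumulates trace amplitude at the two-way traveltime from source to image point to receiver. A minimal 2-D, constant-velocity sketch (illustrative only; the thesis uses a full three-dimensional implementation, and the grid, velocity, and geometry below are assumptions):

```python
import numpy as np

def kirchhoff_migrate(traces, src_x, rec_x, dt, xs, zs, v):
    """Diffraction-stack (Kirchhoff) migration, 2-D constant velocity.

    traces       : (ntrace, nt) array of recorded amplitudes
    src_x, rec_x : surface positions of source and receiver per trace
    dt           : sample interval (s)
    xs, zs       : image grid coordinates (m)
    v            : migration velocity (m/s)
    """
    ntrace, nt = traces.shape
    image = np.zeros((len(zs), len(xs)))
    for itr in range(ntrace):
        for iz, z in enumerate(zs):
            for ix, x in enumerate(xs):
                # two-way time: source -> image point -> receiver
                t = (np.hypot(x - src_x[itr], z) +
                     np.hypot(x - rec_x[itr], z)) / v
                isamp = int(round(t / dt))
                if isamp < nt:
                    image[iz, ix] += traces[itr, isamp]
    return image
```

Because each trace sample is smeared along its isochron and the contributions are summed, energy focuses only where the isochrons from many offsets intersect, which is the back-projection property the migrations of synthetic data were designed to test.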
In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.
This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle both include increases in σ. The Moho itself includes ~400 m of layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.
Abstract:
A summary of previous research is presented indicating that a blue copper protein's fold and hydrogen bond network (the so-called rack effect) enforce a copper(II) geometry on the copper(I) ion in the metal site. In several blue copper proteins, the C-terminal histidine ligand becomes protonated and detaches from the copper in the reduced forms. Mutants of amicyanin from Paracoccus denitrificans were made to alter the hydrogen bond network and quantify the rack effect by pKa shifts.
The pKa's of mutant amicyanins have been measured by pH-dependent electrochemistry. P94F and P94A mutations loosen the Northern loop, allowing the reduced copper to adopt a relaxed conformation: the ability to relax drives the reduction potentials up. The measured potentials are 265 (wild type), 380 (P94A), and 415 (P94F) mV vs. NHE. The measured pKa's are 7.0 (wild type), 6.3 (P94A), and 5.0 (P94F). The additional hydrogen bond to the thiolate in the mutants is indicated by a red-shift in the blue copper absorption and an increase in the parallel hyperfine splitting in the EPR spectrum. This hydrogen bond is invoked as the cause for the increased stability of the C-terminal imidazole.
Melting curves give a measure of the thermal stability of the protein. A thermodynamic intermediate with pH-dependent reversibility is revealed. Comparisons with the electrochemistry and with apoamicyanin suggest that the intermediate involves the region of the protein near the metal site. This region is destabilized in the P94F mutant; this destabilization, coupled with the evidence that the imidazole is stabilized under the same conditions, confirms an original concept of the rack effect: a high energy configuration is stabilized at a cost to the rest of the protein.
Abstract:
Algorithmic DNA tiles systems are fascinating. From a theoretical perspective, they can result in simple systems that assemble themselves into beautiful, complex structures through fundamental interactions and logical rules. As an experimental technique, they provide a promising method for programmably assembling complex, precise crystals that can grow to considerable size while retaining nanoscale resolution. In the journey from theoretical abstractions to experimental demonstrations, however, lie numerous challenges and complications.
In this thesis, to examine these challenges, we consider the physical principles behind DNA tile self-assembly. We survey recent progress in experimental algorithmic self-assembly, and explain the simple physical models behind this progress. Using direct observation of individual tile attachments and detachments with an atomic force microscope, we test some of the fundamental assumptions of the widely-used kinetic Tile Assembly Model, obtaining results that fit the model to within error. We then depart from the simplest form of that model, examining the effects of DNA sticky end sequence energetics on tile system behavior. We develop theoretical models, sequence assignment algorithms, and a software package, StickyDesign, for sticky end sequence design.
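The kinetic Tile Assembly Model referenced above is parameterized by just two free energies: G_mc, set by tile concentration, and G_se, the strength of a single sticky-end bond, both in units of kT. A minimal sketch of its rate laws under the conventional parameterization (the forward rate constant k_f is an assumed typical value):

```python
import math

def ktam_rates(b, G_mc, G_se, k_f=1e6):
    """Attachment/detachment rates in the kinetic Tile Assembly Model (kTAM).

    b    : number of correct sticky-end bonds holding the tile
    G_mc : monomer-concentration free energy (kT); tile concentration
           is e^(-G_mc) M by convention
    G_se : free-energy gain per sticky-end bond (kT)
    k_f  : forward rate constant (/M/s), an assumed typical value
    """
    r_on = k_f * math.exp(-G_mc)        # attachment: concentration-dependent
    r_off = k_f * math.exp(-b * G_se)   # detachment: bond-dependent
    return r_on, r_off

def net_growth(b, G_mc, G_se):
    """Attachment by b bonds is favored when b*G_se > G_mc."""
    r_on, r_off = ktam_rates(b, G_mc, G_se)
    return r_on > r_off
```

Near the typical operating point G_mc ≈ 2·G_se, tiles held by two bonds attach faster than they fall off while tiles held by one bond do not, which is what lets a tile set enforce its logical rules kinetically.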
As a demonstration of a specific tile system, we design a binary counting ribbon that can accurately count from a programmable starting value and stop growing after overflowing, resulting in a single system that can construct ribbons of precise and programmable length. In the process of designing the system, we explain numerous considerations that provide insight into more general tile system design, particularly with regards to tile concentrations, facet nucleation, the construction of finite assemblies, and design beyond the abstract Tile Assembly Model.
Finally, we present our crystals that count: experimental results with our binary counting system that represent a significant improvement in the accuracy of experimental algorithmic self-assembly, including crystals that count perfectly with 5 bits from 0 to 31. We show some preliminary experimental results on the construction of our capping system to stop growth after counters overflow, and offer some speculation on potential future directions of the field.
Abstract:
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc>>1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. 
The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc>>1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, it is determined that prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
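The derivative-limiting idea used in the bounding algorithm can be illustrated with a classic Fritsch-Carlson-style limiter for cubic Hermite interpolation. This is a generic stand-in, not the thesis's scheme (which additionally permits physically plausible single sub-cell extrema); clamping node derivatives to at most three times the secant slope keeps the interpolant within the range of the adjacent data values:

```python
import numpy as np

def limited_hermite(x, y, xq):
    """Cubic Hermite interpolation with derivative limiting so the
    interpolant stays bounded by neighboring data values (3x rule)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xq = np.atleast_1d(np.asarray(xq, float))
    h = np.diff(x)
    m = np.diff(y) / h                       # secant slope per interval
    d = np.zeros_like(y)                     # derivative estimate per node
    d[1:-1] = 0.5 * (m[:-1] + m[1:])
    d[0], d[-1] = m[0], m[-1]
    for i in range(len(m)):                  # limit derivatives per interval
        lo, hi = min(0.0, 3.0 * m[i]), max(0.0, 3.0 * m[i])
        d[i] = min(max(d[i], lo), hi)
        d[i + 1] = min(max(d[i + 1], lo), hi)
    out = np.empty_like(xq)
    for j, xv in enumerate(xq):
        i = int(np.clip(np.searchsorted(x, xv) - 1, 0, len(h) - 1))
        t = (xv - x[i]) / h[i]
        h00 = 2 * t**3 - 3 * t**2 + 1        # Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out[j] = (h00 * y[i] + h10 * h[i] * d[i] +
                  h01 * y[i + 1] + h11 * h[i] * d[i + 1])
    return out
```

Run on step-like data, the limiter suppresses the over- and undershoots an unlimited cubic would produce, which is the sense in which such schemes trade a small amount of numerical dissipation for guaranteed scalar bounds.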
Abstract:
Understanding the roles of microorganisms in environmental settings by linking phylogenetic identity to metabolic function is a key challenge in delineating their broad-scale impact and functional diversity throughout the biosphere. This work addresses and extends such questions in the context of marine methane seeps, which represent globally relevant conduits for an important greenhouse gas. Through the application and development of a range of culture-independent tools, novel habitats for methanotrophic microbial communities were identified, established settings were characterized in new ways, and potential past conditions amenable to methane-based metabolism were proposed. Biomass abundance and metabolic activity measures – both catabolic and anabolic – demonstrated that authigenic carbonates associated with seep environments retain methanotrophic activity, not only within high-flow seep settings but also in adjacent locations exhibiting no visual evidence of chemosynthetic communities. Across this newly extended habitat, microbial diversity surveys revealed archaeal assemblages that were shaped primarily by seepage activity level and bacterial assemblages influenced more substantially by physical substrate type. In order to reliably measure methane consumption rates in these and other methanotrophic settings, a novel method was developed that traces deuterium atoms from the methane substrate into aqueous medium and uses empirically established scaling factors linked to radiotracer rate techniques to arrive at absolute methane consumption values. Stable isotope probing metaproteomic investigations exposed an array of functional diversity both within and beyond methane oxidation- and sulfate reduction-linked metabolisms, identifying components of each proposed enzyme in both pathways. A core set of commonly occurring unannotated protein products was identified as promising targets for future biochemical investigation. 
Physicochemical and energetic principles governing anaerobic methane oxidation were incorporated into a reaction transport model that was applied to putative settings on ancient Mars. Many conditions enabled exergonic model reactions, marking the metabolism and its attendant biomarkers as potentially promising targets for future astrobiological investigations. This set of inter-related investigations targeting methane metabolism extends the known and potential habitat of methanotrophic microbial communities and provides a more detailed understanding of their activity and functional diversity.
Abstract:
When salmonid redds are disrupted by spates, the displaced eggs will drift downstream. The mean distance of travel, the types of locations in which the eggs resettle and the depth of reburial of displaced eggs are not known. Investigation of these topics under field conditions presents considerable practical problems, though the use of artificial eggs might help to overcome some of them. Attempts to assess the similarities and/or differences in performance between real and artificial eggs are essential before artificial eggs can validly be used to simulate real eggs. The present report first compares the two types of egg in terms of their measurable physical characteristics (e.g. dimensions and density). The rate at which eggs fall in still water will relate to the rate at which they are likely to resettle in flowing water in the field. As the rate of fall will be influenced by a number of additional factors (e.g. shape and surface texture) which are not easily measured directly, the rates of fall of the two types of egg have been compared directly under controlled conditions. Finally, comparisons of the pattern of settlement of the two types of egg in flowing water in an experimental channel have been made. Although the work was primarily aimed at testing the value of artificial eggs as a simulation of real eggs, several side issues more directly concerned with the properties of real eggs and the likely distance of drift in natural streams have also been explored. This is the first of three reports made on this topic by the author in 1984.
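For orientation, the still-water fall rate of a small, slightly dense sphere can be estimated from a drag-law balance. This is an illustrative calculation only (the egg diameter, density, and drag correlation below are assumptions; the report compares real and artificial eggs by direct measurement precisely because shape and surface texture effects are hard to capture in such a formula):

```python
import math

def settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal fall velocity (m/s) of a sphere in still water.

    Iterates the balance (4/3) g d (rho_p - rho_f) = Cd rho_f v^2 using
    the Schiller-Naumann drag correlation, valid for Re below ~800.

    d     : diameter (m)
    rho_p : particle density (kg/m^3)
    rho_f : fluid density (kg/m^3)
    mu    : dynamic viscosity (Pa s)
    """
    v = 0.01                                  # initial guess (m/s)
    for _ in range(100):
        Re = max(rho_f * v * d / mu, 1e-12)
        Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687)
        v_new = math.sqrt(4.0 * g * d * (rho_p - rho_f) / (3.0 * Cd * rho_f))
        if abs(v_new - v) < 1e-10:            # fixed-point convergence
            break
        v = v_new
    return v
```

For a hypothetical 5 mm egg only slightly denser than water, this predicts a fall rate of roughly a tenth of a metre per second, small enough that displaced eggs could drift an appreciable distance before resettling.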
Abstract:
An article reviewing work on the seasonal variation of chemical conditions in water at various depths in lakes. The laboratory tests undertaken for the research are outlined, as well as details of the sampling locations and the staff involved with the work. One figure shows the seasonal variation in the amounts of dissolved substances in the surface water of Windermere during 1936. Another figure shows the seasonal variation in the dry weight of phyto- and zooplankton in Windermere. Seasonal changes are discussed further and a table is included showing chemical conditions in winter and summer for Windermere.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
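For controllers constrained to a sparsity pattern, quadratic invariance reduces to a simple binary test on patterns, following the Rotkowitz-Lall characterization: the pattern of K G K must never fall outside the pattern allowed for K. A minimal sketch:

```python
import numpy as np

def is_quadratically_invariant(K_pat, G_pat):
    """Sparsity-pattern test for quadratic invariance.

    K_pat : binary pattern of allowed controller entries
    G_pat : binary pattern of the plant transfer matrix
    Returns True iff the constraint set is quadratically invariant
    under the plant, i.e. pattern(K G K) <= pattern(K) elementwise.
    """
    K = (np.asarray(K_pat) != 0).astype(int)
    G = (np.asarray(G_pat) != 0).astype(int)
    KGK = (K @ G @ K > 0).astype(int)
    return bool(np.all(KGK <= K))
```

For example, a lower-triangular controller pattern is quadratically invariant with respect to a lower-triangular plant (information flows one way in both), whereas a fully decentralized (diagonal) controller generally is not when the plant couples all subsystems.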
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, which is a unifying computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
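The nuclear-norm step in the separation task above can be illustrated by its basic computational kernel, singular value thresholding, which is the proximal operator of the nuclear norm and appears inside essentially every first-order method for such problems. A minimal sketch (the threshold value in any real use would be tuned to the noise level):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_*.

    Soft-thresholds the singular values of M by tau, shrinking small
    (noise-dominated) directions to zero and so promoting low rank.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applied to a matrix whose small singular values come from the low-order local dynamics riding on top of a low-rank global response, the operator discards exactly those directions, which is the mechanism the nuclear norm penalty exploits.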
Abstract:
The objective of this investigation has been a theoretical and experimental understanding of ferromagnetic resonance phenomena in ferromagnetic thin films, and a consequent understanding of several important physical properties of these films. Significant results have been obtained by ferromagnetic resonance, hysteresis, torque magnetometer, He ion backscattering, and X-ray fluorescence measurements for nickel-iron alloy films.
Taking into account all relevant magnetic fields, including the applied, demagnetizing, effective anisotropy and exchange fields, the spin wave resonance condition applicable to the thin film geometry is presented. On the basis of the simple exchange interaction model it is concluded that the normal resonance modes of an ideal film are expected to be unpinned. The possibility of nonideality near the surface of a real film was considered by means of surface anisotropy field, inhomogeneity in demagnetizing field and inhomogeneity of magnetization models. Numerical results obtained for reasonable parameters in all cases show that they negligibly perturb the resonance fields and the higher order mode shapes from those of the unpinned modes of ideal films for thicknesses greater than 1000 Å. On the other hand for films thinner than 1000 Å the resonance field deviations can be significant even though the modes are very nearly unpinned. A previously unnoticed but important feature of all three models is that the interpretation of the first resonance mode as the uniform mode of an ideal film allows an accurate measurement of the average effective demagnetizing field over the film volume. Furthermore, it is demonstrated that it is possible to choose parameters which give indistinguishable predictions for all three models, making it difficult to uniquely ascertain the source of spin pinning in real films from resonance measurements alone.
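The perpendicular-geometry spin wave resonance condition for unpinned modes, omega/gamma = H - 4piM + (2A/M) k_n^2 with k_n = n*pi/d, can be evaluated directly. A sketch using the thesis's reported permalloy values (4piM = 10,100 Oe, A = 1.03e-6 erg/cm); the g-factor and the measurement frequency here are assumed for illustration:

```python
import math

def spinwave_fields(f_hz, d_cm, fourpiM=10100.0, A=1.03e-6, g=2.1, n_modes=5):
    """Resonance fields (Oe) of unpinned spin wave modes, film normal
    parallel to the static field:  omega/gamma = H - 4piM + (2A/M) k_n^2.

    f_hz : microwave frequency (Hz)
    d_cm : film thickness (cm)
    g    : spectroscopic g-factor (assumed value)
    """
    gamma = g * 2.0 * math.pi * 1.3996e6      # gyromagnetic ratio (rad/s/Oe)
    M = fourpiM / (4.0 * math.pi)             # saturation magnetization (G)
    H_omega = 2.0 * math.pi * f_hz / gamma    # omega/gamma in field units (Oe)
    fields = []
    for n in range(n_modes):
        k = n * math.pi / d_cm                # unpinned-mode wavenumber
        fields.append(H_omega + fourpiM - (2.0 * A / M) * k * k)
    return fields
```

The mode spacing grows quadratically in the mode number n, and it is this n^2 spacing of the higher-order resonances that lets a unique exchange constant A be extracted once the unpinned mode assignment is adopted.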
Spin wave resonance measurements of 81% Ni-19% Fe coevaporated films 30 to 9000 Å thick, at frequencies from 1 to 8 GHz, at room temperature, and with the static magnetic field parallel and perpendicular to the film plane have been performed. A self-consistent analysis of the results for films thicker than 1000 Å, in which multiple excitations can be observed, shows for the first time that a unique value of exchange constant A can only be obtained by the use of unpinned mode assignments. This evidence and the resonance behavior of films thinner than 1000 Å strongly imply that the magnetization at the surfaces of permalloy films is very weakly pinned. However, resonance measurements alone cannot determine whether this pinning is due to a surface anisotropy, an inhomogeneous demagnetizing field or an inhomogeneous magnetization. The above analysis yields a value of 4πM = 10,100 Oe and A = (1.03 ± 0.05) × 10⁻⁶ erg/cm for this alloy. The ability to obtain a unique value of A suggests that spin wave resonance can be used to accurately characterize the exchange interaction in a ferromagnet.
In an effort to resolve the ambiguity of the source of pinning of the magnetization, a correlation of the ratio of magnetic moment and X-ray film thickness with the value of effective demagnetizing field 4πNM as determined from resonance, for films 45 to 300 Å thick has been performed. The remarkable agreement of both quantities and a comparison with the predictions of five distinct models, strongly imply that the thickness dependence of both quantities is related to a thickness dependent average saturation magnetization, which is far below 10,100 Oe for very thin films. However, a series of complementary experiments shows that this large decrease of average saturation magnetization cannot be simply explained by either oxidation or interdiffusion processes. It can only be satisfactorily explained by an intrinsic decrease of the average saturation magnetization for very thin films, an effect which cannot be justified by any simple physical considerations.
Recognizing that this decrease of average saturation magnetization could be due to an oxidation process, a correlation of resonance measurements, He ion backscattering, X-ray fluorescence and torque magnetometer measurements, for films 40 to 3500 Å thick has been performed. On the basis of these measurements it is unambiguously established that the oxide layer on the surface of purposefully oxidized 81% Ni-19% Fe evaporated films is predominantly Fe-oxide, and that in the oxidation process Fe atoms are removed from the bulk of the film to depths of thousands of angstroms. Extrapolation of results for pure Fe films indicates that the oxide is most likely α-Fe2O3. These conclusions are in agreement with results from old metallurgical studies of high temperature oxidation of bulk Fe and Ni-Fe alloys. However, X-ray fluorescence results for films oxidized at room temperature show that although the preferential oxidation of Fe also takes place in these films, the extent of this process is far too small to explain the large variation of their average saturation magnetization with film thickness.