27 results for D. NBLC model

in CaltechTHESIS


Relevance: 80.00%

Publisher:

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
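
The large-budget vs. small-budget heuristic can be made concrete with a small exact computation. The sketch below (hypothetical parameter values, assuming a unit-size object and the random-subset access model described above) evaluates the recovery probability of a symmetric allocation via a hypergeometric sum:

```python
from math import ceil, comb

def recovery_prob(n, m, T, r):
    """Probability that a collector reading a uniform random r-subset of
    n nodes recovers a unit-size object, when a budget T is spread
    symmetrically over m nonempty nodes (T/m per node).  Recovery
    succeeds when the accessed storage totals at least 1, i.e. when at
    least ceil(m/T) of the accessed nodes are nonempty."""
    k_min = ceil(m / T)
    return sum(comb(m, k) * comb(n - m, r - k)
               for k in range(k_min, min(m, r) + 1)) / comb(n, r)

# Large budget: spreading over all nodes guarantees recovery...
wide = recovery_prob(n=10, m=10, T=3, r=4)        # 1.0
narrow = recovery_prob(n=10, m=3, T=3, r=4)       # 175/210
# ...while for a small budget, concentrating the data wins.
small_narrow = recovery_prob(n=10, m=1, T=1, r=4)   # 0.4
small_wide = recovery_prob(n=10, m=10, T=1, r=4)    # 0.0
```

With a large budget (T = 3), spreading over all ten nodes beats concentrating on three; with a small budget (T = 1), storing the whole object on one node is strictly better than spreading, matching the heuristic above.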

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
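
For the i.i.d. erasure model, the success probability of an idealized block code (any k of n packets suffice; this MDS idealization is an assumption for illustration, not the intrasession codes analyzed in the thesis) reduces to a binomial tail, which a quick Monte Carlo run can check:

```python
import random
from math import comb

def decode_prob(n, k, eps):
    """Probability of decoding a k-packet message sent as n coded
    packets over a channel that erases each packet independently with
    probability eps, assuming any k received packets suffice."""
    return sum(comb(n, j) * (1 - eps) ** j * eps ** (n - j)
               for j in range(k, n + 1))

def simulate(n, k, eps, trials=20000, seed=1):
    """Monte Carlo check of decode_prob on the i.i.d. erasure channel."""
    rng = random.Random(seed)
    ok = sum(sum(rng.random() > eps for _ in range(n)) >= k
             for _ in range(trials))
    return ok / trials
```

For example, `decode_prob(2, 1, 0.5)` is 0.75: the message survives unless both packets are erased.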

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
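
The least-recently-used replacement policy of the baseline protocol can be sketched in a few lines (a generic LRU cache, not the packet-level simulator itself):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache with least-recently-used replacement, the
    policy combined with shortest-path routing in the baseline above."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, name):
        """Return True on a cache hit; on a miss, insert the item and
        evict the least recently used entry if over capacity."""
        if name in self.store:
            self.store.move_to_end(name)    # mark as most recently used
            return True
        self.store[name] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry
        return False

cache = LRUCache(2)
hits = [cache.access(x) for x in ["a", "b", "a", "c", "b"]]
# "b" was evicted when "c" arrived, so the final access misses:
# hits == [False, False, True, False, False]
```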

Relevance: 30.00%

Publisher:

Abstract:

We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_(MS-bar) = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm that is vectorizable and parallelizable.

We also use the cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two dimensional Ising spin model.
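
A serial analogue of the labeling task is straightforward with union-find; the SIMD algorithms of the thesis parallelize this step on the Connection Machine, but the input/output contract is the same:

```python
def label_clusters(bonds, n_sites):
    """Union-find connected-component labeling: given bonds as (i, j)
    site pairs, return one label per site such that connected sites
    share a label (a serial sketch of the cluster-identification step)."""
    parent = list(range(n_sites))

    def find(x):                  # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in bonds:            # merge the two clusters
        parent[find(i)] = find(j)
    return [find(x) for x in range(n_sites)]

# Sites 0-1-2 form one cluster; 3 is isolated; 4-5 form another.
labels = label_clusters([(0, 1), (1, 2), (4, 5)], 6)
```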

Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another improved probabilistic block transformation we find the RT, showing that it is much closer to the Standard Action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. For the latter case we see agreement for m/Λ_(MS-bar) at β = 2.14, 2.26, 2.38 and 2.50. To three loops m/Λ_(MS-bar) = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_(MS-bar) = 2.943. Our last point at β = 2.62 disagrees with this estimate, however.

Relevance: 30.00%

Publisher:

Abstract:

Home to hundreds of millions of souls and land of excessiveness, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruin, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the "creeping barriers" that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties.
What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is increased back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts: nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations reproduce a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the produced events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing the constitutive properties of the MHT from seismological observations.
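
As a toy version of the spring-and-slider analysis above, the following explicit-Euler sketch drives a single rate-strengthening slider with a harmonic stress perturbation and measures the resulting slip-rate modulation. The friction form and all parameter values are illustrative assumptions; the near-velocity-neutral amplification analyzed in the thesis is not modeled here.

```python
import math

def slip_rate_modulation(period, a_sigma=0.5, k=1.0, v0=1.0, dtau=0.05,
                         periods=10):
    """Explicit-Euler sketch of a spring-slider with a rate-strengthening
    friction law, v = v0*exp(tau/a_sigma), loaded at velocity v0 and
    perturbed by a harmonic stress of amplitude dtau and the given
    period.  Returns the peak-to-peak slip-rate modulation after the
    transient.  Parameter values are illustrative, not from the thesis."""
    omega = 2.0 * math.pi / period
    dt = period / 2000.0
    steps = int(periods * period / dt)
    tau = 0.0                       # stress relative to steady state
    vmax, vmin = v0, v0
    for n in range(steps):
        t = n * dt
        v = v0 * math.exp(tau / a_sigma)
        # spring relaxes toward steady sliding; the external stressing
        # rate is the time derivative of dtau*sin(omega*t)
        tau += dt * (k * (v0 - v) + dtau * omega * math.cos(omega * t))
        if n > steps // 2:          # discard the transient
            vmax, vmin = max(vmax, v), min(vmin, v)
    return vmax - vmin
```

In this crude setup the slip-rate response depends on the forcing period, which is the qualitative point: periodic stresses of equal amplitude but different period modulate the creep rate by different amounts.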

Relevance: 30.00%

Publisher:

Abstract:

Threefold symmetric Fe phosphine complexes have been used to model the structural and functional aspects of biological N2 fixation by nitrogenases. Low-valent bridging Fe-S-Fe complexes in the formal oxidation states Fe(II)/Fe(II), Fe(II)/Fe(I), and Fe(I)/Fe(I) have been synthesized which display rich spectroscopic and magnetic behavior. A series of cationic tris-phosphine borane (TPB) ligated Fe complexes have been synthesized and shown to bind a variety of nitrogenous ligands including N2H4, NH3, and NH2-. These complexes are all high spin S = 3/2 and display EPR and magnetic characteristics typical of this spin state. Furthermore, a sequential protonation and reduction sequence of a terminal amide results in loss of NH3 and uptake of N2. These stoichiometric transformations represent the final steps in potential N2 fixation schemes.

Treatment of an anionic FeN2 complex with excess acid also results in the formation of some NH3, suggesting the possibility of a catalytic cycle for the conversion of N2 to NH3 mediated by Fe. Indeed, use of excess acid and reductant results in the formation of seven equivalents of NH3 per Fe center, demonstrating Fe-mediated catalytic N2 fixation with acids and reductants for the first time. Numerous control experiments indicate that this catalysis is likely mediated by a molecular species.

A number of other phosphine ligated Fe complexes have also been tested for catalysis and suggest that a hemi-labile Fe-B interaction may be critical for catalysis. Additionally, various conditions for the catalysis have been investigated. These studies further support the assignment of a molecular species and delineate some of the conditions required for catalysis.

Finally, combined spectroscopic studies have been performed on a putative intermediate for catalysis. These studies converge on an assignment of this new species as a hydrazido(2-) complex. Such species have been known on group 6 metals for some time, but this represents the first characterization of this ligand on Fe. Further spectroscopic studies suggest that this species is present in catalytic mixtures, which suggests that the first steps of a distal mechanism for N2 fixation are feasible in this system.

Relevance: 30.00%

Publisher:

Abstract:

A set of coupled-channel differential equations based on a rotationally distorted optical potential is used to calculate the wave functions required to evaluate the gamma ray transition rate from the first excited state to the ground state in ^(13)C and ^(13)N. The bremsstrahlung differential cross section of low energy protons is also calculated and compared with existing data. The marked similarity between the potentials determined at each resonance level in both nuclei supports the hypothesis of the charge symmetry of nuclear forces by explaining the deviation of the ratios of the experimental E1 transition strengths from unity.

Relevance: 30.00%

Publisher:

Abstract:

This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three body final states is developed in detail, both with respect to kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show the coupling of two "Reggeons" to an external particle to be strongly damped as the particle's mass increases. Finally, with the use of two body Regge parameters, order of magnitude estimates of the multi-Regge cross section for various reactions are given.

The second part presents a diffraction model for high energy proton-proton scattering. This model, developed by Chou and Yang, assumes that high energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the scattering amplitude for pp scattering to the proton form factor. The Chou-Yang model with the empirical proton form factor as input is then applied to calculate a high energy, fixed momentum transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below them in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit type term which gives rather good agreement with pp polarization data.

Relevance: 30.00%

Publisher:

Abstract:

The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35-60 Hz modulated activity, which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than as point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators are responsible for the oscillatory activities. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transformation between odor input and bulbar output to the olfactory cortex. This model can also be extended to other brain areas such as the hippocampus, thalamus, and neocortex, which show oscillatory neural activities. There is significant correspondence between the model behavior and observed electrophysiology.
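
The emergence of a coherent population oscillation from coupled nonlinear oscillators can be illustrated with a generic phase-oscillator toy model (a Kuramoto network with assumed parameters, not the biophysical bulb model described above):

```python
import math, random

def kuramoto_order(n=20, coupling=2.0, steps=4000, dt=0.01, seed=0):
    """Kuramoto phase oscillators with nearby natural frequencies.
    Returns the final order parameter r in [0, 1]; r near 1 means the
    population oscillates coherently, analogous to the coherent bulbar
    activity described above."""
    rng = random.Random(seed)
    freq = [1.0 + 0.1 * rng.gauss(0.0, 1.0) for _ in range(n)]
    phase = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        cx = sum(math.cos(p) for p in phase) / n
        cy = sum(math.sin(p) for p in phase) / n
        r, psi = math.hypot(cx, cy), math.atan2(cy, cx)
        # mean-field coupling: each phase is pulled toward the mean phase
        phase = [p + dt * (w + coupling * r * math.sin(psi - p))
                 for p, w in zip(phase, freq)]
    cx = sum(math.cos(p) for p in phase) / n
    cy = sum(math.sin(p) for p in phase) / n
    return math.hypot(cx, cy)
```

With strong coupling the oscillators lock into a single coherent rhythm; with the coupling removed, the population stays incoherent.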

It has also been suggested that the olfactory bulb, the first processing center after the sensory cells in the olfactory pathway, plays a role in olfactory adaptation, odor sensitivity enhancement by motivation, and other olfactory psychophysical phenomena. The input from the higher olfactory centers to the inhibitory cells in the bulb is shown to be able to modulate the response, and thus the sensitivity, of the bulb to odor input. It follows that the bulb can decrease its sensitivity to a pre-existing and detected odor (adaptation) while remaining sensitive to new odors, or can increase its sensitivity to discover interesting new odors. Other olfactory psychophysical phenomena such as cross-adaptation are also discussed.

Relevance: 30.00%

Publisher:

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In the thesis, we propose some efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.

For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part of which the magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth, and could be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.

For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For tougher problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
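
The role of the KL expansion, extracting the dominant modes of a stochastic quantity from its covariance, can be sketched with plain power iteration (an illustrative stand-in for a proper eigensolver; the covariance kernel and grid below are hypothetical):

```python
import math

def leading_kl_mode(cov, iters=500):
    """Power iteration for the dominant eigenpair of a covariance
    matrix, i.e. the leading term of the Karhunen-Loeve expansion:
    the 'main part' of the stochastic quantity that the expansion keeps."""
    n = len(cov)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))  # eigenvalue estimate
        v = [x / lam for x in w]                # renormalize the mode
    return lam, v

# Exponential covariance C(x, y) = exp(-|x - y|) on a coarse grid
grid = [i / 4.0 for i in range(5)]
cov = [[math.exp(-abs(x - y)) for y in grid] for x in grid]
lam, mode = leading_kl_mode(cov)
```

Keeping a few such eigenpairs in place of the full covariance is exactly the truncation that makes the KL expansion useful for model reduction.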

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the computational time cost reduction in the numerical examples.

Relevance: 30.00%

Publisher:

Abstract:

Evidence for the stereochemical isomerization of a variety of ansa metallocene compounds is presented. For the scandocene allyl derivatives described here, we have established that the process is promoted by a variety of salts in both ether and hydrocarbon solvents and is not accelerated by light. A plausible mechanism based on an earlier proposal by Marks et al. is offered as an explanation of this process. It involves coordination of anions and/or donor solvents to the metal center with cation assistance to encourage metal-cyclopentadienyl bond heterolysis, rotation about the Si-Cp bond of the detached cyclopentadienide, and recoordination of the opposite face. Our observations in some cases of thermodynamic racemic:meso ratios under the reaction conditions commonly used for the synthesis of the metallocene chlorides suggest that the interchange is faster than metallation, such that the composition of the reaction mixture is determined by thermodynamic, not kinetic, control in these cases.

Two new ansa-scandocene alkenyl compounds react with olefins resulting in the formation of η3-allyl complexes. Kinetics and labeling experiments indicate a tuck-in intermediate on the reaction pathway; in this intermediate the metal is bound to the carbon adjacent to the silyl linker in the rear of the metallocene wedge. In contrast, reaction of permethylscandocene alkenyl compounds with olefins results, almost exclusively, in vinylic C-H bond activation. It is proposed that relieving transition-state steric interactions between the cyclopentadienyl rings and the olefin, by either linking the rings together or using a larger lanthanide metal, may allow for olefin coordination, stabilizing the transition state for allylic σ-bond metathesis.

A selectively isotopically labeled propylene, CH2CD(13CH3), was synthesized and its polymerization was carried out at low concentration in toluene solution using isospecific metallocene catalysts. Analysis of the NMR spectra (13C, 1H, and 2H) of the resultant polymers revealed that the production of stereoerrors through chain epimerization proceeds exclusively by the tertiary alkyl mechanism. Additionally, enantiofacial inversion of the terminally unsaturated polymer chain occurs by a non-dissociative process. The implications of these results for the mechanism of olefin polymerization with these catalysts are discussed.

Relevance: 30.00%

Publisher:

Abstract:

This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the shortcomings of the standard model to explain the generation of baryon asymmetry in the universe, and from the absence of experimental evidence for proton decay. However, lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of those models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on investigating models containing new scalars responsible for baryon number breaking. A model with new color sextet scalars is analyzed in more detail. Apart from generating cosmological baryon number, it gives nontrivial predictions for the neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon number violating nonrenormalizable dimension-five operators triggering proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis is completed by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.

Relevance: 30.00%

Publisher:

Abstract:

The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.

The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ, which are bounded by existing experimental limits and relevant to future experiments.

The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.

Relevance: 30.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions (e.g., that emissions are reduced proportionately at all points in space and time). For NO2 (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles (55 in 1969) can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons/day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year of ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively (at the 1969 NOx emission level).
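
The proportional-rollback assumption behind these statistical models can be sketched directly (the daily concentrations and standard below are hypothetical illustration values, not the Los Angeles monitoring data):

```python
def violations_after_rollback(daily_conc, standard, e_new, e_old):
    """Proportional-rollback assumption used by the statistical models
    above: daily pollutant concentrations are taken to scale linearly
    with the emission level, and a violation is a day over the standard."""
    scale = e_new / e_old
    return sum(1 for c in daily_conc if c * scale > standard)

# Hypothetical daily concentrations observed at the base emission level
base_days = [0.30, 0.28, 0.24, 0.31, 0.18, 0.26, 0.22, 0.35]
v_base = violations_after_rollback(base_days, 0.25, e_new=1000, e_old=1000)
v_cut = violations_after_rollback(base_days, 0.25, e_new=550, e_old=1000)
# Cutting emissions to 55% of the base level scales every day's
# concentration by 0.55, eliminating all violations in this toy series.
```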

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 30.00%

Publisher:

Abstract:

The δD values of nitrated cellulose from a variety of trees covering a wide geographic range have been measured. These measurements have been used to ascertain which factors are likely to cause δD variations in cellulose C-H hydrogen.

It is found that a primary source of tree δD variation is the δD variation of the environmental precipitation. Superimposed on this are isotopic variations caused by the transpiration of the leaf water incorporated by the tree. The magnitude of this transpiration effect appears to be related to relative humidity.

Within a single tree, it is found that the hydrogen isotope variations which occur for a ring sequence in one radial direction may not be exactly the same as those which occur in a different direction. Such heterogeneities appear most likely to occur in trees with asymmetric ring patterns that contain reaction wood. In the absence of reaction wood such heterogeneities do not seem to occur. Thus, hydrogen isotope analyses of tree ring sequences should be performed on trees which do not contain reaction wood.

Comparisons of tree δD variations with variations in local climate are performed on two levels: spatial and temporal. It is found that the δD values of 20 North American trees from a wide geographic range are reasonably well-correlated with the corresponding average annual temperature. The correlation is similar to that observed for a comparison of the δD values of annual precipitation of 11 North American sites with annual temperature. However, it appears that this correlation is significantly disrupted by trees which grew on poorly drained sites such as those in stagnant marshes. Therefore, site selection may be important in choosing trees for climatic interpretation of δD values, although proper sites do not seem to be uncommon.

The measurement of δD values in 5-year samples from the tree ring sequences of 13 trees from 11 North American sites reveals a variety of relationships with local climate. As it was for the spatial δD vs climate comparison, site selection is also apparently important for temporal tree δD vs climate comparisons. Again, it seems that poorly-drained sites are to be avoided. For nine trees from different "well-behaved" sites, it was found that the local climatic variable best related to the δD variations was not the same for all sites.

Two of these trees showed a strong negative correlation with the amount of local summer precipitation. Consideration of factors likely to influence the isotopic composition of summer rain suggests that rainfall intensity may be important: the higher the intensity, the lower the δD value. Such an effect might explain the negative correlation of δD with summer precipitation amount for these two trees. A third tree also exhibited a strong correlation with summer climate, but in this instance it was a positive correlation of δD with summer temperature.

The remaining six trees exhibited the best correlation between δD values and local annual climate. However, in none of these six cases was annual temperature the most important variable; in fact, annual temperature commonly showed no relationship at all with tree δD values. Instead, it was found that a simple mass balance model incorporating two basic assumptions yielded parameters that produced the best relationships with tree δD values. First, it was assumed that the δD values of these six trees reflected the δD values of the annual precipitation incorporated by these trees. Second, it was assumed that the δD value of the annual precipitation was a weighted average of two seasonal isotopic components: summer and winter. Mass balance equations derived from these assumptions yielded combinations of variables that commonly showed a relationship with tree δD values where none had previously been discerned.
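The two-component mass balance described above can be sketched as a short calculation. The seasonal δD values and the summer precipitation fraction below are hypothetical illustrative numbers, not measurements from the thesis:

```python
# Sketch of the two-component seasonal mass balance for annual precipitation δD.
# Assumption: annual δD is a weighted average of a summer and a winter
# component, weighted by the fraction of annual precipitation in each season.
# All numeric values are hypothetical, for illustration only.

def annual_dD(dD_summer, dD_winter, summer_fraction):
    """Weighted-average δD of annual precipitation (per mil)."""
    if not 0.0 <= summer_fraction <= 1.0:
        raise ValueError("summer_fraction must lie in [0, 1]")
    return summer_fraction * dD_summer + (1.0 - summer_fraction) * dD_winter

# Example: isotopically heavier summer rain, lighter winter precipitation.
dD = annual_dD(dD_summer=-30.0, dD_winter=-120.0, summer_fraction=0.6)
print(round(dD, 1))  # -66.0
```

Shifts in the seasonal weighting alone can therefore move the annual δD even when neither seasonal component changes, which is why combinations of seasonal variables can correlate with tree δD where annual temperature does not.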

It was found for these "well-behaved" trees that not all sample intervals in a δD vs. local climate plot fell along a well-defined trend. These departures from the local δD vs. climate norm were defined as "anomalous". Some of these anomalous intervals were common to trees from different locales. When such widespread commonality of an anomalous interval occurred, it was observed that the interval corresponded to a period in which drought had existed in the North American Great Plains.

Consequently, there appears to be a combination of both local and large-scale climatic information in the δD variations of tree cellulose C-H hydrogen.

Relevância:

30.00%

Publicador:

Resumo:

Understanding friction and adhesion in static and sliding contact of surfaces is important in numerous physical phenomena and technological applications. Most surfaces are rough at the microscale, and thus the real area of contact is only a fraction of the nominal area. The macroscopic frictional and adhesive response is determined by the collective behavior of the population of evolving and interacting microscopic contacts. This collective behavior can be very different from the behavior of individual contacts, so it is important to understand how the macroscopic response emerges from the microscopic one.

In this thesis, we develop a theoretical and computational framework to study this collective behavior. Our philosophy is to assume a simple behavior of a single asperity and study the collective response of an ensemble. Our work bridges the existing well-developed studies of single asperities with the phenomenological laws that describe the macroscopic rate-and-state behavior of frictional interfaces. We find that many aspects of the macroscopic behavior are robust with respect to the microscopic response, which explains why qualitatively similar frictional features are seen for a diverse range of materials.

We first show that the collective response of an ensemble of one-dimensional independent viscoelastic elements interacting through a mean field reproduces many qualitative features of the evolution of static and sliding friction. The resulting macroscopic behavior is different from the microscopic one: for example, even if each contact is velocity-strengthening, the macroscopic behavior can be velocity-weakening. The framework is then extended to incorporate three-dimensional rough surfaces, long-range elastic interactions between contacts, and time-dependent material behaviors such as viscoelasticity and viscoplasticity. Interestingly, the mean-field behavior dominates: the elastic interactions, though important from a quantitative perspective, do not change the qualitative macroscopic response. Finally, we examine the effect of adhesion on the frictional response and develop a force threshold model for adhesion and mode I interfacial cracks.
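The idea of an ensemble of simple micro-contacts whose average gives the macroscopic friction force can be illustrated with a toy model. The sketch below is not the thesis's actual model (it uses purely elastic contacts with random strength thresholds rather than viscoelastic elements, and all parameters are assumed for illustration); it only shows how a steady ensemble-average force can emerge from contacts that individually keep breaking and reforming:

```python
# Toy sketch (NOT the thesis's model): an ensemble of independent elastic
# micro-contacts driven at a common sliding velocity. Each contact stretches,
# detaches when its force exceeds a random strength threshold, and reattaches
# unloaded. The macroscopic friction force is the ensemble average.
import random

def simulate_mean_force(n_contacts, velocity, dt, n_steps, k=1.0, seed=0):
    rng = random.Random(seed)
    stretch = [0.0] * n_contacts
    threshold = [rng.uniform(0.5, 1.5) for _ in range(n_contacts)]
    history = []
    for _ in range(n_steps):
        for i in range(n_contacts):
            stretch[i] += velocity * dt          # contact loads with sliding
            if k * stretch[i] > threshold[i]:    # contact breaks...
                stretch[i] = 0.0                 # ...and reattaches unloaded
                threshold[i] = rng.uniform(0.5, 1.5)
        history.append(k * sum(stretch) / n_contacts)  # ensemble-average force
    return history

forces = simulate_mean_force(n_contacts=200, velocity=1.0, dt=0.01, n_steps=500)
# After an initial loading transient, the ensemble average settles near a
# steady value even though individual contacts break and reform continually.
```

Capturing the velocity-weakening effect described in the abstract would require time-dependent (viscoelastic) contact behavior, which this minimal sketch deliberately omits.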

Relevância:

30.00%

Publicador:

Resumo:

Understanding the molecular inputs necessary for cell behavior is vital to our understanding of development and disease. Proper cell behavior is necessary for processes ranging from shaping one's face (neural crest migration) to spreading cancer from one tissue to another (invasive metastatic cancers). Identifying the genes and tissues involved in cell behavior not only increases our understanding of biology but also has the potential to enable targeted therapies for diseases hallmarked by aberrant cell behavior.

A well-characterized model system is key to determining the molecular and spatial inputs necessary for cell behavior. In this work I present the C. elegans uterine seam cell (utse) as an ideal model for studying cell outgrowth and shape change. The utse is an H-shaped cell within the hermaphrodite uterus that functions in attaching the uterus to the body wall. Over the L4 larval stage, the utse grows bidirectionally along the anterior-posterior axis, changing from an ellipsoidal shape to an elongated H-shape. Spatially, the utse requires the presence of the uterine toroid cells, sex muscles, and the anchor cell nucleus in order to properly grow outward. Several gene families are involved in utse development, including Trio, Nav, Rab GTPases, and Arp2/3, as well as 54 other genes found in a candidate RNAi screen. The utse can also be used as a model system for studying metastatic cancer. Meprin proteases are involved in promoting the invasiveness of metastatic cancers, and the meprin-like genes nas-21, nas-22, and toh-1 act similarly within the utse. Studying nas-21 activity has also led to the discovery of novel upstream inhibitors and activators, as well as targets of nas-21, some of which have been characterized to affect meprin activity. This illustrates that the utse can be used as an in vivo model for learning more about meprins, as well as various other proteins involved in metastasis.