946 results for Body Center of Gravity.


Abstract:

Papaseit et al. (Proc. Natl. Acad. Sci. U.S.A. 97, 8364, 2000) showed the decisive role of gravity in the formation of patterns by assemblies of microtubules (MTs) in vitro. By virtue of a functional scaling, the free energy for MT systems in a gravitational field was constructed. The influence of the gravitational field on the MT self-organization process, which can lead to the isotropic-to-nematic phase transition, is the focus of this paper. A coupling of a concentration gradient with the orientational order characteristic of nematic pattern formation is the new feature emerging in the presence of gravity. The concentration range corresponding to the phase-coexistence region increases with increasing g or MT concentration. Gravity facilitates the isotropic-to-nematic phase transition, leading to a significantly broader transition region. The phase transition represents the interplay between growth in the isotropic phase and precipitation into the nematic phase. We also present and discuss numerical results for the change of local MT concentration with the height of the vessel, the order parameter, and phase-transition properties.
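The qualitative origin of the gravity-induced concentration gradient can be illustrated with the standard sedimentation-equilibrium (barometric) profile, which is only a textbook stand-in for the functional-scaling free energy constructed in the paper; the buoyant mass m_b of an MT unit is an assumed parameter:

$$ c(h) \;=\; c(0)\,\exp\!\left(-\frac{m_b\, g\, h}{k_B T}\right), $$

so increasing g steepens the concentration profile along the height h of the vessel and drives the lower part of the sample into the coexistence range first.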

Abstract:

This work proposes a new simulation methodology in which variable-density turbulent flows can be studied in the context of a mixing layer, with or without the presence of gravity. Specifically, this methodology is developed to probe the nature of non-buoyantly-driven (i.e. isotropically-driven) or buoyantly-driven mixing deep inside a mixing layer. Numerical forcing methods are incorporated into both the velocity and scalar fields, which extends the length of time over which mixing physics can be studied. The simulation framework is designed to allow for independent variation of four non-dimensional parameters: the Reynolds, Richardson, Atwood, and Schmidt numbers. Additionally, the governing equations are integrated in such a way as to allow the relative magnitudes of buoyant and non-buoyant energy production to be varied.
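For reference, the standard forms of these non-dimensional groups are given below; the precise velocity, length, and density scales adopted in the thesis may differ:

$$ \mathrm{Re} = \frac{u'\,\ell}{\nu}, \qquad \mathrm{At} = \frac{\rho_1 - \rho_2}{\rho_1 + \rho_2}, \qquad \mathrm{Sc} = \frac{\nu}{D}, $$

with a Richardson number characterizing the relative importance of buoyancy; in this study the closely related ratio of buoyant to total turbulent kinetic energy production is the quantity varied from zero to one.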

The computational requirements needed to implement the proposed configuration are presented. They are justified in terms of grid resolution, order of accuracy, and transport scheme. Canonical features of turbulent buoyant flows are reproduced as validation of the proposed methodology. These features include the recovery of isotropic Kolmogorov scales under buoyant and non-buoyant conditions, the recovery of anisotropic one-dimensional energy spectra under buoyant conditions, and the preservation of known statistical distributions in the scalar field, as found in other DNS studies.

This simulation methodology is used to perform a parametric study of turbulent buoyant flows to discern the effects of varying the Reynolds, Richardson, and Atwood numbers on the resulting state of mixing. The effects of the Reynolds and Atwood numbers are isolated by looking at two energy dissipation rate conditions under non-buoyant (variable density) and constant density conditions. The effects of the Richardson number are isolated by varying the ratio of buoyant energy production to total energy production from zero (non-buoyant) to one (entirely buoyant) under constant Atwood number, Schmidt number, and energy dissipation rate conditions. It is found that the major differences between non-buoyant and buoyant turbulent flows are contained in the transfer spectrum and the longitudinal structure functions, while all other metrics are largely similar (e.g. energy spectra, alignment characteristics of the strain-rate tensor). Also, despite the differences noted between fully buoyant and non-buoyant turbulent fields, the scalar field is, in all cases, unchanged by them. The mixing dynamics in the scalar field are found to be insensitive to the source of turbulent kinetic energy production (non-buoyant vs. buoyant).

Abstract:

The Book of John Mandeville, while ostensibly a pilgrimage guide documenting an English knight’s journey into the East, is an ideal text in which to study the developing concept of race in the European Middle Ages. The Mandeville-author’s sense of place and morality are inextricably linked to each other: Jerusalem is the center of his world, which necessarily forces Africa and Asia to occupy the spiritual periphery. Most inhabitants of Mandeville’s landscapes are not monsters in the physical sense, but at once startlingly human and irreconcilably alien in their customs. Their religious heresies, disordered sexual appetites, and monstrous acts of cannibalism mark them as a fallen state of the European Christian self. Mandeville’s monstrosities lie not in the fantastical, but in the disturbingly familiar, coupling recognizable humans with a miscarriage of natural law. In using real people to illustrate the moral degeneracy of the tropics, Mandeville’s ethnography helps shed light on the missing link between medieval monsters and modern race theory.

Abstract:

In this thesis I present a study of W pair production in e+e- annihilation using fully hadronic W+W- events. Data collected by the L3 detector at LEP in 1996-1998, at collision center-of-mass energies between 161 and 189 GeV, was used in my analysis.

Analysis of the total and differential W+W- cross sections with the resulting sample of 1,932 W+W- → qqqq event candidates allowed me to make precision measurements of a number of properties of the W boson. I combined my measurements with those using other W+W- final states to obtain stringent constraints on the W boson's couplings to fermions, to other gauge bosons, and to the scalar Higgs field, by measuring the total e+e- → W+W- cross section and its energy dependence

σ(e+e- → W+W-) =

  2.68 +0.98 -0.67 (stat.) ± 0.14 (syst.) pb at √s = 161.34 GeV
  12.04 +1.38 -1.29 (stat.) ± 0.23 (syst.) pb at √s = 172.13 GeV
  16.45 ± 0.67 (stat.) ± 0.26 (syst.) pb at √s = 182.68 GeV
  16.28 ± 0.38 (stat.) ± 0.26 (syst.) pb at √s = 188.64 GeV

the fraction of W bosons decaying into hadrons

BR(W → qq') = 68.72 ± 0.69 (stat.) ± 0.38 (syst.) %,

invisible non-SM width of the W boson

Γ_W^invisible less than MeV at 95% C.L.,

the mass of the W boson

M_W = 80.44 ± 0.08 (stat.) ± 0.06 (syst.) GeV,

the total width of the W boson

Γ_W = 2.18 ± 0.20 (stat.) ± 0.11 (syst.) GeV,

the anomalous triple gauge boson couplings of the W

Δg_1^Z = 0.16 +0.13 -0.20 (stat.) ± 0.11 (syst.)
Δκ_γ = 0.26 +0.24 -0.33 (stat.) ± 0.16 (syst.)
λ_γ = 0.18 +0.13 -0.20 (stat.) ± 0.11 (syst.)

No significant deviations from Standard Model predictions were found in any of the measurements.
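As a quick consistency illustration using only the numbers quoted above (not a calculation from the thesis), the total cross section at 188.64 GeV and the hadronic branching fraction imply a fully hadronic (qqqq) cross section of about σ_WW × BR² ≈ 7.7 pb, assuming the two W decays are independent:

```python
# Hedged back-of-envelope check using only the measurements quoted above.
# Assumes the two W decays are independent, so BR(WW -> qqqq) = BR(W -> qq')**2.
sigma_ww_pb = 16.28    # sigma(e+e- -> W+W-) at sqrt(s) = 188.64 GeV, in pb
br_w_to_qq = 0.6872    # BR(W -> qq') from the measurement above

sigma_qqqq_pb = sigma_ww_pb * br_w_to_qq ** 2
print(f"sigma(e+e- -> W+W- -> qqqq) ~ {sigma_qqqq_pb:.2f} pb")  # ~7.69 pb
```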

Abstract:

We study some aspects of conformal field theory, wormhole physics and two-dimensional random surfaces. In spite of being rather different, these topics serve as examples of the issues that are involved, both at high and low energy scales, in formulating a quantum theory of gravity. In conformal field theory we show that fusion and braiding properties can be used to determine the operator product coefficients of the non-diagonal Wess-Zumino-Witten models. In wormhole physics we show how Coleman's proposed probability distribution would result in wormholes determining the value of θQCD. We attempt such a calculation and find the most probable value of θQCD to be π. This hints at a potential conflict with nature. In random surfaces we explore the behaviour of conformal field theories coupled to gravity and calculate some partition functions and correlation functions. Our results throw some light on the transition that is believed to occur when the central charge of the matter theory gets larger than one.

Abstract:

This thesis describes a measurement of B0-B̄0 mixing in events produced by electron-positron annihilation at a center-of-mass energy of 29 GeV. The data were taken by the Mark II detector in the PEP storage ring at the Stanford Linear Accelerator Center between 1981 and 1987, and correspond to a total integrated luminosity of 224 pb^-1.

We used a new method, based on the kinematics of hadronic events containing two leptons, to measure the probability, χ, that a hadron initially containing a b (b̄) quark decays to a positive (negative) lepton. We find χ = 0.17 +0.15 -0.08, with 90% confidence level upper and lower limits of 0.38 and 0.06, respectively, including all estimated systematic errors. Because of the good separation of signal and background, this result is relatively insensitive to various systematic effects which have complicated previous measurements.

We interpret this result as evidence for the mixing of neutral B mesons. Based on existing B0d mixing rate measurements, and some assumptions about the fractions of B0d and B0s mesons present in the data, this result favors maximal mixing of B0s mesons, although it cannot rule out zero B0s mixing at the 90% confidence level.
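The inference about B0s mixing can be made concrete with the standard decomposition of the measured average mixing probability into the contributions of the two neutral B species, weighted by their production fractions f_d and f_s (a textbook relation, not necessarily the exact treatment used in the thesis):

$$ \chi \;=\; f_d\,\chi_d \;+\; f_s\,\chi_s , $$

so with χ_d fixed by existing B0d measurements and f_d, f_s assumed, a value of χ as large as 0.17 pushes χ_s toward its maximal value of 0.5.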

Abstract:

Terns and skimmers nesting on saltmarsh islands often suffer large nest losses due to tidal and storm flooding. Nests located near the center of an island and on wrack (mats of dead vegetation, mostly eelgrass Zostera) are less susceptible to flooding than those near the edge of an island and those on bare soil or in saltmarsh cordgrass (Spartina alterniflora). In the 1980s, Burger and Gochfeld constructed artificial eelgrass mats on saltmarsh islands in Ocean County, New Jersey. These mats were used as nesting substrate by common terns (Sterna hirundo) and black skimmers (Rynchops niger). Every year since 2002 I have transported eelgrass to one of their original sites to make artificial mats. This site, Pettit Island, typically supports between 125 and 200 pairs of common terns. There has often been very little natural wrack present on the island at the start of the breeding season, and in most years natural wrack has been most common along the edges of the island. The terns readily used the artificial mats as nesting substrate. Because I placed artificial mats in the center of the island, the terns have often avoided the large nest losses incurred by terns nesting in peripheral locations. However, during particularly severe flooding events even centrally located nests on mats are vulnerable. Construction of eelgrass mats represents an easy habitat manipulation that can improve the nesting success of marsh-nesting seabirds.

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers being added if necessary, for example in a basin with a low-velocity lid.

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.
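For orientation (this conversion is standard and not taken from the thesis text), the quoted long-period moment corresponds, via the Hanks-Kanamori moment-magnitude relation with M_0 in dyne cm, to

$$ M_w \;=\; \frac{2}{3}\log_{10} M_0 \;-\; 10.7 \;=\; \frac{2}{3}\times 26 \;-\; 10.7 \;\approx\; 6.6, $$

broadly consistent with the quoted M = 7 for the Lompoc earthquake.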

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1942 and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected value. An aftershock of the 1942 earthquake appears to be larger than previously thought.

Abstract:

The study of enzymatic activity is of great importance in the immunology of fungi. Indeed, knowledge of the biological activity of antigenic structures is important for the elucidation of host-parasite relations, as well as in the search for a taxonomic factor permitting differential diagnoses. The authors used Saprolegnia cultures to analyse the soluble antigenic fractions arising from the mycelium of cultures of the four species of Saprolegnia most frequently found in the parasitic state on fish: S. parasitica, S. ferax, S. delica and S. diclina. The authors conclude that, in the study of saprolegniasis, the enzymatic approach affords new elements for examining the etiology of the fungi, as well as an indication of the gravity of the biochemical modifications necessary for the change from saprophytism to parasitism.

Abstract:

Environmental changes may have an impact on the living conditions of fish, e.g. on their food supply. The prevailing environmental conditions apply evenly to all age groups of one stock. Small fish have high growth rates, whereas large fish grow at low rates. However, it can be shown on the basis of the von Bertalanffy growth model that it is sufficient to know the growth rate of only one single age group in order to compute the growth rates of all other age groups. The growth rate of a reference fish, GRF (e.g. a fish with a body mass of 1 kg), was introduced as a reference growth describing the current food conditions for all age groups of the stock. As an example, a time series of the reference growth rate of the northern cod stock (NAFO, 3K) was computed for the time span 1979 to 1999. For the northern cod stock it can be observed that environmental conditions caused growth rates below the long-term mean for seven years in a row. After this prolonged hunger period the stock collapsed in 1992, also under the impact of fisheries, and this was probably not a coincidence. With the reference growth rate GRF, a simple and handy parameter is now available that summarizes the influence of environmental conditions on growth and on other derived models, and therefore makes it easier to account for environmental changes within stock assessment.
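A minimal sketch of the scaling idea, under the assumption that growth follows the mass form of the von Bertalanffy model, dW/dt = H W^(2/3) - k W, with a fixed, known catabolic coefficient k and a food-dependent anabolic coefficient H; the parameter values below are illustrative and not taken from the paper:

```python
def growth_rate_from_reference(ref_rate_kg_per_yr, w_kg, w_ref_kg=1.0, k_per_yr=0.5):
    """Scale a measured reference growth rate (for a fish of mass w_ref_kg)
    to a fish of mass w_kg, assuming dW/dt = H*W**(2/3) - k*W with fixed k.
    All parameter values here are illustrative assumptions."""
    # Solve for the anabolic coefficient H from the reference measurement.
    h = (ref_rate_kg_per_yr + k_per_yr * w_ref_kg) / w_ref_kg ** (2.0 / 3.0)
    # Apply the same H and k to the target mass.
    return h * w_kg ** (2.0 / 3.0) - k_per_yr * w_kg

# Example: if the 1 kg reference fish grows 0.4 kg/yr, a 4 kg fish of the same
# stock under the same conditions would grow roughly 0.27 kg/yr.
print(growth_rate_from_reference(0.4, 4.0))
```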

Abstract:

An attempt is made to provide a theoretical explanation of the effect of the positive column on the voltage-current characteristic of a glow or an arc discharge. Such theories have been developed before, and all are based on balancing the production and loss of charged particles and accounting for the energy supplied to the plasma by the applied electric field. Differences among the theories arise from the approximations and omissions made in selecting processes that affect the particle and energy balances. This work is primarily concerned with the deviation from the ambipolar description of the positive column caused by space charge, electron-ion volume recombination, and temperature inhomogeneities.

The presentation is divided into three parts, the first of which involves the derivation of the final macroscopic equations from kinetic theory. The final equations are obtained by taking the first three moments of the Boltzmann equation for each of the three species in the plasma. Although the method used and the equations obtained are not novel, the derivation is carried out in detail in order to appraise the validity of numerous approximations and to justify the use of data from other sources. The equations are applied to a molecular hydrogen discharge contained between parallel walls. The applied electric field is parallel to the walls, and the dependent variables (electron and ion flux to the walls, electron and ion densities, transverse electric field, and gas temperature) vary only in the direction perpendicular to the walls. The mathematical description is given by a sixth-order nonlinear two-point boundary value problem which contains the applied field as a parameter. The amount of neutral gas and its temperature at the walls are held fixed, and the relation between the applied field and the electron density at the center of the discharge is obtained in the process of solving the problem. This relation corresponds to that between current and voltage and is used to interpret the effect of space charge, recombination, and temperature inhomogeneities on the voltage-current characteristic of the discharge.

The complete solution of the equations is impractical both numerically and analytically, and in Part II the gas temperature is assumed uniform so as to focus on the combined effects of space charge and recombination. The terms representing these effects are treated as perturbations to equations that would otherwise describe the ambipolar situation. However, the term representing space charge is not negligible in a thin boundary layer or sheath near the walls, and consequently the perturbation problem is singular. Separate solutions must be obtained in the sheath and in the main region of the discharge, and the relation between the electron density and the applied field is not determined until these solutions are matched.

In Part III the electron and ion densities are assumed equal, and the complicated space-charge calculation is thereby replaced by the ambipolar description. Recombination and temperature inhomogeneities are both important at high values of the electron density. However, the formulation of the problem permits a comparison of the relative effects, and temperature inhomogeneities are shown to be important at lower values of the electron density than recombination. The equations are solved by a direct numerical integration and by treating the term representing temperature inhomogeneities as a perturbation.
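As a toy illustration of the kind of two-point boundary value problem involved (a drastically simplified, non-dimensional stand-in, not the sixth-order system of the thesis), the sketch below solves an ambipolar-type balance with a quadratic recombination sink, n'' + a n - b n^2 = 0, with symmetry at the center of the discharge and zero density at the wall:

```python
# Minimal sketch of an ambipolar-type balance with volume recombination, posed as a
# two-point boundary value problem. Equation and coefficients are illustrative
# assumptions, not the sixth-order system described in the thesis.
import numpy as np
from scipy.integrate import solve_bvp

a, b = 12.0, 4.0  # net ionization and recombination coefficients (arbitrary units)

def rhs(x, y):
    # y[0] = n (electron density), y[1] = dn/dx
    return np.vstack([y[1], -a * y[0] + b * y[0] ** 2])

def bc(ya, yb):
    # dn/dx = 0 at the center (x = 0); n = 0 at the wall (x = 1).
    return np.array([ya[1], yb[0]])

x = np.linspace(0.0, 1.0, 101)
# Initial guess near the expected nontrivial solution (avoids the trivial n = 0 branch).
y_guess = np.vstack([2.5 * np.cos(np.pi * x / 2),
                     -2.5 * np.pi / 2 * np.sin(np.pi * x / 2)])
sol = solve_bvp(rhs, bc, x, y_guess)
print("converged:", sol.success, " center density:", sol.y[0, 0])
```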

The conclusions reached in the study are primarily concerned with the association of the relation between electron density and axial field with the voltage-current characteristic. It is known that the effect of space charge can account for the subnormal glow discharge and that the normal glow corresponds to a close approach to an ambipolar situation. The effect of temperature inhomogeneities helps explain the decreasing characteristic of the arc, and the effect of recombination is not expected to appear except at very high electron densities.

Abstract:

We measured the recoil proton polarization in the process γp → pη at the 1.5 GeV Caltech electron synchrotron, at photon energies from 0.8 to 1.1 GeV, and at center-of-mass production angles around 90°. A counter-spark chamber array was used to determine the kinematics of all particles in the final state of the partial mode γp → pη (η → 2γ). The protons' polarization was determined by measuring an asymmetry in scattering off carbon. Analysis of 280,000 pictures yielded 2400 useful scatters with a background which was 30% of the foreground. The polarization results show a sizeable opposite parity interference at 830 MeV, 950 MeV, and 1100 MeV.
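For context, the standard polarimetry relation behind this technique (a textbook relation, not a formula quoted from the thesis): with a carbon analyzer, the recoil proton polarization P follows from the measured left-right scattering asymmetry ε and the known carbon analyzing power A_C,

$$ \varepsilon \;=\; \frac{N_L - N_R}{N_L + N_R} \;=\; P\,A_C, \qquad\text{so}\qquad P = \varepsilon / A_C . $$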

Abstract:

Works devoted to the influence of starvation on temperature selection by fishes are few, and their conclusions are contradictory. This study determined the influence of brief starvation, up to 14 days, on temperature selection by young fishes. The experiments were carried out in August-September 1976 on fingerling bream (Abramis brama L.), roach (Rutilus rutilus L.) and perch (Perca fluviatilis L.) with body lengths of 3-5 cm and weights of 0.5-1.2 g. The young fish were caught in the littoral zone with seine-nets or small drag-nets and placed in acclimatization boxes immediately after capture. The period of acclimatization did not exceed 2 days for bream and roach at a temperature of 20 °C, and 6 days for perch at 17 °C. Before the start of the experiment and for its first 10 days the fish were fed with oligochaetes, earthworms and daphnia; feeding was then discontinued. At the end of a 10-14 day period feeding was resumed. The study concludes that the experiments have shown that, in the summer season, the factor of starvation significantly changes the reaction of young cyprinids (roach and bream) to a temperature gradient.

Abstract:

The span of the bridge was assumed to be 100 feet. The type of bridge used is the timber Howe truss. The height of the truss was taken as 20 feet between the center lines of the top and bottom chords, and the width as 18 feet center to center of trusses. The truss was divided into five panels, each 20 feet long.

The bridge was designed according to the "General Specifications for Steel Highway Bridges" by Ketchum. For the live load on the floor and its supports, a load of 80 pounds per square foot of total floor surface was used, or a 15-ton traction engine with axles at 10-foot centers and a 6-foot gage, two thirds of the load carried by the rear axle.

For the truss, a load of 75 pounds per square foot of floor surface was used.

For the wind load, the bottom lateral bracing is designed to resist a lateral load of 300 pounds per foot of span, 150 pounds of which is treated as a moving load.

The top lateral bracing is designed to resist a lateral wind force of 150 pounds per foot of span.

The timber used in the bridge is Douglas fir. The unit stresses used for timber are those of the American Railway Engineering Association.
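As a hedged back-of-envelope illustration (an assumption-laden check, not a calculation reproduced from the design): with the 75 psf truss live load, 20-foot panels, and the 18-foot deck width shared equally by the two trusses, each interior panel point of one truss carries roughly 75 x 20 x 9 = 13,500 pounds of live load. A minimal sketch:

```python
# Back-of-envelope panel-point live load for one truss. Illustrative assumptions:
# uniform 75 psf truss live load, full tributary width shared equally by two trusses.
live_load_psf = 75.0      # truss live load, pounds per square foot
panel_length_ft = 20.0    # panel length
deck_width_ft = 18.0      # center-to-center width of trusses
trusses = 2

panel_point_load_lb = live_load_psf * panel_length_ft * deck_width_ft / trusses
print(f"live load per interior panel point: {panel_point_load_lb:,.0f} lb")  # 13,500 lb
```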

Abstract:

The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures, such as therapeutic intervention, can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.

A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals can produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have uncovered that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.

The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?

To address the first question, we propose a mathematical framework to reason about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.

To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies and allows for the ability to quantifiably explore tradeoffs in treatment design, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations even when those populations contain resistance mutations.
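A minimal sketch of how single-antibody pharmacodynamics might be composed into a combination neutralization model. It assumes Hill-type dose-response curves and Bliss independence between antibodies; the IC50 values, Hill slopes, and the independence assumption are illustrative and not taken from the thesis:

```python
# Illustrative sketch only: Hill-type single-antibody dose-response curves combined
# under Bliss independence. IC50s, Hill slopes, and the independence assumption are
# hypothetical and not taken from the thesis.
import numpy as np

def fraction_unaffected(conc, ic50, hill=1.0):
    """Fraction of virus NOT neutralized by one antibody at concentration conc."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

def combination_neutralization(concs, ic50s, hills=None):
    """Combined neutralization assuming the unaffected fractions multiply (Bliss)."""
    hills = hills if hills is not None else [1.0] * len(concs)
    unaffected = np.prod([fraction_unaffected(c, i, h)
                          for c, i, h in zip(concs, ic50s, hills)])
    return 1.0 - unaffected

# Example: two antibodies at 10 ug/mL each against a variant that is resistant to the
# first (high IC50) but sensitive to the second (hypothetical IC50 values, ug/mL).
print(combination_neutralization(concs=[10.0, 10.0], ic50s=[500.0, 0.5]))  # ~0.95
```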