12 results for Aggregate ichthyofauna in CaltechTHESIS
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under decelerations of ~10^5 g. The technique includes bar-coded projectiles, sabot-projectile separation, and detection and recording systems. Because the technique yields very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of penetration experiments, in which 4140 steel projectiles struck G-mixture mortar targets, was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under ~10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, the damage of recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compaction in the region in front of the projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. The final penetration depth, P_max, scales linearly with the initial projectile energy per unit cross-sectional area, e_s, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is P_max(mm) = 1.15 e_s (J/mm^2) + 16.39; this and the other empirical fits below are evaluated numerically in the sketch following this list.
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target material under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction, material strength, and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, t_max, increases slowly with e_s and is approximately independent of projectile radius, the dependence of t_max on projectile length is suggested to be described by t_max(μs) = 2.08 e_s (J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage, in which the projectile velocity change is small due to the very small contact area between the projectile and target materials; (2) a steady penetration stage, in which the projectile velocity continues to decrease smoothly; (3) a penetration stop stage, in which the projectile deceleration jumps up when velocities approach a critical value of ~35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be very comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/P_max, and normalized penetration time, t/t_max, as P/P_max = f(t/t_max), where f is a function of t/t_max. After f(t/t_max) is determined using experimental data for projectiles of 150 mm length, the penetration depth-time history predicted by this relation for projectiles of 100 mm length is in good agreement with experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, a penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on the target material is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σ_f/σ_f0 = exp[0.0905(log(ε̇/ε̇_0))^1.14] over the strain-rate range of 10^-7/s to 10^3/s (σ_f0 and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
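The empirical fits in items 2, 4, and 6 can be evaluated end to end with a few lines of code. The sketch below is illustrative only: the constants are the fitted values quoted above, the example projectile is hypothetical, and e_s is computed as kinetic energy per unit cross-sectional area.

```python
import math

# Empirical fits quoted above (mortar targets, 4140 steel projectiles).

def p_max_mm(e_s):
    """Final penetration depth (mm) vs. energy per unit area e_s (J/mm^2)."""
    return 1.15 * e_s + 16.39

def t_max_us(e_s, m_g, r_mm):
    """Penetration duration (microseconds) for mass m_g (g), radius r_mm (mm)."""
    return 2.08 * e_s + 349.0 * m_g / (math.pi * r_mm**2)

def decel_g(v):
    """Average steady-stage deceleration (in g) vs. initial velocity v (m/s)."""
    return 192.4 * v + 1.89e4

# Example: a hypothetical 100 g, 10 mm radius projectile at 300 m/s.
m, r, v = 100.0, 10.0, 300.0
e_s = 0.5 * (m / 1000.0) * v**2 / (math.pi * r**2)   # J/mm^2
print(f"e_s ≈ {e_s:.2f} J/mm^2")
print(f"P_max ≈ {p_max_mm(e_s):.1f} mm")
print(f"t_max ≈ {t_max_us(e_s, m, r):.0f} µs")
print(f"a ≈ {decel_g(v):.3g} g")   # ~7.7e4 g, consistent with the ~10^5 g regime
```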
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. Wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with an amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], for shock wave pressures between 6 and 40 GPa and ρ_0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa. The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
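The quoted linear Hugoniot fit can be converted into shock pressure through the standard Rankine-Hugoniot momentum relation P = ρ_0 D u. A minimal sketch assuming only the fit and initial density quoted above; the particle velocities are arbitrary illustration points:

```python
RHO0 = 3.655e3          # initial density of vitreous GeO2, kg/m^3 (3.655 g/cm^3)

def shock_velocity_km_s(u):
    """Linear Hugoniot fit quoted above: D = 0.917 + 1.711 u (both in km/s)."""
    return 0.917 + 1.711 * u

def hugoniot_pressure_gpa(u):
    """Shock pressure from momentum conservation, P = rho0 * D * u."""
    d = shock_velocity_km_s(u)
    return RHO0 * (d * 1e3) * (u * 1e3) / 1e9   # Pa -> GPa

for u in (0.5, 1.0, 1.5, 2.0):   # particle velocities in km/s
    print(f"u = {u} km/s -> D = {shock_velocity_km_s(u):.2f} km/s, "
          f"P ≈ {hugoniot_pressure_gpa(u):.1f} GPa")
```

For example, u = 1.0 km/s gives D ≈ 2.63 km/s and P ≈ 9.6 GPa, within the 6-40 GPa range over which the fit is stated to hold.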
Abstract:
Microbial sulfur-cycling communities were investigated in two methane-rich ecosystems, terrestrial mud volcanoes (TMVs) and marine methane seeps, in order to characterize niches and processes likely to be central to the functioning of these crucial ecosystems. Terrestrial mud volcanoes represent geochemically diverse habitats with varying sulfur sources, and yet sulfur cycling in these environments remains largely unexplored. Here we characterized the sulfur-metabolizing microorganisms and their activity in 4 TMVs in Azerbaijan, demonstrating the presence of active sulfur-oxidizing and sulfate-reducing guilds in all 4 TMVs across a range of physicochemical conditions, with the diversity of these guilds being unique to each TMV. We also found evidence for the anaerobic oxidation of methane coupled to sulfate reduction, a process which we explored further in the more tractable marine methane seeps. Diverse associations between methanotrophic archaea (ANME) and sulfate-reducing bacterial groups (SRB) often co-occur in marine methane seeps; however, the ecophysiology of these different symbiotic associations has not been examined. Using a combination of molecular, geochemical, and fluorescence in situ hybridization coupled to nano-scale secondary ion mass spectrometry (FISH-NanoSIMS) analyses of in situ seep sediments and methane-amended sediment incubations from diverse locations, we show that the unexplained diversity in SRB associated with ANME cells can be at least partially explained by preferential nitrate utilization by one particular partner, the seepDBB. This discovery reveals that nitrate is likely an important factor in community structuring and diversity in marine methane seep ecosystems. The thesis concludes with a study of the dynamics between ANME and their associated SRB partners. We inhibited sulfate reduction and followed the metabolic processes of the community, as well as the effect on ANME/SRB aggregate composition and growth at the cellular level, by tracking 15N-substrate incorporation into biomass using FISH-NanoSIMS. We revealed that while sulfate-reducing bacteria gradually disappeared over time in incubations with an SRB inhibitor, the ANME archaea persisted in the form of ANME-only aggregates, which are capable of little to no growth when sulfate reduction is inhibited. These data suggest ANME are not able to synthesize new proteins when sulfate reduction is inhibited.
Abstract:
A unique chloroplast Signal Recognition Particle (SRP) in green plants is primarily dedicated to the post-translational targeting of light-harvesting chlorophyll-a/b binding (LHC) proteins. Our study of the thermodynamics and kinetics of the GTPases of the system demonstrates that GTPase complex assembly and activation are highly coupled in the chloroplast GTPases, suggesting that they may forego the GTPase activation step as a key regulatory point. This reflects adaptations of the chloroplast SRP to the delivery of its unique substrate protein. Devotion to one highly hydrophobic family of proteins may also have allowed the chloroplast SRP system to evolve an efficient chaperone in the cpSRP43 subunit. To understand the mechanism of disaggregation, we showed that LHC proteins form micellar, disc-shaped aggregates that present a recognition motif (L18) on the aggregate surface. Further molecular genetic and structure-activity analyses reveal that the action of cpSRP43 can be dissected into two steps: (i) initial recognition of L18 on the aggregate surface; and (ii) aggregate remodeling, during which highly adaptable binding interactions of cpSRP43 with hydrophobic transmembrane domains of the substrate protein compete with the packing interactions within the aggregate. We also tested the adaptability of cpSRP43 for alternative substrates, specifically in attempts to improve membrane protein expression and to inhibit amyloid beta fibrillization. These preliminary results attest to cpSRP43's potential as a molecular chaperone and provide the impetus for further engineering endeavors to address problems that stem from protein aggregation.
Abstract:
For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.
To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.
I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
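For readers unfamiliar with ecological inference, the deterministic core of the generalized-bounds method is the classic Duncan-Davis method of bounds: county-level margins alone constrain each group's unobserved rate to an interval. A minimal sketch with hypothetical county numbers, not data from this study:

```python
# Duncan-Davis method of bounds: if T = X * b_latino + (1 - X) * b_other,
# with both group rates in [0, 1], then observing only the overall rate T
# and the group share X already bounds the group's rate b_latino.

def turnout_bounds(overall_rate, group_share):
    """Bounds on a group's turnout given the county's overall turnout rate
    and the group's share of the voting-age population."""
    other_share = 1.0 - group_share
    lower = max(0.0, (overall_rate - other_share) / group_share)
    upper = min(1.0, overall_rate / group_share)
    return lower, upper

# Hypothetical heavily Latino county: 55% overall turnout,
# Latinos 80% of the voting-age population.
lo, hi = turnout_bounds(0.55, 0.80)
print(f"Latino turnout must lie between {lo:.0%} and {hi:.0%}")  # 44% to 69%
```

Ecological inference methods such as generalized bounds then combine these county-by-county intervals with statistical assumptions to produce point estimates.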
I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.
With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.
Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.
Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.
While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.
Abstract:
A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.
A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high-molecular-weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.
The effectiveness of the high-molecular-weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular-weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low-molecular-weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.
Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular-weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.
The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the limits of common treatment practice, nor the time of mixing for up to four hours appeared to play any significant role in either the size or the number of E. coli aggregates formed. The extent of flocculation was highly molecular weight dependent: the high-molecular-weight PEI species produce the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.
Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with the turbidity measurements, a coalesced-sphere model was used, since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced-sphere model was not applicable to electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
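For context, the standard closed form for the extinction efficiency of a sphere in the anomalous diffraction regime is the van de Hulst approximation. The sketch below uses illustrative parameter values, not necessarily those used in this study:

```python
import math

def q_ext_anomalous(radius_um, wavelength_um, m_rel):
    """van de Hulst anomalous-diffraction extinction efficiency for a sphere.

    radius_um: sphere radius (micrometres)
    wavelength_um: wavelength in the suspending medium (micrometres)
    m_rel: relative refractive index (close to 1 in this regime)
    """
    x = 2.0 * math.pi * radius_um / wavelength_um   # size parameter
    rho = 2.0 * x * (m_rel - 1.0)                   # phase-shift parameter
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho**2) * (1.0 - math.cos(rho))

# Illustrative values: an equivalent coalesced sphere of 2 um radius,
# 0.5 um light, relative index ~1.05 (typical for cells in water).
print(f"Q_ext ≈ {q_ext_anomalous(2.0, 0.5, 1.05):.2f}")
```

Turbidity is then proportional to the sum of Q_ext times the cross-sectional area over all aggregates, which is how a measured size distribution can be converted into a predicted turbidity change.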
PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI : 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied in water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.
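To make the EC2 idea concrete, the sketch below shows one greedy step in a simplified, EC2-style setup: theories are grouped into equivalence classes, edges connect theories in different classes with weight given by the product of their probabilities, and the chosen test maximizes the expected edge weight cut. All names and numbers are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
H, T, O = 6, 10, 2                      # theories, candidate tests, outcomes
# likelihood[h, t, o]: probability theory h predicts outcome o on test t
likelihood = rng.dirichlet(np.ones(O), size=(H, T))
cls = np.array([0, 0, 1, 1, 2, 2])      # equivalence class of each theory
prior = np.full(H, 1.0 / H)

def edge_weight(p):
    """Total weight of edges between theories in different classes."""
    w = 0.0
    for h in range(H):
        for g in range(h + 1, H):
            if cls[h] != cls[g]:
                w += p[h] * p[g]
    return w

def expected_weight_after(t, p):
    """Expected remaining edge weight after observing the outcome of test t."""
    total = 0.0
    for o in range(O):
        p_o = float(np.dot(p, likelihood[:, t, o]))    # predictive prob of o
        if p_o > 0:
            post = p * likelihood[:, t, o] / p_o       # Bayes update
            total += p_o * edge_weight(post)
    return total

# Greedy step: pick the test with the largest expected weight reduction.
scores = [edge_weight(prior) - expected_weight_after(t, prior) for t in range(T)]
best = int(np.argmax(scores))
print(f"most informative test: {best}, expected edge weight cut: {scores[best]:.4f}")
```

Run adaptively, the observed outcome updates the posterior and the step repeats; adaptive submodularity of the exact EC2 objective is what makes this greedy loop provably near-optimal.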
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and also because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
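One well-known mechanism in this spirit, shown purely as an illustration rather than as the proof given in the thesis: if an agent discounts exponentially at a random (subjective) rate r drawn from an exponential distribution with mean k, the expected discount factor is exactly hyperbolic, E[e^(-rt)] = 1/(1 + kt). A quick Monte Carlo check:

```python
import numpy as np

# Randomness in the subjective rate turns exponential discounting into the
# hyperbolic shape (and hence into temporal choice inconsistency).
rng = np.random.default_rng(1)
k = 0.5
rates = rng.exponential(scale=k, size=1_000_000)   # subjective rates, mean k

for t in (1.0, 5.0, 20.0):
    monte_carlo = np.exp(-rates * t).mean()
    hyperbolic = 1.0 / (1.0 + k * t)
    print(f"t = {t:4.1f}: E[e^(-rt)] = {monte_carlo:.4f}, 1/(1+kt) = {hyperbolic:.4f}")
```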
We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads, such as air conditioners, dishwashers, and electric vehicles, will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and can thus be used, in aggregate, to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature, i.e., at every time step, the algorithm minimizes the expected variance-to-go with updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands, in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
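A minimal receding-horizon sketch of this idea in Python. It uses a single centralized deferrable load and water-filling against a noisy base-load prediction; the thesis's algorithm is distributed and more general, so everything below is an illustrative assumption:

```python
import numpy as np

def flatten_schedule(base_pred, energy_left):
    """Water-filling: consume more in low-base-load slots, none above the level."""
    lo, hi = 0.0, base_pred.max() + energy_left
    for _ in range(60):                       # bisect on the fill level
        level = 0.5 * (lo + hi)
        used = np.clip(level - base_pred, 0.0, None).sum()
        lo, hi = (level, hi) if used < energy_left else (lo, level)
    return np.clip(level - base_pred, 0.0, None)

rng = np.random.default_rng(2)
horizon, energy = 24, 30.0                    # hours, total deferrable energy
true_base = 5.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, horizon))

schedule = np.zeros(horizon)
for t in range(horizon):
    # prediction of the remaining base load degrades with lead time
    pred = true_base[t:] + rng.normal(0.0, 0.2, horizon - t) * np.arange(horizon - t)
    pred[0] = true_base[t]                    # current step is observed exactly
    plan = flatten_schedule(pred, energy)     # re-solve with updated predictions
    schedule[t] = plan[0]                     # commit only the first step
    energy -= plan[0]

agg = true_base + schedule
print(f"variance without control: {true_base.var():.3f}")
print(f"variance with MPC control: {agg.var():.3f}")
```

Re-solving at every step and committing only the first decision is exactly the model predictive structure described above; the proofs concern how little this loses relative to the clairvoyant optimum as the horizon grows.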
Abstract:
This work presents the development and investigation of a new type of concrete for the attenuation of waves induced by dynamic excitation. Recent progress in the field of metamaterials science has led to a range of novel composites which display unusual properties when interacting with electromagnetic, acoustic, and elastic waves. A new structural metamaterial with enhanced properties for dynamic loading applications is presented, named metaconcrete. In this new composite material the standard stone and gravel aggregates of regular concrete are replaced with spherical engineered inclusions. Each metaconcrete aggregate has a layered structure, consisting of a heavy core and a thin compliant outer coating. This structure allows for resonance at or near the eigenfrequencies of the inclusions, and the aggregates can be tuned so that resonant oscillations will be activated by particular frequencies of an applied dynamic loading. The activation of resonance within the aggregates causes the overall system to exhibit negative effective mass, which leads to attenuation of the applied wave motion. To investigate the behavior of metaconcrete slabs under a variety of different loading conditions, a finite element slab model containing a periodic array of aggregates is utilized. The frequency-dependent nature of metaconcrete is investigated by considering the transmission of wave energy through a slab, which indicates the presence of large attenuation bands near the resonant frequencies of the aggregates. Applying a blast wave loading to both an elastic slab and a slab model that incorporates the fracture characteristics of the mortar matrix reveals that a significant portion of the supplied energy can be absorbed by aggregates which are activated by the chosen blast wave profile. The transfer of energy from the mortar matrix to the metaconcrete aggregates leads to a significant reduction in the maximum longitudinal stress, greatly improving the ability of the material to resist damage induced by a propagating shock wave. The various analyses presented in this work provide the theoretical and numerical background necessary for the informed design and development of metaconcrete aggregates for dynamic loading applications, such as blast shielding, impact protection, and seismic mitigation.
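As a back-of-envelope illustration of the tuning idea (all values hypothetical, not from this work), each coated inclusion can be idealized as a single-degree-of-freedom resonator: a heavy core of mass m on a compliant coating of effective stiffness k, with natural frequency f = (1/2π)√(k/m):

```python
import math

def resonant_frequency_hz(k_n_per_m, m_kg):
    """Natural frequency of a mass-spring resonator, f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

# Hypothetical steel core of ~13 mm radius.
rho_steel = 7800.0                        # kg/m^3
r = 0.013                                 # m
m = rho_steel * (4.0 / 3.0) * math.pi * r**3

for k in (1e6, 5e6, 2e7):                 # candidate coating stiffnesses, N/m
    print(f"k = {k:.0e} N/m -> f ≈ {resonant_frequency_hz(k, m):.0f} Hz")
```

Softening the coating or enlarging the core lowers f, which is how the aggregates can be placed near the frequency content of a particular blast or impact loading.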
Abstract:
Huntington’s disease (HD) is a fatal autosomal dominant neurodegenerative disease. HD has no cure, and patients pass away 10-20 years after the onset of symptoms. The causal mutation for HD is a trinucleotide repeat expansion in exon 1 of the huntingtin gene that leads to a polyglutamine (polyQ) repeat expansion in the N-terminal region of the huntingtin protein. Interestingly, there is a threshold of 37 polyQ repeats, below which little or no disease exists and above which patients invariably show symptoms of HD. The huntingtin protein is a 350 kDa protein of unclear function. As the polyQ stretch expands, the protein's propensity to aggregate increases with polyQ length. Models for polyQ toxicity include the formation of aggregates that recruit and sequester essential cellular proteins, or altered function producing improper interactions between mutant huntingtin and other proteins. In both models, soluble expanded polyQ may be an intermediate state that can be targeted by potential therapeutics.
In the first study described herein, the conformation of soluble, expanded polyQ was determined to be linear and extended using equilibrium gel filtration and small-angle X-ray scattering. While attempts to purify and crystallize domains of the huntingtin protein were unsuccessful, the aggregation of huntingtin exon 1 was investigated using other biochemical techniques including dynamic light scattering, turbidity analysis, Congo red staining, and thioflavin T fluorescence. Chapter 4 describes crystallization experiments sent to the International Space Station and determination of the X-ray crystal structure of the anti-polyQ Fab MW1. In the final study, multimeric fibronectin type III (FN3) domain proteins were engineered to bind with high avidity to expanded polyQ tracts in mutant huntingtin exon 1. Surface plasmon resonance was used to observe binding of monomeric and multimeric FN3 proteins with huntingtin.
Abstract:
Part I. The cellular slime mold Dictyostelium discoideum is a simple eukaryote which undergoes a multicellular developmental process. Single-cell myxamoebae divide vegetatively in the presence of a food source. When the food is depleted or removed, the cells aggregate, forming a migrating pseudoplasmodium which differentiates into a fruiting body containing stalk and spore cells. I have shown that during the developmental cycle glycogen phosphorylase, aminopeptidase, and alanine transaminase are developmentally regulated; that is, their specific activities increase at a specific time in the developmental cycle. Phosphorylase activity is undetectable in developing cells until mid-aggregation, whereupon it increases and reaches a maximum at mid-culmination. Thereafter the enzyme disappears. Actinomycin D and cycloheximide studies, as well as studies with morphologically aberrant and temporally deranged mutants, indicate that prior RNA synthesis and concomitant protein synthesis are necessary for the rise and decrease in activity, and support the view that the appearance of the enzyme is regulated at the transcriptional level. Aminopeptidase and alanine transaminase increase 3-fold starting at starvation and reach maximum activity at 18 and 5 hours, respectively.
The cellular DNAs of D. discoideum were characterized by CsCl buoyant density gradient centrifugation and by renaturation kinetics. Whole-cell DNA exhibits three bands in CsCl: ρ = 1.676 g/cc (nuclear main band), 1.687 (nuclear satellite), and 1.682 (mitochondrial). Reassociation kinetics at a criterion of Tm − 23°C indicate that the nuclear reiterated sequences make up 30% of the genome (Cot_1/2(pure) = 0.28) and the single-copy DNA 70% (Cot_1/2(pure) = 70). The complexity of the nuclear genome is 30 × 10^9 daltons and that of the mitochondrial DNA is 35-40 × 10^6 daltons (Cot_1/2 = 0.15). rRNA cistrons constitute 2.2% of the nuclear DNA and have ρ = 1.682.
RNA extracted from 4 stages during the developmental cycle of Dictyostelium was hybridized with purified single-copy nuclear DNA. The hybrids had properties indicative of single-copy DNA-RNA hybrids. These studies indicate that there are, during development, qualitative and quantitative changes in the portion of the single-copy genome transcribed. Overall, 56% of the genome is represented by transcripts between the amoeba and mid-culmination stages. Some 19% are sequences which are represented at all stages, while 37% of the genome consists of stage-specific sequences.
Part II. RNA and protein synthesis and polysome formation were studied during early development of embryos of the surf clam Spisula solidissima. The oocyte has a small number of polysomes and a low but measurable rate of protein synthesis (leucine-3H incorporation). After fertilization, there is a continual increase in the percentage of ribosomes sedimenting in the polysome region. Newly synthesized RNA (uridine-5-3H incorporation) was found in polysomes as early as the 2-cell stage. During cleavage, the newly formed RNA is associated mainly with the light polysomes.
RNA extracted from polysomes labeled at the 4-cell stage is polydisperse, nonribosomal, and non-4S. Actinomycin D causes a reduction of about 30% in the polysomes formed between fertilization and the 16-cell stage. In the early cleavage stages, the light polysomes are the most affected by actinomycin.
Abstract:
The major nonhistone chromosomal proteins (NHC proteins) are a group of 14-20 acidic proteins associated with DNA in eukaryotic chromatin. In comparisons by SDS gel electrophoresis (molecular weight sieving), one observes a high degree of homology among the NHC protein fractions of different tissues from a given species. Tissue-specific protein bands are also observed. The appearance of a new NHC protein, A, in the NHC proteins of rat liver stimulated to divide by partial hepatectomy and of rat ascites cells suggests that this protein may play a role in preparing the cell for division. The NHC proteins of the same tissue from different species are also very similar. Quantitative, but not qualitative, changes in the NHC proteins of rat uterus are observed on stimulation (in vivo) with estrogen. These observations suggest that the major NHC proteins play a general role in chromatin structure and the regulation of genome expression; several may be enzymes of nucleic acid and histone metabolism and/or structural proteins analogous to histones. One such enzyme, a protease which readily and preferentially degrades histones, can be extracted from chromatin with 0.7 N NaCl.
Although the NHC proteins readily aggregate, they can be separated from histones and fractionated by ion-exchange chromatography on Sephadex SE C-25 resin in 10 M urea-25% formic acid (pH 2.5). Following further purification, four fractions of NHC protein are obtained; two of these are single purified proteins, and the other two contain 4-6 and 4-7 different proteins. These NHC proteins show ratios of acidic to basic amino acids from 2.7 to 1.2 and isoelectric points from apparently less than 3.7 to 8.0. These isolated fractions appear more soluble and easier to work with than any whole NHC protein preparation.