230 results for honors
Abstract:
Tropical Storm Lee produced 25-36 cm of rainfall in north-central Pennsylvania from September 4 through 8, 2011. Loyalsock Creek, Muncy Creek, and Fishing Creek experienced catastrophic flooding that resulted in new channel formation, bank erosion, scour of chutes, deposition and reworking of point bars and chute bars, and reactivation of the floodplain. This study investigates both the geomorphology and the sedimentology of the well-exposed gravel deposits left by the flood, before these features are removed by humans or obscured by vegetation. Recording the composition of gravel bars in the study area and constructing lithofacies models makes it possible to interpret the 2011 flooding. Surficial clasts on gravel bars are imbricated, but the lack of imbrication and the high matrix content of sediments at depth suggest that surface imbrication of the largest clasts took place during hyperconcentrated flow (40-70% sediment concentration). The imbricated clasts on the surface are the largest observed within the bars. The lithofacies recorded are atypical for mixed-load streams and more similar to glacial outburst flood lithofacies. This paper suggests that the accepted lithofacies model for mixed-load streams with gravel bedload may not always be useful for interpreting depositional systems. A flume study, which attempted to duplicate the stratigraphy recorded in the field, was run to better understand hyperconcentrated flows in the study area. Results from the Bucknell Geology Flume Laboratory indicate that surficial imbrication is possible under hyperconcentrated conditions. After the flume was flooded to entrain large amounts of sand and gravel, surficially imbricated gravel with massive or upward-coarsening internal structure was deposited. Imbrication was not observed at depth. These experimental flume deposits support our interpretation of the lithofacies discovered in the field. The sizes of surficial gravel bar clasts show clear differences between chute and point bars. On point bars, gravels fine with increasing distance from the channel; fining also occurs at the downstream end of point bars. In chute deposits, dramatic fining occurs down the axis of the chute, and lateral grain sizes are nearly uniform. Measuring the largest sandstone clasts at 8-11 km intervals on each river reveals anomalies in the downstream fining trends. Gravel inputs from bedrock outcrops, tributaries, and erosion of Pleistocene outwash terraces may explain the observed variations in grain size along streams either incised into the Appalachian Plateau or located near the Wisconsinan glacial boundary. Accelerator mass spectrometry (AMS) radiocarbon dating of sediment from recently scoured features on Muncy Creek and Loyalsock Creek returned ages of 500 BP and 2490 BP, respectively. These dates suggest that the recurrence interval of the 2011 flooding may be several hundred to several thousand years, much longer than the 120-year interval calculated by the USGS from historical stream gauge records.
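The contrast between the geomorphic recurrence estimate and the 120-year gauge-record figure can be illustrated with the standard plotting-position approach to flood frequency. The sketch below uses the Weibull plotting position T = (n + 1)/m on invented annual peak discharges; it is only a sketch of the general method, not the USGS computation (which relies on its own gauge records and fitting procedure).

```python
# Illustrative sketch only: estimating recurrence intervals from annual peak
# discharges with the Weibull plotting position, T = (n + 1) / m. The
# discharges below are invented; they are not the Loyalsock, Muncy, or
# Fishing Creek gauge records.

def recurrence_intervals(annual_peaks):
    """Return (discharge, recurrence interval in years), largest flood first."""
    ranked = sorted(annual_peaks, reverse=True)
    n = len(ranked)
    return [(q, (n + 1) / m) for m, q in enumerate(ranked, start=1)]

peaks = [310, 450, 290, 1210, 380, 520, 610, 275, 830, 940]  # m^3/s, hypothetical
for discharge, interval in recurrence_intervals(peaks):
    print(f"{discharge:6.0f} m^3/s  ->  ~{interval:.1f}-yr event")
```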
Abstract:
Soybean lipoxygenase-1 (SBLO-1) catalyzes the oxygenation of polyunsaturated fatty acids into conjugated diene hydroperoxides. The three-dimensional structure of SBLO-1 is known, but it is not certain how substrates bind. One hypothesis involves the transient separation of helix 2 and helix 11, located on the exterior of the molecule in front of the active-site iron. A second hypothesis involves a conformational change in the side chains of residues leucine 541 and threonine 259. To test these hypotheses, site-directed mutagenesis was used to create a cysteine mutation on each helix, which could allow the formation of a disulfide linkage. Disulfide formation between the two cysteines of the T259C,S545C mutant was initially found to be unfavorable, but SDS-PAGE later showed that it occurs at higher pH values. Treatment of the T259C,S545C mutant with the crosslinker 2,3-dibromomaleimide (DBM) resulted in a 50% reduction in catalytic activity. No loss of activity was observed when the single mutant S545C or the wild type was treated with DBM. Single mutants T259C and L541C both showed approximately 20% reductions in rate after addition of DBM, and double mutants T259C,L541C and S263C,S545C showed approximately 30% reductions. Single mutants T259C and L541C and double mutants T259C,S545C and T259C,L541C showed increased activity after incubation with N-ethylmaleimide (NEM), while the S263C,S545C double mutant showed a slight decrease in activity in the presence of NEM. It is unclear how NEM and DBM interact with the molecule, but this could readily be determined through mass spectrometry experiments.
Abstract:
In my thesis, I use literary criticism, knowledge of Russian, and elements of translation theory to study the seminal poet of the Russian literary tradition: Aleksandr Pushkin. In his most famous work, Eugene Onegin, Pushkin explores the cultural and linguistic divide in place at the turn of the 19th century in Russia. Pushkin stands on the peripheries of several colliding worlds; never fully committing to any of them, he acts as a translator between various realms of the 19th-century Russian experience. Through his narrator, he adeptly occupies the voices, styles, and modes of expression of various characters, displaying competency in all realms of Russian life. In examining Tatiana, his heroine, the reader witnesses her development as analogous to the author's. At the center of the text stands the act of translation itself: as the narrator "translates" Tatiana's love letter from French to Russian, the author-narrator declares his function as a mediator, not only between languages, but also between cultures, literary canons, social classes, and identities. Tatiana, as both main character and the narrator's muse, emerges as the most complex figure in the novel, and her language manifests itself as the most direct and the most capable of sincerity in the novel. The elements of Russian folklore incorporated into her language speak to Pushkin's appreciation for the rich Russian folklore tradition. In his exaltation of language considered "common", "low" speech is juxtaposed with its lofty counterpart; along the way, he incorporates myriad foreign borrowings. An active creator of Russia's new literary language, Pushkin traverses linguistic boundaries to synthesize a fragmented Russia. In the process, he creates a work so thoroughly tied to language and entrenched in complex cultural traditions that many scholars have argued for its untranslatability.
Abstract:
As lightweight and slender structural elements are used more frequently in design, large-scale structures become more flexible and susceptible to excessive vibrations. To ensure the functionality of a structure, the dynamic properties of the occupied structure need to be estimated during the design phase. Traditional analysis methods model occupants simply as additional mass; however, research has shown that human occupants are better modeled as an additional degree of freedom. In the United Kingdom, active and passive crowd models have been proposed by the Joint Working Group (JWG) as a result of a series of analytical and experimental studies. The crowd models are expected to yield a more accurate estimate of the dynamic response of the occupied structure. However, experimental testing recently conducted through a graduate student project at Bucknell University indicated that the proposed passive crowd model might not accurately represent the occupants' impact on the structure. The objective of this study is to assess the validity of the JWG crowd models by comparing the dynamic properties obtained from experimental testing data with analytical modeling results. The experimental data used in this study were collected by Firman in 2010. The analytical results were obtained by performing a time-history analysis on a finite element model of the occupied structure. The crowd models were created from the JWG recommendations combined with the physical properties of the occupants during the experimental study. SAP2000 was used to create the finite element models and to run the analyses; Matlab and ME'scope were used to extract the dynamic properties of the structure from the SAP2000 time-history results. The results indicate that the active crowd model can quite accurately represent the impact of occupants standing with bent knees, while the passive crowd model could not properly simulate the dynamic response of the structure when occupants were standing straight or sitting. Future work involves improving the passive crowd model and evaluating the crowd models with full-scale structure models and operating data.
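To make the modeling distinction concrete, the sketch below contrasts lumping a passive crowd into the structure's modal mass with attaching it as its own degree of freedom; the second mode and the shift in the structure's frequency are exactly what an added-mass model cannot capture. All parameter values are illustrative assumptions, not the JWG-prescribed ones.

```python
# Minimal sketch of the "occupant as an extra degree of freedom" idea:
# compare (a) a 1-DOF structure with the crowd lumped as added mass against
# (b) a 2-DOF system where the crowd attaches through its own stiffness.
# All numbers are illustrative; the JWG models prescribe their own values.
import numpy as np
from scipy.linalg import eigh

m_s, k_s = 20e3, 8e6      # structure modal mass (kg) and stiffness (N/m)
m_c, k_c = 2e3, 0.5e6     # crowd mass (kg) and effective stiffness (N/m)

# (a) crowd as pure added mass: single DOF
f_added_mass = np.sqrt(k_s / (m_s + m_c)) / (2 * np.pi)

# (b) crowd as an attached mass-spring: two DOFs, generalized eigenproblem
M = np.diag([m_s, m_c])
K = np.array([[k_s + k_c, -k_c],
              [-k_c,       k_c]])
eigvals = eigh(K, M, eigvals_only=True)   # omega^2 values, ascending
f_two_dof = np.sqrt(eigvals) / (2 * np.pi)

print(f"added-mass model:  {f_added_mass:.2f} Hz")
print(f"2-DOF crowd model: {f_two_dof[0]:.2f} Hz and {f_two_dof[1]:.2f} Hz")
```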
Abstract:
This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. The model was developed by studying a small set of variables that together determine a system's throughput. Its importance lies in helping system designers decide whether to commit to designing a reconfigurable distributed system, based on estimated performance and hardware costs. Because custom hardware design and distributed system design are both time-consuming and costly, it is important for designers to make decisions about system feasibility early in the development cycle. Based on experimental data, the model presented in this paper shows a close fit, with less than 10% error on average. The model is limited to a certain range of problems, but it can still be used within those limitations, and it provides a foundation for further work on modeling reconfigurable distributed systems.
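The abstract does not reproduce the thesis's actual equation, so the following is only a hypothetical sketch of the kind of throughput model described: a handful of variables (data volume per job, link bandwidth, per-job computation time, node count) combined into a jobs-per-second estimate. Every name and parameter here is an assumption for illustration, not the thesis's model.

```python
# Hypothetical sketch of a throughput model for a reconfigurable distributed
# system. The variable set (communication time, FPGA computation time, node
# count) is an assumption about the kind of parameters such a model uses;
# the thesis's actual equation is not given in the abstract.

def estimated_throughput(n_nodes, bytes_per_job, link_bandwidth, t_compute):
    """Jobs/second, assuming communication and computation overlap across
    nodes and the slower of the two stages limits each node's pipeline."""
    t_comm = bytes_per_job / link_bandwidth   # seconds to ship one job's data
    t_job = max(t_comm, t_compute)            # per-node bottleneck stage
    return n_nodes / t_job

# Example: 8 FPGA boards, 1 MiB per job over 100 Mbit/s, 50 ms of computation
print(f"{estimated_throughput(8, 2**20, 100e6 / 8, 0.050):.1f} jobs/s")
```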
Abstract:
Biodegradable nanoparticles are at the forefront of drug delivery research, as they provide numerous advantages over traditional drug delivery methods. An important factor affecting the ability of nanoparticles to circulate within the bloodstream and interact with cells is their morphology. In this study, a novel processing method, confined impinging jet mixing, was used to form poly(lactic acid) nanoparticles through a solvent-diffusion process, with Pluronic F-127 used as a stabilizing agent. The study focused on the effects of Reynolds number (flow rate), surfactant presence during mixing, and polymer concentration on the morphology of poly(lactic acid) nanoparticles. In addition to examining the parameters affecting morphology, this study attempted to improve nanoparticle isolation and purification methods, both to increase nanoparticle yield and to ensure that specific morphologies were not being excluded during isolation and purification. The isolation and purification methods used were centrifugation and a stir cell. The study successfully produced particles with pyramidal and cubic morphologies; however, the yield of non-spherical particles was very low, and great variability existed between replicate trials. Surfactant was determined to be very important for the stabilization of nanoparticles in solution but appears to be unnecessary for the formation of nanoparticles. Isolation and purification methods that produce a high yield of surfactant-free particles have still not been perfected, and additional testing will be necessary for improvement.
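Since the Reynolds number is the knob the study turns by varying flow rate, a short sketch of the underlying calculation may help: Re = ρvd/μ for a jet of mean velocity v through a nozzle of diameter d. The nozzle diameter and fluid properties below are assumed for illustration and are not the study's actual rig parameters.

```python
# Sketch of the Reynolds-number calculation linking pump flow rate to mixing
# intensity in a confined impinging jet mixer: Re = rho * v * d / mu.
# Nozzle diameter and fluid properties are illustrative assumptions.
import math

def jet_reynolds(flow_rate_ml_min, nozzle_d_m, rho=1000.0, mu=1.0e-3):
    """Re for one jet: volumetric flow rate (mL/min) through a circular
    nozzle of diameter nozzle_d_m; rho in kg/m^3, mu in Pa*s."""
    q = flow_rate_ml_min * 1e-6 / 60.0            # m^3/s
    v = q / (math.pi * (nozzle_d_m / 2) ** 2)     # mean jet velocity, m/s
    return rho * v * nozzle_d_m / mu

for q in (10, 50, 100):                           # mL/min, hypothetical settings
    print(f"{q:4d} mL/min -> Re ~ {jet_reynolds(q, 0.5e-3):.0f}")
```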
Abstract:
Recent efforts to optimize NMR spectroscopy have focused on hardware innovations such as novel probes and higher field strengths. Only recently has the potential to enhance the sensitivity of NMR through data acquisition strategies been investigated. This thesis focuses on enhancing the signal-to-noise ratio (SNR) of NMR using non-uniform sampling (NUS). After first establishing the concept and exact theory of compounding sensitivity enhancements in multiple non-uniformly sampled indirect dimensions, a new result was derived: NUS enhances both SNR and resolution at any given signal evolution time. In contrast, uniform sampling alternately optimizes SNR (t < 1.26T2) or resolution (t ~ 3T2), each at the expense of the other. Experiments were designed and conducted on a plant natural product to explore this behavior of NUS, in which SNR and resolution continue to improve as acquisition time increases. Absolute sensitivity improvements of 1.5 and 1.9 are possible in each indirect dimension for matched and 2x-biased exponentially decaying sampling densities, respectively, at an acquisition time of πT2. Recommendations for breaking into the linear regime of maximum entropy (MaxEnt) reconstruction are proposed. Furthermore, examination of a novel sinusoidal sampling density yielded improved line shapes in MaxEnt reconstructions of NUS data and enhancement comparable to a matched exponential sampling density. The Absolute Sample Sensitivity derived and demonstrated here for NUS holds great promise for expanding the adoption of non-uniform sampling.
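As a concrete picture of what "matched" and "2x biased" mean here, the sketch below draws a NUS schedule for one indirect dimension from an exponentially decaying sampling density: matched decays with the signal's T2, and a 2x-biased density decays twice as fast. Grid size, sampling fraction, and T2 are illustrative assumptions, not the thesis's acquisition parameters.

```python
# Sketch: drawing a non-uniform sampling (NUS) schedule for an indirect
# dimension with an exponentially decaying sampling density. bias=1 gives a
# density matched to T2; bias=2 gives the "2x biased" case.
import numpy as np

rng = np.random.default_rng(7)

def nus_schedule(n_grid, n_keep, dwell, t2, bias=1.0):
    """Pick n_keep of n_grid evolution increments, weighted by
    exp(-bias * t / T2)."""
    t = np.arange(n_grid) * dwell
    p = np.exp(-bias * t / t2)
    p /= p.sum()
    return np.sort(rng.choice(n_grid, size=n_keep, replace=False, p=p))

# 512-point grid, keep 128 points (25% sampling), dwell 1 ms, T2 = 100 ms
print(nus_schedule(512, 128, 1e-3, 0.100, bias=1.0)[:16], "...")
```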
Abstract:
Solid oxide fuel cells (SOFCs) provide a potentially clean way of using energy sources. One important aspect of a functioning fuel cell is the anode and its characteristics (e.g., conductivity). Infiltration of conductor particles has been shown to be a lower-cost production method with comparable functionality. While these methods have been demonstrated experimentally, there is a vast range of variables to consider, and because manufacture is slow, a model is desired to aid in developing the desired anode formulation. This thesis aims to (1) use an idealized system to determine the appropriate system size and aspect ratio for computing the percolation threshold and effective conductivity, and (2) simulate the infiltrated fabrication method to determine the effective conductivity and percolation threshold as a function of ceramic and pore-former particle size, particle fraction, and the cell's final porosity. The idealized-system study found that the aspect ratio does not affect the cell's functionality and that an aspect ratio of 1 is the most computationally efficient choice; additionally, at cell sizes greater than 50x50, the conductivity asymptotes to a constant value. The infiltrated-model simulations showed that increasing the size of the ceramic (YSZ) and pore-former particles decreases the percolation threshold and increases the effective conductivity at low loadings. Furthermore, by decreasing the porosity of the cell, the percolation threshold and effective conductivity at low loadings can also be increased.
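The percolation question at the heart of the model can be illustrated with a much simpler stand-in: randomly occupy sites of a square grid with conductor "particles" and ask whether an occupied cluster spans the cell. The thesis's infiltrated model is far richer (particle sizes, porosity, infiltration loading); this sketch only shows the threshold concept on an assumed grid.

```python
# Simplified sketch of site percolation on an L x L grid: does an occupied
# cluster connect the top row to the bottom row? The spanning probability
# rises sharply near the threshold (~0.593 for a square lattice).
import numpy as np
from scipy.ndimage import label

def spans(L, fill, rng):
    occupied = rng.random((L, L)) < fill
    labels, _ = label(occupied)                 # 4-connected clusters
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                   # shared cluster id = spanning

rng = np.random.default_rng(0)
for fill in (0.40, 0.55, 0.59, 0.65):
    hits = sum(spans(100, fill, rng) for _ in range(50))
    print(f"fill={fill:.2f}: spanning in {hits}/50 trials")
```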
Abstract:
The goal of this paper is to contribute to the understanding of complex polynomials and Blaschke products, two very important function classes in mathematics. For a polynomial, $f,$ of degree $n,$ we study when it is possible to write $f$ as a composition $f=g\circ h$, where $g$ and $h$ are polynomials, each of degree less than $n.$ A polynomial is defined to be \emph{decomposable} if such an $h$ and $g$ exist, and a polynomial is said to be \emph{indecomposable} if no such $h$ and $g$ exist. We apply the results of Rickards in \cite{key-2}. We show that $$C_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,(z-z_{1})(z-z_{2})...(z-z_{n})\,\mbox{is decomposable}\},$$ has measure $0$ when considered as a subset of $\mathbb{R}^{2n}.$ Using this we prove the stronger result that $$D_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,\mbox{There exists\,}a\in\mathbb{C}\,\,\mbox{with}\,\,(z-z_{1})(z-z_{2})...(z-z_{n})(z-a)\,\mbox{decomposable}\},$$ also has measure zero when considered as a subset of $\mathbb{R}^{2n}.$ We show that for any polynomial $p$, there exists an $a\in\mathbb{C}$ such that $p(z)(z-a)$ is indecomposable, and we also examine the case of $D_{5}$ in detail. The main work of this paper studies finite Blaschke products, analytic functions on $\overline{\mathbb{D}}$ that map $\partial\mathbb{D}$ to $\partial\mathbb{D}.$ In analogy with polynomials, we discuss when a degree $n$ Blaschke product, $B,$ can be written as a composition $C\circ D$, where $C$ and $D$ are finite Blaschke products, each of degree less than $n.$ Decomposable and indecomposable are defined analogously. Our main results are divided into two sections. First, we equate a condition on the zeros of the Blaschke product with the existence of a decomposition where the right-hand factor, $D,$ has degree $2.$ We also equate decomposability of a Blaschke product, $B,$ with the existence of a Poncelet curve, whose foci are a subset of the zeros of $B,$ such that the Poncelet curve satisfies certain tangency conditions. This result is hard to apply in general, but it has a very nice geometric interpretation when we desire a composition whose right-hand factor has degree 2 or 3. Our second section of finite Blaschke product results builds on the work of Cowen in \cite{key-3}. For a finite Blaschke product $B,$ Cowen defines the so-called monodromy group, $G_{B},$ of the finite Blaschke product. He then equates the decomposability of a finite Blaschke product, $B,$ with the existence of a nontrivial partition, $\mathcal{P},$ of the branches of $B^{-1}(z),$ such that $G_{B}$ respects $\mathcal{P}$. We present an in-depth analysis of how to calculate $G_{B}$, extending Cowen's description. These methods allow us to equate the existence of a decomposition whose left-hand factor has degree 2 with a simple condition on the critical points of the Blaschke product. In addition, we are able to put a condition on the structure of $G_{B}$ for any decomposable Blaschke product satisfying certain normalization conditions. The final section of this paper discusses how one can put the results of the paper into practice to determine whether a particular Blaschke product is decomposable. We compare three major algorithms.
The first is a brute force technique where one searches through the zero set of $B$ for subsets which could be the zero set of $D$, exhaustively searching for a successful decomposition $B(z)=C(D(z)).$ The second algorithm involves simply examining the cardinality of the image, under $B,$ of the set of critical points of $B.$ For a degree $n$ Blaschke product, $B,$ if this cardinality is greater than $\frac{n}{2}$, the Blaschke product is indecomposable. The final algorithm attempts to apply the geometric interpretation of decomposability given by our theorem concerning the existence of a particular Poncelet curve. The final two algorithms can be implemented easily with the use of an HTML
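As an illustration of the second test, the sketch below numerically computes the critical points of a finite Blaschke product $B(z)=\prod_i (z-a_i)/(1-\bar{a_i}z)$ from an example zero set and counts the distinct critical values inside the disk; more than $n/2$ of them certifies indecomposability. The zero set is arbitrary example data, and the numerics (polynomial root finding, a tolerance for "distinct") are assumptions of this sketch, not the paper's implementation.

```python
# Sketch of the critical-value counting test: find the critical points of a
# finite Blaschke product B = P/Q inside the unit disk and count distinct
# values of B there. More than n/2 distinct values => indecomposable.
import numpy as np

def poly_from_factors(factors):
    """Multiply out linear factors given as [leading, constant] coefficient pairs."""
    p = np.array([1.0 + 0j])
    for f in factors:
        p = np.polymul(p, f)
    return p

def blaschke_critical_values(zeros, tol=1e-6):
    zeros = np.asarray(zeros, dtype=complex)
    P = poly_from_factors([[1, -a] for a in zeros])             # prod (z - a)
    Q = poly_from_factors([[-np.conj(a), 1] for a in zeros])    # prod (1 - conj(a) z)
    dP, dQ = np.polyder(P), np.polyder(Q)
    # B' = (P'Q - P Q') / Q^2; critical points are roots of the numerator
    crit = np.roots(np.polysub(np.polymul(dP, Q), np.polymul(P, dQ)))
    crit = crit[np.abs(crit) < 1]                               # keep those in the disk
    vals = np.polyval(P, crit) / np.polyval(Q, crit)
    distinct = []
    for v in vals:
        if all(abs(v - w) > tol for w in distinct):
            distinct.append(v)
    return distinct

zeros = [0.0, 0.5, -0.5, 0.3j]                                  # example degree-4 B
vals = blaschke_critical_values(zeros)
n = len(zeros)
print(len(vals), "distinct critical values;",
      "indecomposable" if len(vals) > n / 2 else "test inconclusive")
```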
Abstract:
Angiotensin II (Ang II), a key protein in the renin-angiotensin system, can induce cardiac hypertrophy through an intracrine system as well as affect gene transcription. The receptor to Ang II responsible for this effect, AT1, has been localized to the nucleus of several cell types in addition to cardiomyocytes. In this study, we induced expression of Ang II in MC3T3 osteoblasts and K7M2 osteosarcoma cells and measured changes in protein expression of Annexin V and matrix metalloproteinase 2 (MMP2), proteins previously identified through mass spectrometry analysis as being regulated by Ang II. Annexin V is downregulated both in immortalized murine bone (MC3T3) cells and in cancerous immortalized murine (K7M2) cells induced to express Ang II. MC3T3 cells that express Ang II show a downregulation of MMP2 expression, but Ang II-expressing K7M2 cells show an upregulation of MMP2. The differential regulation of MMP2 between the cancerous and noncancerous cells implicates a role for Ang II in tumor metastasis, as MMP2 is a metastatic protein. Annexin V is used as a marker for apoptosis, but nothing is known of the function of the endogenous protein. That Annexin V is potentially regulated by Ang II provides more information with which to characterize the protein and could suggest a function for Annexin V as part of a signal transduction pathway inside the cell.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) streamlines the design and execution of experiments with the ns-3 network simulator. SAFE ensures that best practices are followed throughout the workflow of a network simulation study, guaranteeing that results are both credible and reproducible by third parties. Data analysis is a crucial part of this workflow, and one where mistakes are often made. Even in highly regarded venues, scientific graphics in numerous network simulation publications fail to include titles, units, legends, and confidence intervals. After studying the literature on network simulation methodology and information graphics visualization, I developed a visualization component for SAFE to help users avoid these errors in their scientific workflow. The functionality of this new component includes support for interactive visualization through a web-based interface and for the generation of high-quality static plots that can be included in publications. The overarching goal of my contribution is to help users create graphics that follow best practices in visualization and thereby succeed in conveying the right information about simulation results.
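The target style is easy to show in miniature. The sketch below is not SAFE's actual API (the abstract does not show it); it is a plain matplotlib example, with fabricated simulation results, of the four elements the abstract says are often missing: a title, labeled axes with units, a legend, and confidence intervals.

```python
# Minimal matplotlib sketch of the plotting practices discussed above:
# title, axis labels with units, legend, and 95% confidence intervals.
# The data are made up; this is not output from SAFE or ns-3.
import numpy as np
import matplotlib.pyplot as plt

load = np.linspace(0.1, 0.9, 9)                # offered load (fraction)
mean_delay = 5 / (1 - load)                    # fabricated mean delay (ms)
ci = 1.96 * 0.4 * mean_delay / np.sqrt(30)     # 95% CI from 30 replications

plt.errorbar(load, mean_delay, yerr=ci, capsize=3,
             label="mean of 30 runs, 95% CI")
plt.title("Queueing delay vs. offered load (ns-3 simulation)")
plt.xlabel("Offered load (fraction of capacity)")
plt.ylabel("Mean packet delay (ms)")
plt.legend()
plt.savefig("delay_vs_load.pdf")
```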
Abstract:
Among the philosophical ideas of Plato, perhaps the most famous is his doctrine of forms. This doctrine has faced harsh criticism due, in large part, to the interpretations of this position by modern philosophers such as René Descartes, John Locke, and Immanuel Kant. For example, Plato has been interpreted as presenting a "two-worlds" approach to form and thing and as advancing a rationalist approach to epistemology. His forms have often been interpreted as ideas and as perfect copies of the things of the visible world. In this thesis, I argue that these, along with other interpretations of Plato presented by the moderns, are based on misunderstandings of Plato's overall philosophy. In so doing, I attempt to show that the doctrine of forms cannot be directly interpreted into the language of Cartesian, Lockean, and Kantian metaphysics and epistemology, and thus should not be prematurely dismissed because of these modern Platonic interpretations. By analyzing the Platonic dialogues beside the writings of the modern philosophers, I conclude that three of the most prominent modern philosophers, as representatives of their respective philosophical frameworks, have fundamentally misunderstood the nature of Plato's famous doctrine of forms. This could have significant implications for the future of metaphysics and epistemology by providing an interpretation of Plato which adds to, instead of contradicting, the developments of modern philosophy.
Abstract:
In my thesis, I incorporate both psychological research and personal narratives in order to explain why, in the aftermath of the Vietnam War, the United States officially recognized Post-Traumatic Stress Disorder (PTSD) while the Vietnamese government did not. The absence of Vietnamese studies on the impact of PTSD on veterans, in comparison to the abundance of research collected on American soldiers, reflects not a disparity in the actual prevalence of the disorder but the influence of political policy on the scope of Vietnamese psychology. Personal narratives from Vietnamese civilians and soldiers thus reveal accounts of trauma otherwise hidden by the absence of Vietnamese psychological research. Although these two nations conspicuously differed in their responses to the prevalence of psychological trauma in war veterans, those responses demonstrated that both the recognition and the rejection of PTSD were results of sociopolitical factors: political ideologies, rather than scientific reasons, dictated whether the postwar trajectory of psychological research focused on fully exploring the impact of PTSD on veteran populations. The association of military defeat with psychological trauma thus fixed attention on certain groups of veterans, including former American and South Vietnamese soldiers, while ignoring the impact of trauma on veterans of the Viet Cong and the North Vietnamese Army. That psychological trauma correlated with a soldier's ideological background, rather than with exposure to actual traumatic experiences, demonstrates that cultural and sociopolitical factors are far more influential in the construction of PTSD than objective indicators of the disorder's prevalence. Culturally constructed responses to disorders such as PTSD therefore account for the subjective treatment of mental illness. The American and Vietnamese responses to veterans suffering from PTSD both demonstrated that evidence of mental health problems in an individual does not guarantee an immediate or appropriate diagnosis and treatment regimen. External authorities whose primary aims are not necessarily concerned with the objective treatment of all victims of mental illness subjectively dictate mental health care policy, and therefore risk ignoring or marginalizing the needs of individuals who require proper treatment.
Abstract:
Aerosols are known to have important effects on climate, the atmosphere, and human health. The extent of those effects is unknown and depends largely on the interaction of aerosols with water in the atmosphere. Ambient aerosols are complex mixtures of both inorganic and organic compounds. The cloud condensation nuclei (CCN) activities, hygroscopic behavior, and particle morphology of a monocarboxylic amino acid (leucine) and a dicarboxylic amino acid (glutamic acid) were investigated. Activation diameters at various supersaturation conditions were experimentally determined and compared with Köhler theoretical values, using a form of the theory that accounts for both surface tension and the limited solubility of organic compounds. It was discovered that glutamic acid aerosols readily took on water both when relative humidity was less than 100% and when supersaturation was reached, while leucine did not show any water activation under those conditions. Moreover, the study suggests that Köhler theory describes the CCN activity of these organic compounds well when only the surface tension of the compound is taken into account and complete solubility is assumed. The single parameter κ was also computed using both CCN data and hygroscopic growth factors (GFs); values of κ range from 0.17 to 0.53 using CCN data and from 0.09 to 0.2 using GFs. Finally, the study suggests that during the water-evaporation/particle-nucleation process, crystallization from solution droplets takes place at different locations: for glutamic acid at the particles' center and for leucine at the particles' boundary.
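For context on what the reported κ values mean, the sketch below evaluates the single-parameter κ-Köhler relation (in the Petters and Kreidenweis form) and finds the critical supersaturation for an assumed dry particle; the dry diameter and κ value used are illustrative, not the paper's measurements.

```python
# Sketch of kappa-Koehler theory: equilibrium saturation ratio over a
# droplet, S(D) = Raoult term * Kelvin term, maximized over wet diameter D
# to find the critical supersaturation. D_dry and kappa are illustrative.
import numpy as np

R, T = 8.314, 298.15          # gas constant J/(mol K), temperature K
Mw, rho_w = 0.018, 997.0      # molar mass (kg/mol) and density (kg/m^3) of water
sigma = 0.072                 # N/m, surface tension of water assumed

def saturation_ratio(D, D_dry, kappa):
    """Equilibrium saturation ratio over a droplet of wet diameter D (m)."""
    kelvin = np.exp(4 * sigma * Mw / (R * T * rho_w * D))
    raoult = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1 - kappa))
    return raoult * kelvin

D_dry, kappa = 80e-9, 0.3     # hypothetical 80 nm dry particle, mid-range kappa
D = np.linspace(D_dry * 1.01, 50 * D_dry, 20000)
s_crit = (saturation_ratio(D, D_dry, kappa).max() - 1) * 100
print(f"D_dry = 80 nm, kappa = {kappa}: critical supersaturation ~ {s_crit:.2f}%")
```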
Abstract:
In my thesis, I use a historical approach together with close readings of fairy-tale texts and films to study the evolution of fathers, daughters, and marriage within three landmark tales: "Cinderella," "Sleeping Beauty," and "Snow White." Using the works of Giambattista Basile, Charles Perrault, Jacob and Wilhelm Grimm, and Walt Disney and his cohort of animators, I trace the historical trajectory of these three elements, analyzing both the ways they change and develop as history progresses and the ways they remain consistent. Through close and comparative readings of primary sources and films, I demonstrate the power structures and familial dynamics evident in the interactions of fathers and daughters. Specifically, I show that through the weakness and ineptitude of fairy-tale fathers, fairy-tale daughters are able to gain power, authority, and autonomy by using magic and marriage to navigate patriarchal systems. This work is important because it explores how each tale is a product of the story before it; thus, for these tales to survive the test of time, we must not only recognize the academic merit of the Disney stories but also remember them and others as we forge new paths in the stories we use to teach both children and parents. Above all, it traces the historical trend evident in the evolving relationships between fathers and daughters, a relationship that ultimately reveals the deep underlying need for family within all of us.