860 results for college park


Abstract:

This dissertation is concerned with experiencer arguments and what they tell us about the grammar. There are two main types of experiencers I discuss: experiencers of psychological verbs and experiencers of raising constructions. I question the notion of ‘experiencer’ itself and explore some possible accounts for the ‘psych-effects’. I argue that the ‘experiencer theta role’ is conceptually unnecessary and unsupported by syntactic evidence: ‘experiencers’ can be reduced to different types of arguments. Taking Brazilian Portuguese as my main case study, I claim that languages may grammaticalize psychological predicates and their arguments in different ways. These verb classes exist in languages independently, and the behavior of psych-verbs can be explained by the argument structure of the verb class to which they belong. I further discuss experiencers in raising structures, and the defective intervention effects triggered by different types of experiencers (e.g., DPs, PPs, clitics, traces) in a variety of languages. I show that defective intervention is largely predictable across languages, with little variation in its effects. Moreover, I argue that defective intervention can be captured by a notion of minimality that requires interveners to be syntactic objects and not syntactic occurrences (a chain, and not a copy/trace). The main observation is that once a chain is no longer in the c-command domain of a probe, defective intervention is obviated, i.e., it no longer applies. I propose a revised version of the Minimal Link Condition (Chomsky, 1995) in which only syntactic objects, and not copies, may intervene in syntactic relations. This view of minimality can explain the core cases of defective intervention crosslinguistically.

Abstract:

Recent legislation and initiatives set forth high academic expectations for all high school graduates in the area of reading (National Governors Association Center for Best Practices, 2010; Every Student Succeeds Act, 2015). To determine which students need additional support to meet these reading standards, teachers can conduct universal screening using formative assessments. Maze Curriculum-Based Measurement (Maze-CBM) is a commonly used screening and progress monitoring assessment that the National Center on Intensive Intervention (2013) and the Center on Instruction (Torgesen & Miller, 2009) recommend. Despite this recommendation, little research has been conducted on the reliability and validity of Maze-CBM for measuring the reading ability of students at the secondary level (Mitchell & Wexler, 2016). In the papers included in this dissertation, I present an initial investigation into the use of Maze-CBM with secondary students. In the first paper, I reviewed prior studies of Maze-CBM for students in Grades 6 through 12. Next, in the second paper, I investigated the alternate-form reliability and the validity of Maze-CBM for screening students in Grades 9 and 10 using signal detection theory methods. In the third paper, I examined the effect of genre on Maze-CBM scores with a sample of students in Grades 9 and 10 using multilevel modeling. Across these three papers, several important findings emerged. First, few studies have investigated the technical adequacy of Maze-CBM for screening and progress monitoring students in Grades 6 through 12, and only two (McMaster, Wayman, & Cao, 2006; Pierce, McMaster, & Deno, 2010) examined its technical adequacy for high school students. Second, the reliability of Maze-CBM for secondary students is often below the levels considered acceptable for making screening or progress monitoring decisions (.80 and above and .90 and above, respectively; Salvia, Ysseldyke, & Bolt, 2007). Third, Maze-CBM scores show promise as a valid screening tool for the reading ability of secondary students. Finally, I found that the genre of the text used in the Maze-CBM assessment does affect scores for students in Grades 9 and 10.
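
To make the screening terminology concrete, the sketch below computes two quantities such validity studies typically report: alternate-form reliability (a Pearson correlation between two parallel Maze-CBM forms) and an area under the ROC curve from signal detection theory, obtained here with a rank-based (Mann-Whitney) formula. The scores, outcome labels, and cut-off are hypothetical and are not taken from the dissertation.

```python
import numpy as np
from scipy.stats import rankdata

def alternate_form_reliability(form_a, form_b):
    """Pearson correlation between scores on two parallel Maze-CBM forms."""
    return np.corrcoef(form_a, form_b)[0, 1]

def roc_auc(scores, met_benchmark):
    """Area under the ROC curve via the Mann-Whitney U statistic: the probability
    that a student who met the benchmark outscores one who did not."""
    scores = np.asarray(scores, dtype=float)
    positive = np.asarray(met_benchmark, dtype=bool)
    ranks = rankdata(scores)                     # average ranks handle ties
    n_pos, n_neg = positive.sum(), (~positive).sum()
    u = ranks[positive].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical fall screening data for 200 students
rng = np.random.default_rng(0)
form_a = rng.normal(30, 8, 200)                        # Maze-CBM form A scores
form_b = form_a + rng.normal(0, 5, 200)                # parallel form B scores
met_benchmark = form_a + rng.normal(0, 10, 200) > 28   # hypothetical end-of-year outcome

print(alternate_form_reliability(form_a, form_b))
print(roc_auc(form_a, met_benchmark))
```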

Abstract:

The high rate of teacher attrition in urban schools is well documented. While the rate may not seem like a problem in Carter County, it equates to hundreds of teachers who need to be replaced annually. Since school year (SY) 2007-08, Carter County has lost over 7,100 teachers, approximately half (50.1%) of whom resigned, often going to neighboring, higher-paying jurisdictions, as suggested by exit survey data (SY2016-2020 Strategic Plan). Included in this study is a range of practices principals use to retain teachers. While the role of the principal is recognized as a critical element in teacher retention, few studies explore the specific practices principals implement to retain teachers and how they use their time to accomplish this task. Through interviews, observations, document analysis, and reflective notes, the study identifies the practices that four elementary school principals of high- and relatively low-attrition schools use to support teacher retention. In doing so, the study uses a qualitative cross-case analysis approach. The researcher examined the following leadership practices of the principal and their impact on teacher retention: (a) providing leadership, (b) supporting new teachers, (c) training and mentoring teaching staff, (d) creating opportunities for collaboration, (e) creating a positive school climate, and (f) promoting teacher autonomy. The following research questions served as a foundational guide for the development and implementation of this study: 1. How do principals prioritize addressing teacher attrition or retention relative to all of their other responsibilities, and how do they allocate their time to this challenge? 2. What do principals in schools with low attrition rates do to promote retention that principals in high-attrition schools do not? What specific practices or interventions are principals in these two types of schools using to retain teachers, and is there evidence to support their use? The findings that emerged from the data revealed that the practices principals use to influence and support teachers do not differ across the four schools.

Abstract:

In this thesis, we explore approaches to faculty instructional change in astronomy and physics. We primarily focus on professional development (PD) workshops, which are a central mechanism used within our community to help faculty improve their teaching. Although workshops serve a critical role in promoting more equitable instruction, we rarely assess them through careful consideration of how they engage faculty. To encourage a shift towards more reflective, research-informed PD, we developed the Real-Time Professional Development Observation Tool (R-PDOT) to document the form and focus of faculty's engagement during workshops. We then analyze video recordings of faculty's interactions during the Physics and Astronomy New Faculty Workshop, focusing on instances where faculty might engage in pedagogical sense-making. Finally, we consider insights gained from our own local, team-based effort to improve a course sequence for astronomy majors. We conclude with recommendations for PD leaders and researchers.

Abstract:

Cnidarians are often considered simple animals, but the more than 13,000 estimated species (e.g., corals, hydroids, and jellyfish) of this early-diverging phylum exhibit a broad diversity of forms, functions, and behaviors, some of which are demonstrably complex. In particular, cubozoans (box jellyfish) are cnidarians that have evolved a number of distinguishing features. Some cubozoan species possess complex mating behaviors or particularly potent stings, and all possess well-developed light sensation involving image-forming eyes. Like all cnidarians, cubozoans have specialized subcellular structures called nematocysts that are used in prey capture and defense. The objective of this study is to contribute to the development of the box jellyfish Alatina alata as a model cnidarian. This cubozoan species offers numerous advantages for investigating the morphological and molecular traits underlying complex processes and coordinated behavior in free-living medusozoans (i.e., jellyfish), and more broadly throughout Metazoa. First, I provide an overview of Cnidaria with an emphasis on the current understanding of genes and proteins implicated in complex biological processes in a few select cnidarians. Second, to further develop resources for A. alata, I provide a formal redescription of this cubozoan and establish a neotype specimen voucher, which serve to stabilize the taxonomy of the species. Third, I generate the first functionally annotated transcriptome of adult and larval A. alata tissue and apply preliminary differential expression analyses to identify candidate genes implicated broadly in biological processes related to prey capture and defense, vision and the phototransduction pathway, and sexual reproduction and gametogenesis. Fourth, to better understand venom diversity and the mechanisms controlling venom synthesis in A. alata, I use bioinformatics to investigate gene candidates with dual roles in venom and digestion, and I review the biology of prey capture and digestion in cubozoans. The morphological and molecular resources presented herein contribute to understanding the evolution of cubozoan characteristics and serve to facilitate further research on this emerging cubozoan model.
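
As a concrete, much-simplified illustration of how candidate genes can be ranked between the adult and larval libraries, the sketch below normalizes raw read counts to counts per million and ranks transcripts by log2 fold change; a real analysis would model replicates and dispersion, and none of the names or counts here come from the A. alata data.

```python
import numpy as np

def log2_fold_changes(adult_counts, larval_counts, pseudocount=0.5):
    """Log2 fold change (adult vs. larval) after counts-per-million (CPM)
    normalization. A toy stand-in for a full differential expression analysis:
    no replicates and no dispersion model."""
    adult = np.asarray(adult_counts, dtype=float)
    larval = np.asarray(larval_counts, dtype=float)
    adult_cpm = (adult + pseudocount) / adult.sum() * 1e6
    larval_cpm = (larval + pseudocount) / larval.sum() * 1e6
    return np.log2(adult_cpm / larval_cpm)

# Hypothetical read counts for five transcripts (e.g., venom or opsin candidates)
adult = [1200, 15, 300, 80, 0]
larval = [100, 900, 310, 75, 40]
print(log2_fold_changes(adult, larval))
```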

Abstract:

Interaction of rocks with fluids can significantly change mineral assemblage and structure. This so-called hydrothermal alteration is ubiquitous in the Earth’s crust. Though the behavior of hydrothermally altered rocks can have planet-scale consequences, such as facilitating oceanic spreading along slow ridge segments and recycling volatiles into the mantle at subduction zones, the mechanisms involved in hydrothermal alteration are often microscopic. Fluid-rock interactions take place where the fluid and rock meet. Fluid distribution, flux rate, and reactive surface area control the efficiency and extent of hydrothermal alteration. Fluid-rock interactions such as dissolution, precipitation, and fluid-mediated fracture and frictional sliding lead to changes in porosity and pore structure that feed back into the hydraulic and mechanical behavior of the bulk rock. Examining the nature of this highly coupled system involves coordinating observations of the mineralogy and structure of naturally altered rocks with laboratory investigation of the fine-scale mechanisms of transformation under controlled conditions. In this study, I focus on fluid-rock interactions involving two common lithologies, carbonates and ultramafics, in order to elucidate the coupling between mechanical, hydraulic, and chemical processes in these rocks. I perform constant strain-rate triaxial deformation and constant-stress creep tests on several suites of samples while monitoring the evolution of sample strain, permeability, and physical properties. Subsequent microstructures are analyzed using optical and scanning electron microscopy. This work yields laboratory-based constraints on the extent and mechanisms of water weakening in carbonates and carbonation reactions in ultramafic rocks. I find that inundation with pore fluid weakens carbonates and enhances compaction, thereby reducing permeability; this effect is sensitive to pore fluid saturation with respect to calcium carbonate. Fluid inundation weakens dunites as well. The addition of carbon dioxide to the pore fluid enhances compaction and allows partial recovery of strength compared to pure-water samples. Enhanced compaction in the CO2-rich fluid samples is not, however, accompanied by enhanced permeability reduction. Analysis of sample microstructures indicates that precipitation of carbonates along fracture surfaces is responsible for the partial restrengthening, while channelized dissolution of olivine is responsible for permeability maintenance.
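
The permeability values tracked during such experiments are typically obtained from steady-state flow measurements via Darcy's law, k = Q·mu·L/(A·dP). The short sketch below shows that calculation for a cylindrical core; all sample dimensions and flow values are hypothetical and chosen only for illustration.

```python
import math

def darcy_permeability(flow_rate_m3_s, viscosity_pa_s, length_m, area_m2, dp_pa):
    """Steady-state permeability from Darcy's law: k = Q * mu * L / (A * dP)."""
    return flow_rate_m3_s * viscosity_pa_s * length_m / (area_m2 * dp_pa)

# Hypothetical cylindrical core, 25 mm diameter and 50 mm length
area = math.pi * 0.0125 ** 2                  # cross-sectional area, m^2
k = darcy_permeability(flow_rate_m3_s=1e-9,   # ~1 microliter of water per second
                       viscosity_pa_s=1e-3,   # water near room temperature
                       length_m=0.05,
                       area_m2=area,
                       dp_pa=1e6)             # 1 MPa pore pressure difference
print(f"permeability ~ {k:.1e} m^2")          # on the order of 1e-16 m^2
```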

Abstract:

This dissertation describes two studies on macroeconomic trends and cycles. The first chapter studies the impact of Information Technology (IT) on the U.S. labor market. Over the past 30 years, the employment and income shares of routine-intensive occupations have declined significantly relative to nonroutine occupations, and the overall U.S. labor income share has declined relative to capital. Furthermore, the decline of routine employment has been largely concentrated in recessions and the ensuing recoveries. I build a model of unbalanced growth to assess the role of computerization and IT in driving these labor market trends and cycles. I augment a neoclassical growth model with exogenous IT progress as a form of Routine-Biased Technological Change (RBTC). I show analytically that RBTC causes the overall labor income share to follow a U-shaped time path, as the monotonic decline of the routine labor share is increasingly offset by the monotonic rise of the nonroutine labor share and the elasticity of substitution between overall labor and capital declines under IT progress. Quantitatively, the model explains nearly all of the divergence between routine and nonroutine labor over the period 1986-2014, as well as the mild decline of the overall labor share between 1986 and the early 2000s. However, the model with IT progress alone cannot explain the accelerated decline of the labor income share after the early 2000s, suggesting that other factors, such as globalization, may have played a larger role in this period. Lastly, when nonconvex labor adjustment costs are present, the model generates a stepwise decline in routine labor hours, qualitatively consistent with the data. The timing of these trend adjustments can be significantly affected by aggregate productivity shocks and concentrated in recessions. The second chapter studies the implications of loss aversion for the business cycle dynamics of aggregate consumption and labor hours. Loss aversion refers to the fact that people are markedly more sensitive to losses than to gains. Loss-averse agents are very risk averse around the reference point and exhibit asymmetric responses to positive and negative income shocks. In an otherwise standard Real Business Cycle (RBC) model, I study loss aversion over consumption alone and over consumption and leisure together. My results indicate that how loss aversion affects business cycle dynamics depends critically on the nature of the reference point. If, for example, the reference point is the status quo, loss aversion dramatically lowers the effective intertemporal rate of substitution and induces excessive consumption smoothing. In contrast, if the reference point is fixed at a constant level, loss aversion generates a flat region in the decision rules and asymmetric impulse responses to technology shocks. Under a reasonable parametrization, loss aversion has the potential to generate asymmetric business cycles with deeper and more prolonged recessions.
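
One way to see the U-shaped labor share result stated above is to decompose the aggregate labor income share into routine and nonroutine components. The display below is an illustrative decomposition consistent with the mechanism described in the abstract, not the dissertation's full model; the symbols w_R, w_N, L_R, L_N, and Y are generic wage, hours, and output notation introduced here only for illustration.

```latex
% Aggregate labor income share split into routine (R) and nonroutine (N) parts
% (illustrative notation, not the dissertation's exact model).
\[
  s_L(t) = s_R(t) + s_N(t), \qquad
  s_R(t) = \frac{w_R(t)\,L_R(t)}{Y(t)}, \qquad
  s_N(t) = \frac{w_N(t)\,L_N(t)}{Y(t)}.
\]
% Under routine-biased technological change, s_R falls monotonically while s_N
% rises monotonically, so the overall share traces a U shape:
\[
  \dot{s}_L(t) = \dot{s}_R(t) + \dot{s}_N(t)
  \begin{cases}
    < 0 & \text{early on, while } |\dot{s}_R(t)| > \dot{s}_N(t),\\
    > 0 & \text{later, once } |\dot{s}_R(t)| < \dot{s}_N(t).
  \end{cases}
\]
```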

Abstract:

Placement of students with disabilities in private special-education schools remains costly and controversial. This is particularly concerning given the lack of research on the characteristics and quality of these restrictive settings. The purpose of this study was twofold: first, to identify the academic and vocational course offerings and behavioral supports provided in private special-education schools that serve high school students with emotional disabilities (ED); and second, to examine perceptions of the quality of services in these settings from the perspectives of public school case managers. Using a mixed-methods design to collect data, 9 administrative heads of private special-education schools were surveyed and 7 public school case managers were interviewed. Results indicated that (a) private special-education schools offer the basic academic core courses needed to meet graduation requirements, (b) vocational options for students enrolled in these schools are quite limited, (c) these schools provide a variety of behavioral interventions and supports, and (d) case managers are concerned about the lack of academic rigor and the inconsistent programming at these schools but applaud the fact that students with ED are exiting with a high school diploma. Findings from this study may have policy implications for improving and developing programming options for high school students with ED.

Abstract:

A pressing challenge for the study of animal ethics in early modern literature is the very breadth of the category “animal,” which occludes the distinct ecological and economic roles of different species. Understanding the significance of deer to a hunter as distinct from the meaning of swine for a London pork vendor requires a historical investigation into humans’ ecological and cultural relationships with individual animals. For the constituents of England’s agricultural networks – shepherds, butchers, fishwives, eaters at tables high and low – animals matter differently. While recent scholarship on food and animal ethics often emphasizes ecological reciprocation, I insist that this mutualism is always out of balance, both across and within species lines. Focusing on drama by William Shakespeare, Ben Jonson, and the anonymous authors of late medieval biblical plays, my research investigates how sixteenth-century theaters use food animals to mediate and negotiate the complexities of a changing meat economy. On the English stage, playwrights use food animals to impress upon audiences the ethico-political implications of land enclosure, forest emparkment, the search for new fisheries, and air and water pollution from urban slaughterhouses and markets. Concurrent developments in animal husbandry and theatrical production in the period thus led to new ideas about emplacement, embodiment, and the ethics of interspecies interdependence.

Abstract:

Dinoflagellates possess large genomes in which most genes are present in many copies. This has made studies of their genomic organization and phylogenetics challenging. Recent advances in sequencing technology have made deep sequencing of dinoflagellate transcriptomes feasible. This dissertation investigates the genomic organization of dinoflagellates to better understand the challenges of assembling dinoflagellate transcriptomic and genomic data from short-read sequencing methods, and it develops new techniques that utilize deep sequencing data to identify orthologous genes across a diverse set of taxa. To better understand the genomic organization of dinoflagellates, a genomic cosmid clone of the tandemly repeated gene Alcohol Dehydrogenase (AHD) was sequenced and analyzed. The organization of this clone was found to run counter to prevailing hypotheses of genomic organization in dinoflagellates. Further, a new non-canonical splicing motif was described that could greatly improve the automated modeling and annotation of genomic data. A custom phylogenetic marker discovery pipeline was written, incorporating methods that leverage the statistical power of large data sets. A case study on Stramenopiles was undertaken to test its utility in resolving relationships among known groups, as well as the phylogenetic affinity of seven unknown taxa. The pipeline generated a set of 373 genes useful as phylogenetic markers that successfully resolved relationships among the major groups of Stramenopiles and placed all unknown taxa on the tree with strong bootstrap support. This pipeline was then used to discover 668 genes useful as phylogenetic markers in dinoflagellates. Phylogenetic analysis of 58 dinoflagellates using this set of markers produced a phylogeny with good support for all branches. The Suessiales were found to be sister to the Peridiniales. The Prorocentrales formed a monophyletic group with the Dinophysiales that was sister to the Gonyaulacales. The Gymnodiniales were found to be paraphyletic, forming three monophyletic groups. While this pipeline was used here to find phylogenetic markers, it will likely also be useful for finding orthologs of interest for other purposes, for the discovery of horizontally transferred genes, and for the separation of sequences in metagenomic data sets.
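
To illustrate the core filtering step behind marker discovery pipelines of this kind, the sketch below keeps only ortholog clusters that are single-copy and sampled in a minimum number of taxa. The cluster table, taxon names, and threshold are hypothetical; this is not the dissertation's actual pipeline.

```python
from collections import Counter

def select_marker_clusters(clusters, min_taxa):
    """Keep ortholog clusters that are single-copy (at most one sequence per
    taxon) and sampled in at least `min_taxa` taxa -- the basic filter behind
    many phylogenetic marker discovery pipelines (illustrative only)."""
    markers = []
    for cluster_id, members in clusters.items():
        # `members` is a list of (taxon, sequence_id) pairs
        per_taxon = Counter(taxon for taxon, _ in members)
        if len(per_taxon) >= min_taxa and all(n == 1 for n in per_taxon.values()):
            markers.append(cluster_id)
    return markers

# Hypothetical ortholog clusters from a transcriptome clustering step
clusters = {
    "OG0001": [("Symbiodinium", "s1"), ("Alexandrium", "a7"), ("Karenia", "k3")],
    "OG0002": [("Symbiodinium", "s2"), ("Symbiodinium", "s9")],   # multi-copy, rejected
    "OG0003": [("Alexandrium", "a1"), ("Karenia", "k8")],         # too few taxa
}
print(select_marker_clusters(clusters, min_taxa=3))   # -> ['OG0001']
```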

Abstract:

The survival and descent of cells are universally dependent on maintaining their proteins in a properly folded condition. It is widely accepted that the information for the folding of the nascent polypeptide chain into a native protein is encoded in the amino acid sequence, and the Nobel Laureate Christian Anfinsen was the first to demonstrate that a protein could spontaneously refold after complete unfolding. However, it became clear that the observed folding rates for many proteins were much slower than the rates estimated in vivo. This led to the recognition that protein-protein interactions are required to promote proper folding. A unique group of proteins, the molecular chaperones, is responsible for maintaining protein homeostasis during normal growth as well as under stress conditions. Chaperonins (CPNs) are ubiquitous and essential chaperones. They form ATP-dependent, hollow complexes that encapsulate polypeptides in two back-to-back stacked multisubunit rings, facilitating protein folding through highly cooperative allosteric articulation. CPNs are usually classified into Group I and Group II. Here, I report the characterization of a novel CPN belonging to a third group, recently discovered in bacteria. Group III CPNs have a close phylogenetic association with the Group II CPNs found in Archaea and Eukarya, and they may be a relic of the Last Common Ancestor of the CPN family. The genes encoding the Group III CPNs from Carboxydothermus hydrogenoformans and Candidatus Desulforudis audaxviator were cloned in E. coli and overexpressed in order to characterize the proteins and to demonstrate their ability to function as ATPase chaperones. The opening and closing cycle of the Chy chaperonin was examined via site-directed mutations affecting the ATP binding site at R155. To relate the mutational analysis to the structure of the CPN, crystal structures of both the AMP-PNP-bound (an ATP analogue) and ADP-bound forms were obtained in collaboration with Sun-Shin Cha in Seoul, South Korea. The ADP and ATP binding site substitutions resulted in structures frozen in the open and closed conformations. From this, mutants were designed to validate hypotheses regarding key ATP-interacting sites as well as important stabilizing interactions, and to observe the physical properties of the resulting complexes by calorimetry.

Abstract:

We describe the construction and characterization of a new apparatus that can produce degenerate quantum gases of strontium. The realization of degenerate gases is an important first step toward future studies of quantum magnetism. Three of the four stable isotopes of strontium have been cooled into the degenerate regime. The experiment can make nearly pure Bose-Einstein condensates containing approximately 1x10^4 atoms for strontium-86 and approximately 4x10^5 atoms for strontium-84. We have also created degenerate Fermi gases of strontium-87 with a reduced temperature T/T_F of approximately 0.2. The apparatus will be able to produce Bose-Einstein condensates of strontium-88 with straightforward modifications. We also report the first experimental and theoretical results from the strontium project. We have developed a technique to accelerate the continuous loading of strontium atoms into a magnetic trap: by applying a laser addressing the 3P1 to 3S1 transition in our magneto-optical trap, we can increase the rate at which atoms populate the magnetically trapped 3P2 state by up to 65%. Quantum degenerate gases of atoms in the metastable 3P0 and 3P2 states are a promising platform for quantum simulation of systems with long-range interactions. We have performed an initial numerical study of a method to transfer the ground-state degenerate gases that we can currently produce into one of the metastable states via a three-photon transition. Numerical simulations of the optical Bloch equations governing the three-photon transition indicate that >90% of a ground-state degenerate gas can be transferred into a metastable state.
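
As a minimal sketch of the kind of numerical transfer estimate described here, the code below evolves an effective two-level system (the ground state coupled to the metastable target at an effective three-photon Rabi frequency) through a resonant pi-pulse. It neglects the intermediate states, decay, and dephasing that the full optical Bloch equations include, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Effective two-level model of the three-photon transfer: |g> = ground state,
# |e> = metastable target state. Hypothetical parameters, hbar = 1.
omega = 2 * np.pi * 100.0    # effective three-photon Rabi frequency (rad/s)
delta = 0.0                  # three-photon detuning (rad/s)

# Rotating-frame Hamiltonian in the basis (|g>, |e>)
H = np.array([[0.0,        omega / 2],
              [omega / 2, -delta    ]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)   # all population starts in |g>
t_pi = np.pi / omega                         # duration of a resonant pi-pulse

# Coherent evolution: psi(t) = exp(-i H t) psi(0)
psi_t = expm(-1j * H * t_pi) @ psi0
transfer = abs(psi_t[1]) ** 2
print(f"fraction transferred to the metastable state: {transfer:.3f}")  # ~1.0 on resonance
```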

Abstract:

A diverse T cell receptor (TCR) repertoire is a prerequisite for effective viral clearance. However, knowledge of human TCR repertoires specific for defined viral antigens is limited. Recent advances in high-throughput sequencing (HTS) and single-cell sorting have revolutionized the study of human TCR repertoires to different types of viruses. In collaboration with the laboratory of Dr. Nan-ping Weng (National Institute on Aging, NIH), we applied unique molecular identifier (UMI)-labelled HTS, single-cell paired TCR analysis, surface plasmon resonance, and X-ray crystallography to exhaustively interrogate CD8+ TCR repertoires specific for cytomegalovirus (CMV) and influenza A (Flu) in HLA-A2+ humans. Our two CMV-specific TCR-pMHC structures and two Flu-specific TCR-pMHC structures provide a plausible explanation for the much higher diversity of CMV-specific than Flu-specific TCR repertoires in humans. Our comprehensive biochemical and structural portrait of two different anti-viral T cell responses may contribute to the future development of predictors of immunity or disease at the individual level.

Abstract:

The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a “secure” abstract machine may leak highly sensitive information, including the cryptographic keys that form the root of trust for these secure systems. This thesis broadly expands the investigation of a research direction called trace oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems: GhostRider, a hardware-software co-design that provides a hardware-based trace oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that makes it easier for programmers to develop such secure applications. All of these systems enjoy formal security guarantees while demonstrating performance one to several orders of magnitude better than prior systems.
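
To give a flavor of what "trace oblivious" means at the program level, the sketch below contrasts a direct array lookup, whose memory access pattern reveals the secret index, with a linear-scan lookup that touches every slot so the observable address trace is independent of the secret. This is only a toy illustration of the idea, not the Oblivious-RAM-based techniques used in GhostRider, SCVM, or ObliVM.

```python
def leaky_lookup(table, secret_index):
    # The single access at position `secret_index` leaks the secret to anyone
    # who can observe the sequence of memory addresses touched.
    return table[secret_index]

def oblivious_lookup(table, secret_index):
    # Touch every slot and select the wanted entry arithmetically, without
    # branching on which slot matched, so the data-access trace over `table`
    # is the same for every secret index. (Assumes integer entries; a real
    # implementation would also need a constant-time comparison.)
    result = 0
    for i in range(len(table)):
        is_match = int(i == secret_index)   # 0 or 1
        result += is_match * table[i]
    return result

table = [42, 7, 99, 13]
assert oblivious_lookup(table, 2) == leaky_lookup(table, 2) == 99
```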

Abstract:

The origin of observed ultra-high energy cosmic rays (UHECRs, energies in excess of $10^{18.5}$ eV) remains unknown, as extragalactic magnetic fields deflect these charged particles from their true origin. Interactions of these UHECRs at their source would invariably produce high-energy neutrinos. Because these neutrinos are chargeless and nearly massless, their propagation through the universe is unimpeded, and their detection can be correlated with the origin of UHECRs. Gamma-ray bursts (GRBs) are one of the few possible origins for UHECRs, observed as short, immensely bright outbursts of gamma-rays at cosmological distances. The energy density of GRBs in the universe is capable of explaining the measured UHECR flux, making them promising UHECR sources. Interactions between UHECRs and the prompt gamma-ray emission of a GRB would produce neutrinos that would be detected in coincidence with the GRB’s gamma-ray emission. The IceCube Neutrino Observatory can be used to search for these neutrinos in coincidence with GRBs, detecting neutrinos through the Cherenkov radiation emitted by secondary charged particles produced in neutrino interactions in the South Pole glacial ice. Restricting these searches to coincidence with GRB gamma-ray emission, analyses can be performed with very little atmospheric background. Previous searches have focused on detecting muon tracks from muon neutrino interactions from the Northern Hemisphere, where the Earth shields IceCube’s primary background of atmospheric muons, or spherical cascade events from neutrinos of all flavors from the entire sky, with no compelling neutrino signal found. In this dissertation, IceCube searches for neutrinos from GRBs are extended to muon tracks in the Southern Hemisphere in coincidence with 664 GRBs over five years of IceCube data. Though this region of the sky contains IceCube’s primary background of atmospheric muons, it is also where IceCube is most sensitive to neutrinos at the very highest energies, as Earth absorption in the Northern Hemisphere becomes relevant. As previous neutrino searches have strongly constrained neutrino production in GRBs, a new per-GRB analysis is introduced for the first time to discover neutrinos in coincidence with possibly rare neutrino-bright GRBs. A stacked analysis is also performed to discover a weak neutrino signal distributed over many GRBs. The results of this search are found to be consistent with atmospheric muon backgrounds. Combining this result with previously published searches for muon neutrino tracks in the Northern Hemisphere, cascade event searches over the entire sky, and an extension of the Northern Hemisphere track search in three additional years of IceCube data that is consistent with atmospheric backgrounds, the most stringent limits yet can be placed on prompt neutrino production in GRBs, which increasingly disfavor GRBs as primary sources of UHECRs in current GRB models.
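
For intuition about how a null result is turned into a limit, the sketch below computes a simple counting-experiment upper limit on the mean number of signal neutrinos given an observed event count and an expected atmospheric background. The actual IceCube GRB analyses use unbinned likelihood methods, and the numbers here are hypothetical.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_upper_limit(n_obs, background, cl=0.90):
    """Classical upper limit on the signal mean s for a counting experiment:
    find s such that P(N <= n_obs | background + s) = 1 - cl.
    A simplified stand-in for the unbinned likelihood limits used in IceCube
    GRB searches (illustrative only)."""
    target = 1.0 - cl
    return brentq(lambda s: poisson.cdf(n_obs, background + s) - target, 0.0, 100.0)

# Hypothetical example: zero on-time events observed over a small expected
# atmospheric-muon background of 0.5 events.
print(poisson_upper_limit(n_obs=0, background=0.5, cl=0.90))   # ~1.8 signal events
```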