Abstract:
ESSP 660 Advanced Watershed Science and Policy is a graduate class taught in the Master of Science in Coastal and Watershed Science & Policy program at California State University Monterey Bay. In 2007, the class was taught in four 4-week modules, each focusing on making a small contribution to a local watershed issue. This report describes the results of one of those modules, on Carmel Lagoon Water Quality and Ecology. The module was led by instructors Fred Watson (CSUMB) and Kevan Urquhart (MPWMD). (Document contains 54 pages)
Abstract:
Variable watermilfoil (Myriophyllum heterophyllum Michx.) has recently become a problem in Bashan Lake, East Haddam, CT, USA. By 1998, approximately 4 ha of the 110 ha lake was covered with variable watermilfoil. In 1999, the milfoil was spot treated with Aquacide®, an 18% active ingredient of the sodium salt of 2,4-D [(2,4-dichlorophenoxy) acetic acid], applied at a rate of 114 kg/ha. Aquacide® was used because labeling regarding domestic water intakes and irrigation limitations prevented the use of Navigate® or AquaKleen®, a 19% active ingredient of the butoxyethyl ester of 2,4-D. Variable watermilfoil was partially controlled in shallow protected coves, but little control occurred in deeper, more exposed locations. 2,4-D levels in the treatment sites were lower than desired and offsite dilution was rapid. In 2000, the United States Environmental Protection Agency (USEPA) issued a special local need (SLN) registration to allow the use of Navigate® or AquaKleen® in lakes with potable and irrigation water intakes. Navigate® was applied at a rate of 227 kg/ha to the same areas as treated in 1999. An additional 2 ha of variable watermilfoil was treated with Navigate® in 2001, and 0.4 ha was treated in mid-September. Dilution of the 2,4-D ester formulation to untreated areas was slower than with the salt formulation. Concentrations of 2,4-D exceeded 1000 μg/L in several lake water samples in 2000 but not 2001. Nearly all of the treated variable watermilfoil was controlled in both years. The mid-September treatment appeared as effective as the spring and early summer treatments. Testing of homeowner wells in all 3 years found no detectable levels of 2,4-D. (PDF contains 8 pages.)
Abstract:
A study of aquatic plant biomass within Cayuga Lake, New York, spans twelve years, from 1987 to 1998. The exotic Eurasian watermilfoil (Myriophyllum spicatum L.) decreased in the northwest end of the lake from 55% of the total biomass in 1987 to 0.4% in 1998, and within the southwest end from 50% in 1987 to 11% in 1998. Concurrent with the watermilfoil decline was the resurgence of native species of submersed macrophytes. During this time we recorded for the first time in Cayuga Lake two herbivorous insect species: the aquatic moth Acentria ephemerella, first observed in 1991, and the aquatic weevil Euhrychiopsis lecontei, first found in 1996. Densities of Acentria in southwest Cayuga Lake averaged 1.04 individuals per apical meristem of Eurasian watermilfoil for the three-year period 1996-1998. These same meristems had Euhrychiopsis densities averaging only 0.02 individuals per apical meristem over the same three-year period. A comparison of herbivore densities and lake sizes from five lakes in 1997 shows that Acentria densities correlate positively with lake surface area and mean depth, while Euhrychiopsis densities correlate negatively with lake surface area and mean depth. In these five lakes, Acentria densities correlate negatively with percent composition and dry mass of watermilfoil. However, Euhrychiopsis densities correlate positively with percent composition and dry mass of watermilfoil. Finally, Acentria densities correlate negatively with Euhrychiopsis densities, suggesting interspecific competition.
Abstract:
Jisc is sponsoring the innovation technology excellence category in the inaugural Herald Higher Education Awards, the first awards to recognise best practice specifically in Scottish higher education. Jason Miles-Campbell, head of Jisc Scotland and Jisc Northern Ireland, tells us about the awards and how to enter.
Abstract:
As coastal destinations continue to grow, due to tourism and residential expansion, the demand for public beach access and related amenities will also increase. As a result, agencies that provide beach access and related amenities face challenges: residents and visitors both use beaches, yet they likely possess different needs, as well as different preferences for management decisions. Being a resident of a coastal county provides more opportunity to use local beaches, but coastal tourism is an important and growing economic engine in coastal communities (Kriesel, Landry, & Keeler, 2005; Pogue & Lee, 1999). Therefore, providing agencies with a comprehensive assessment of the differences between these two groups will increase the likelihood of effective management programs and policies for the provision of public beach access and related amenities. The purpose of this paper was to use a stated preference choice method (SPCM) to identify the extent of both residents' and visitors' preferences for public beach management options. (PDF contains 4 pages)
Abstract:
Pipes containing flammable gaseous mixtures may be subjected to internal detonation. When the detonation normally impinges on a closed end, a reflected shock wave is created to bring the flow back to rest. This study built on the work of Karnesky (2010) and examined deformation of thin-walled stainless steel tubes subjected to internal reflected gaseous detonations. A ripple pattern was observed in the tube wall for certain fill pressures, and a criterion was developed that predicted when the ripple pattern would form. A two-dimensional finite element analysis was performed using Johnson-Cook material properties; the pressure loading created by reflected gaseous detonations was accounted for with a previously developed pressure model. Residual plastic strains from experiments and computations were in good agreement.
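For reference, the Johnson-Cook model mentioned above expresses flow stress as the product of strain-hardening, strain-rate, and thermal-softening terms; a standard statement of the model (generic, with parameters A, B, C, n, m fitted per material — the thesis's specific values are not reproduced here) is

    \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C \ln \dot{\varepsilon}^{*}\right)\left(1 - T^{*\,m}\right)

where ε_p is the equivalent plastic strain, ε̇* the strain rate normalized by a reference rate, and T* = (T − T_room)/(T_melt − T_room) the homologous temperature.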
During the examination of detonation-driven deformation, discrepancies were discovered in our understanding of reflected gaseous detonation behavior. Previous models did not accurately describe the nature of the reflected shock wave, which motivated further experiments in a detonation tube with optical access. Pressure sensors and schlieren images were used to examine reflected shock behavior, and it was determined that the discrepancies were related to the thickness of the reaction zone behind the detonation front. During these experiments reflected shock bifurcation did not appear to occur, but the unfocused visualization system made certainty impossible. This prompted construction of a focused schlieren system to investigate possible shock wave-boundary layer interaction, and heat-flux gauges were used to analyze the boundary layer behind the detonation front. Using these data with an analytical boundary layer solution, it was determined that the strong thermal boundary layer present behind the detonation front inhibits the development of reflected shock wave bifurcation.
Abstract:
The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space, and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest-reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped out in 40-nm silicon) by calculating the minimum-energy operating point and quantifying the chip's robustness in the face of both timing and functional failures.
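As a rough sketch of what computing a minimum-energy operating point involves (a generic formulation, not the dissertation's specific model): the energy per operation trades switching energy against leakage integrated over the voltage-dependent cycle time,

    E_{\mathrm{op}}(V_{dd}) \;=\; \alpha\, C_{\mathrm{eff}}\, V_{dd}^{2} \;+\; I_{\mathrm{leak}}(V_{dd})\, V_{dd}\, T_{\mathrm{cycle}}(V_{dd}),

and the minimum-energy point is the supply voltage minimizing E_op. Lowering V_dd shrinks the quadratic switching term, but in subthreshold operation T_cycle grows roughly exponentially, so the leakage term eventually dominates and fixes a finite optimum.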
Abstract:
We simulate incompressible MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.
1) MHD turbulence is most conveniently described in terms of counter-propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.
2) MHD turbulence is anisotropic, with energy cascading more rapidly along k⊥ than along k∥, where k⊥ and k∥ refer to wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k⊥ such that excited modes are confined inside a cone bounded by k∥ ∝ k⊥^γ, where γ < 1. The opening angle of the cone, θ(k⊥) ∝ k⊥^−(1−γ), defines the scale-dependent anisotropy.
3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor θ²(k⊥) ≪ 1.
4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter-propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear, and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k⊥), which accounts for the dominance of the shear Alfvén waves in controlling the cascade dynamics.
5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.
6) Decaying MHD turbulence is unstable to an increase of the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t)-correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.
7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.
8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k⊥) ∝ k⊥^−α, determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.
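For context, the GS prediction γ = 2/3 cited in item (8) follows from a critical-balance argument; the standard one-line sketch (not specific to these simulations) equates the linear Alfvén time with the nonlinear cascade time:

    k_{\parallel} v_A \sim k_{\perp} v_{k_\perp}, \qquad v_{k_\perp} \propto k_\perp^{-1/3} \;\Rightarrow\; k_{\parallel} \propto k_\perp^{2/3}, \quad E(k_\perp) \propto k_\perp^{-5/3}.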
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.
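To make the sampling task concrete, the following is a brute-force classical illustration (ours, for exposition only): it prepares the amplitude vector induced by a function f, applies the quantum Fourier transform over Z_{2^n} via an FFT, and samples from the squared amplitudes. Its cost is exponential in n by construction; the thesis's claim is that no efficient classical counterpart should exist.

    import numpy as np

    def fourier_sample(f, n, rng=None):
        """Classically simulate QFT sampling over Z_{2^n}.

        Builds the normalized state sum_x f(x)|x>, applies the
        (unitary) Fourier transform, and draws one outcome from
        the squared amplitudes. Cost is O(2^n log 2^n).
        """
        rng = rng or np.random.default_rng()
        N = 2 ** n
        amps = np.array([f(x) for x in range(N)], dtype=complex)
        amps /= np.linalg.norm(amps)
        # numpy's FFT equals the QFT up to normalization convention.
        freq = np.fft.fft(amps) / np.sqrt(N)
        probs = np.abs(freq) ** 2
        probs /= probs.sum()  # guard against floating-point drift
        return rng.choice(N, p=probs)

    # Toy stand-in for an "efficiently computable" f: +/-1 bit parity.
    f = lambda x: (-1) ** bin(x).count("1")
    print(fourier_sample(f, n=8))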
Abstract:
Motivated by recent MSL results in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) the reduction in the number of equations that need to be solved to model a reacting flow; 2) the reduction in stiffness of the system of equations; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, along with the ability to use the same table for multiple simulations.
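For readers unfamiliar with RCCE, the core idea can be stated schematically (generic ideal-gas form; the specific constraints are those investigated in the thesis): the composition minimizes the Gibbs free energy subject to a small set of slowly varying linear constraints C_i = Σ_j a_ij N_j, yielding species mole numbers of the form

    N_j \;\propto\; \exp\!\Big(-\frac{\mu_j^{\circ}}{RT} \;-\; \sum_i \gamma_i\, a_{ij}\Big),

where the γ_i are constraint potentials (Lagrange multipliers). Only the few C_i require rate equations, which is the source of benefits (1) and (2) above, and properties can be tabulated against the C_i, which is the source of benefit (3).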
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures internal to a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
Methods that exploit the intrinsic locality of molecular interactions show significant promise in making tractable the electronic structure calculation of large-scale systems. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic density. In the following dissertation, methodological advances of embedded density functional theory are described, numerically tested, and applied to real chemical systems.
First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.
Next, we introduce density-embedding techniques to enable the accurate and stable calculation of correlated wavefunctions (CW) in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem for CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.
We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the non-additive exchange-correlation energy dominates. We develop an algorithm that corrects this term and demonstrate the accuracy of the corrected embedding scheme.
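The two embedding ingredients described above can be stated compactly (generic notation, ours): for subsystem densities ρ_A and ρ_B, the non-additive kinetic energy is

    T_s^{\mathrm{nad}}[\rho_A, \rho_B] \;=\; T_s[\rho_A + \rho_B] \;-\; T_s[\rho_A] \;-\; T_s[\rho_B],

and projection-based embedding avoids approximating its functional derivative by keeping the subsystem orbitals mutually orthogonal, for example by adding a level-shift term μP̂_B (with P̂_B the projector onto subsystem B's occupied orbitals and μ a large positive constant) to subsystem A's Kohn-Sham or Fock operator, so that the non-additive kinetic term vanishes by construction.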
Abstract:
This thesis investigates the historical, philosophical, and political dimensions of the concept of the common, through a problematization influenced by heterodox Marxist studies and the thought of Michel Foucault. The theoretical itinerary begins with an analysis of the tragedy-of-the-commons hypothesis, advanced by Garrett Hardin in a famous 1968 article in the journal Science. The subsequent development seeks to understand that formulation through Foucault's analyses of the liberal and neoliberal arts of government, with emphasis on the concepts of biopolitics and the production of subjectivity. This field of analysis is filled out by studies from the current known as bioeconomy, which seeks to interweave biopolitics with an understanding of present forms of capitalist crisis and accumulation. Drawing on research oriented toward the field defined as heterodox Marxism, the thesis studies the relation between the common and the new modes of primitive accumulation, observing how the former concept has progressively come to occupy this current of critical studies. In this domain, emphasis is placed on the conception of a primitive accumulation of the social and of subjectivity, based on studies of Karl Marx (Grundrisse), Antonio Negri, and Jason Read. The final chapter is devoted to the concept of the production of the common, taking as its point of departure the work of Jean-Luc Nancy and, above all, the investigations of Antonio Negri and Michael Hardt. The common appears as a central concept for understanding the biopolitical production of social wealth in contemporary capitalism, as well as its expropriation by new modes of accumulation. On the other hand, the common also emerges as an antagonism to capital and to the public-private dichotomy, pointing toward new ways of understanding communism.
Abstract:
Distribution, movements, and habitat use of small (<46 cm, juveniles and individuals of unknown maturity) striped bass (Morone saxatilis) were investigated with multiple techniques and at multiple spatial scales (surveys and tag-recapture in the estuary and ocean, and telemetry in the estuary) over multiple years to determine the frequency and duration of use of non-natal estuaries. These unique comparisons suggest, at least in New Jersey, that smaller individuals (<20 cm) may disperse from natal estuaries and arrive in non-natal estuaries early in life and take up residence for several years. During this period of estuarine residence, individuals spend all seasons primarily in the low salinity portions of the estuary. At larger sizes, they then leave these non-natal estuaries to begin coastal migrations with those individuals from nurseries in natal estuaries. These composite observations of frequency and duration of habitat use indicate that non-natal estuaries may provide important habitat for a portion of the striped bass population.
Abstract:
Assessing the vulnerability of stocks to fishing practices in U.S. federal waters was recently highlighted by the National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration, as an important factor to consider when 1) identifying stocks that should be managed and protected under a fishery management plan; 2) grouping data-poor stocks into relevant management complexes; and 3) developing precautionary harvest control rules. To assist the regional fishery management councils in determining vulnerability, NMFS elected to use a modified version of a productivity and susceptibility analysis (PSA) because it can be based on qualitative data, has a history of use in other fisheries, and is recommended by several organizations as a reasonable approach for evaluating risk. A number of productivity and susceptibility attributes for a stock are used in a PSA, and from these attributes index scores and measures of uncertainty are computed and graphically displayed. To demonstrate the utility of the resulting vulnerability evaluation, we evaluated six U.S. fisheries targeting 162 stocks that exhibited varying degrees of productivity and susceptibility, and for which data quality varied. Overall, the PSA was capable of differentiating the vulnerability of stocks along the gradient of susceptibility and productivity indices, although fixed thresholds separating low-, moderate-, and highly vulnerable species were not observed. The PSA can be used as a flexible tool that can incorporate region-specific information on fishery and management activity.
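As a schematic of how a PSA turns attribute scores into indices (the 1-3 scoring and the distance-from-corner vulnerability measure below follow common PSA practice, but the attribute sets, weights, and exact construction used by NMFS may differ):

    import math

    def psa_vulnerability(productivity_scores, susceptibility_scores):
        """Toy PSA index: average each attribute set (each attribute
        scored 1-3), then measure the Euclidean distance of (P, S)
        from the low-vulnerability corner (high P, low S)."""
        P = sum(productivity_scores) / len(productivity_scores)
        S = sum(susceptibility_scores) / len(susceptibility_scores)
        return math.hypot(3.0 - P, S - 1.0)  # larger = more vulnerable

    # Hypothetical stock: moderately productive, highly susceptible.
    print(psa_vulnerability([2, 3, 2, 1], [3, 3, 2, 3]))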
Abstract:
Endoparasitic helminths were inventoried in 483 American plaice (Hippoglossoides platessoides) collected from the southern Gulf of St. Lawrence, NAFO (North Atlantic Fisheries Organization) division 4T, and Cape Breton Shelf (NAFO subdivision 4Vn) in September 2004 and May 2003, respectively. Forward stepwise discriminant function analysis (DFA) of the 4T samples indicated that abundances of the acanthocephalans Echinorhynchus gadi and Corynosoma strumosum were significant in the classification of plaice to western or eastern 4T. Cross validation yielded a correct classification rate of 79% overall, thereby supporting the findings of earlier mark-recapture studies which have indicated that 4T plaice comprise two discrete stocks: a western and an eastern stock. Further analyses including 4Vn samples, however, indicated that endoparasitic helminths may have little value as tags in the classification of plaice overwintering in Laurentian Channel waters of the Cabot Strait and Cape Breton Shelf, where mixing of 4T and 4Vn fish may occur.
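For illustration, the classification step described above can be reproduced in miniature with scikit-learn's linear discriminant analysis; the parasite-abundance matrix below is fabricated purely to show the mechanics (the study's forward stepwise variable selection is not reproduced):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Rows: individual plaice; columns: abundances of E. gadi and
    # C. strumosum. Values are invented for demonstration only.
    X = np.array([[12, 1], [9, 2], [11, 0], [3, 6], [2, 5], [4, 7]])
    y = np.array(["west", "west", "west", "east", "east", "east"])

    # Cross-validated correct classification rate (the study
    # reported 79% overall on real data).
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=3)
    print("mean correct classification rate:", scores.mean())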