959 results for General Stores -- Ontario -- Stamford -- Sources


Relevance:

20.00%

Publisher:

Abstract:

This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the inability of the standard model to explain the generation of the baryon asymmetry in the universe, and from the absence of experimental evidence for proton decay. However, the lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of these models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on investigating models containing new scalars responsible for baryon number breaking. A model with new color sextet scalars is analyzed in more detail. Apart from generating cosmological baryon number, it gives nontrivial predictions for neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon number violating nonrenormalizable dimension-five operators triggering proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis is completed by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.

Relevance:

20.00%

Publisher:

Abstract:

General Relativity predicts the existence of gravitational waves, which carry information about the physical and dynamical properties of their source. One of the many promising sources of gravitational waves observable by ground-based instruments such as LIGO and Virgo is the coalescence of two compact objects (neutron stars or black holes). Black holes and neutron stars sometimes form binaries with short orbital periods, radiating so strongly in gravitational waves that they coalesce on astrophysically short timescales. General Relativity gives precise predictions for the form of the signal emitted by these systems. The most recent searches for these events used waveform models that neglected the effects of black hole and neutron star spin. However, real astrophysical compact objects, especially black holes, are expected to have large spins. We demonstrate here a data analysis infrastructure which achieves improved sensitivity to spinning compact binaries by including spin effects in the template waveforms. This infrastructure is designed for scalable, low-latency data analysis, ideal for rapid electromagnetic follow-up of gravitational wave events.
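
As a schematic illustration of the template-based matched filtering such searches rely on, here is a toy sketch in Python; it assumes white noise and a single known template, unlike the low-latency pipeline described above, and every name in it is our own.

```python
# Toy matched filter: peak signal-to-noise ratio of a known template in white
# noise. Real searches work with coloured detector noise and large template
# banks; this sketch only illustrates the basic correlation idea.
import numpy as np

def matched_filter_snr(data, template, noise_sigma):
    """Peak matched-filter SNR of `template` in `data`, assuming white noise."""
    corr = np.correlate(data, template, mode="full")   # <d|h> at every time shift
    norm = noise_sigma * np.linalg.norm(template)      # sqrt(<h|h>) in noise units
    return np.max(np.abs(corr)) / norm

# Usage: inject a weak frequency-sweeping burst into white noise and recover it.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
template = np.sin(2 * np.pi * (30.0 + 60.0 * t) * t) * np.hanning(t.size)
data = 0.5 * template + rng.normal(0.0, 1.0, t.size)
print(matched_filter_snr(data, template, noise_sigma=1.0))
```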

Relevance:

20.00%

Publisher:

Abstract:

Methods that exploit the intrinsic locality of molecular interactions show significant promise in making tractable the electronic structure calculation of large-scale systems. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic density. In the following dissertation, methodological advances of embedded density functional theory are described, numerically tested, and applied to real chemical systems.

First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.

Next, we introduce density-embedding techniques to enable the accurate and stable calculation of correlated wavefunctions (CW) in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem for CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.

We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We develop an algorithm that corrects this term and demonstrate the accuracy of the corrected embedding scheme.
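
As a rough sketch of the level-shift form of such a projection operator (a common way to enforce inter-subsystem orthogonality; the variable names below are hypothetical and this is not the dissertation's code):

```python
# Schematic level-shift projection embedding: keep the active subsystem's
# electrons orthogonal to the environment's occupied orbitals by shifting the
# environment's occupied space to high energy. Illustrative only.
import numpy as np

def embedded_fock(f_total, d_env, s_ao, mu=1.0e6):
    """Fock-like matrix for the active subsystem embedded in a frozen environment.

    f_total : Fock matrix of the full system in the AO basis
    d_env   : density matrix of the environment subsystem
    s_ao    : AO overlap matrix
    mu      : level-shift parameter (large and positive)
    """
    projector = s_ao @ d_env @ s_ao   # AO-basis projector onto the environment's occupied space
    return f_total + mu * projector
```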

Relevance:

20.00%

Publisher:

Abstract:

Oil and natural gas are non-renewable natural resources that account for a large share of the world energy matrix and show a growth trend in the Brazilian matrix, yet their regulatory framework is limited to defining technical and procedural criteria without incorporating the sustainability model established by the 1988 Federal Constitution. The finite nature of non-renewable resources such as oil and natural gas demands a long-term planning view of their exploitation when objectives and targets are defined. This long-term perspective reflects one of the concerns of sustainable development: guaranteeing the rights of future generations. Thus, by seeking to provide elements for translating the sustainable development model into the institutional and legal framework of the oil industry currently in force in Brazil, this work aims to contribute to the improvement of national petroleum regulation and to the quality of life of present and future generations. More than proposing a bill as a means of implementing a public policy, we want to help strengthen governmental practices and actions aimed at applying sustainable development, as proclaimed by the Brazilian Federal Constitution. Using a qualitative-quantitative methodology, we set out to demonstrate the thesis that it is possible to incorporate the constitutional principle of sustainable development into oil and natural gas exploration and production, by formulating a public policy that embeds in the oil ownership regime the environmental variable and the intergenerational use that had already been, and continue to be, applied to some renewable energy sources. We first trace the composition of the Brazilian energy matrix since oil became a matter of State in the 1950s. We then analyze the legal and doctrinal conceptions in order to propose the concept of a sustainable energy development model, which structures the proposal of a national policy for the oil industry. Based on this concept, we examine the current regulatory framework and institutional procedures to identify the gaps in the legal order to be filled by the proposed national policy. From the analysis of the legal and institutional contexts and of energy and environmental policies, we propose translating concepts, objectives, principles, and instruments into a bill for a National Policy for the Sustainable Use of Oil and Natural Gas Reserves (Política Nacional de Uso Sustentável das Reservas de Petróleo e Gás Natural). We conclude with general and specific considerations on the proposal formulated here, with a view to improving the national model of energy resource management and fostering discussions on the sustainability of public policies and of private practices rooted in the irrational exploitation of non-renewable resources.

Relevance:

20.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
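
Schematically, and in notation of our own choosing (E for the emission-level vector, C for control cost, Q_i for the air quality measures, S_i for the standards), the formulation reads:

```latex
\begin{aligned}
\min_{E}\;\; & C(E) && \text{(cost of attaining emission levels } E\text{)}\\
\text{subject to}\;\; & Q_i(E) \le S_i, \quad i = 1,\dots,m && \text{(air quality standards met)}\\
& 0 \le E \le E_{\mathrm{base}} && \text{(emissions at or below uncontrolled levels)}
\end{aligned}
```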

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that, with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the 1975 base level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
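
As a toy illustration of such a linear program (the per-measure costs and reduction factors below are invented for illustration and are not the thesis's data), one could write:

```python
# Toy least-cost control problem in the spirit of the linear program described
# above. Costs and reduction factors are invented; only the base emission levels
# quoted above are reused.
import numpy as np
from scipy.optimize import linprog

# Decision variable x[j]: fraction of control measure j applied (used cars,
# aircraft, existing stationary sources), each between 0 and 1.
cost    = np.array([30.0, 12.0, 55.0])    # annualized cost, $M/yr, at full application
rhc_cut = np.array([150.0, 40.0, 120.0])  # RHC reduction, tons/day, at full application
nox_cut = np.array([90.0, 10.0, 200.0])   # NOx reduction, tons/day, at full application

base_rhc, base_nox     = 670.0, 790.0     # 1975 base emissions quoted above (tons/day)
target_rhc, target_nox = 450.0, 550.0     # example emission targets (tons/day)

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub; the required
# reductions are written as "-(reduction achieved) <= -(reduction needed)".
res = linprog(
    c=cost,
    A_ub=np.vstack([-rhc_cut, -nox_cut]),
    b_ub=np.array([-(base_rhc - target_rhc), -(base_nox - target_nox)]),
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)  # optimal application levels and minimum annualized cost
```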

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of two separate parts. Part I (Chapter 1) is concerned with seismotectonics of the Middle America subduction zone. In this chapter, stress distribution and Benioff zone geometry are investigated along almost 2000 km of this subduction zone, from the Rivera Fracture Zone in the north to Guatemala in the south. Particular emphasis is placed on the effects on stress distribution of two aseismic ridges, the Tehuantepec Ridge and the Orozco Fracture Zone, which subduct at seismic gaps. Stress distribution is determined by studying seismicity distribution, and by analysis of 190 focal mechanisms, both new and previously published, which are collected here. In addition, two recent large earthquakes that have occurred near the Tehuantepec Ridge and the Orozco Fracture Zone are discussed in more detail. A consistent stress release pattern is found along most of the Middle America subduction zone: thrust events at shallow depths, followed down-dip by an area of low seismic activity, followed by a zone of normal events at over 175 km from the trench and 60 km depth. The zone of low activity is interpreted as showing decoupling of the plates, and the zone of normal activity as showing the breakup of the descending plate. The portion of subducted lithosphere containing the Orozco Fracture Zone does not differ significantly, in Benioff zone geometry or in stress distribution, from adjoining segments. The Playa Azul earthquake of October 25, 1981, Ms = 7.3, occurred in this area. Body and surface wave analysis of this event shows a simple source with a shallow thrust mechanism and gives Mo = 1.3x10^27 dyne-cm. A stress drop of about 45 bars is calculated; this is slightly higher than that of other thrust events in this subduction zone. In the Tehuantepec Ridge area, only minor differences in stress distribution are seen relative to adjoining segments. For both ridges, the only major difference from adjoining areas is the infrequency or lack of occurrence of large interplate thrust events.

Part II involves upper mantle P wave structure studies, for the Canadian shield and eastern North America. In Chapter 2, the P wave structure of the Canadian shield is determined through forward waveform modeling of the phases Pnl, P, and PP. Effects of lateral heterogeneity are kept to a minimum by using earthquakes just outside the shield as sources, with propagation paths largely within the shield. Previous mantle structure studies have used recordings of P waves in the upper mantle triplication range of 15-30°; however, the lack of large earthquakes in the shield region makes compilation of a complete P wave dataset difficult. By using the phase PP, which undergoes triplications at 30-60°, much more information becomes available. The WKBJ technique is used to calculate synthetic seismograms for PP, and these records are modeled almost as well as the P. A new velocity model, designated S25, is proposed for the Canadian shield. This model contains a thick, high-Q, high-velocity lid to 165 km and a deep low-velocity zone. These features combine to produce seismograms that are markedly different from those generated by other shield structure models. The upper mantle discontinuities in S25 are placed at 405 and 660 km, with a simple linear gradient in velocity between them. Details of the shape of the discontinuities are not well constrained. Below 405 km, this model is not very different from many proposed P wave models for both shield and tectonic regions.

Chapter 3 looks in more detail at recordings of Pnl in eastern North America. First, seismograms from four eastern North American earthquakes are analyzed, and seismic moments for the events are calculated. These earthquakes are important in that they are among the largest to have occurred in eastern North America in the last thirty years, yet in some cases were not large enough to produce many good long-period teleseismic records. A simple layer-over-a-halfspace model is used for the initial modeling, and is found to provide an excellent fit for many features of the observed waveforms. The effects on Pnl of varying lid structure are then investigated. A thick lid with a positive gradient in velocity, such as that proposed for the Canadian shield in Chapter 2, will have a pronounced effect on the waveforms, beginning at distances of 800 or 900 km. Pnl records from the same eastern North American events are recalculated for several lid structure models, to survey what kinds of variations might be seen. For several records it is possible to see likely effects of lid structure in the data. However, the dataset is too sparse to make any general observations about variations in lid structure. This type of modeling is expected to be important in the future, as the analysis is extended to more recent eastern North American events, and as broadband instruments make more high-quality regional recordings available.

Relevance:

20.00%

Publisher:

Abstract:

In the current context of economic crisis, transformation of production models, and globalization, research, development, and innovation (R&D&I) have become essential factors for the growth of countries and for the leadership and survival of firms. In this search for novelty, given the constant technological adaptation that firms must undertake in today's market, technological innovation, that is, innovation in products or processes, is of paramount importance. Moreover, while possessing strategic information is in general essential for firms if they are not to lose competitiveness, it is even more crucial for carrying out innovation activities. Strategic information can be obtained from different sources, and the importance attached to each of them varies. Since the aim of this work is to examine that relationship, we can conclude, on the one hand, that firms that innovate in products or processes attach the greatest importance, when carrying out technological innovations, to information obtained from within the firm itself; and, on the other hand, that little importance is given to information obtained from public institutions, that is, from universities or higher education centres, public research organizations, or technology centres.

Relevance:

20.00%

Publisher:

Abstract:

A qualitative study, based on Content Analysis, starting from the question of how drug users undergoing treatment for chemical dependency perceive themselves as workers and how they relate to the world of work. The theoretical framework rested on the definitions of psychotropic drugs, chemical dependency, and work drawn, respectively, from the United Nations (UN), the International Classification of Diseases (ICD-10), and the Ministry of Labour (MT), together with the theory of the Psychodynamics of Work. The methodology was based on Thematic Content Analysis. Data were collected through in-depth interviews and by obtaining sociodemographic and work data from the records. Thirty-eight semi-structured interviews were conducted with ten women, fourteen men who used multiple drugs, and fourteen men who used only alcoholic beverages. All subjects were workers undergoing treatment at the Psychosocial Care Centre for Alcohol and Drugs, Centro Estadual de Tratamento e Reabilitação de Adictos (Caps-ad CENTRA-RIO). Six categories were organized and analyzed: reconciling work and drug use; work and anguish; the world of work favouring/encouraging use; benefits and the drug user; full work; and the drug user's life prospects. We discuss the intense suffering that permeates the drug-using worker's life at all times, the various alternatives tried in order to reconcile the work-drugs binomial, the submission to the values built in the work environment, the role of drugs as an intragroup integrator of workers and as a therapeutic aid in coping with real work, and their relaxing function in taming anguish and fear. In the conclusions, we discuss the limits of the world of work in keeping pace with developments in the mental health field and the Psychiatric Reform, and the importance of these findings for occupational nursing. Nurses should act as articulators so that the drug-using worker becomes an active protagonist of his or her own history, contributing his or her experience so that health professionals, the workplace, and society at large can improve forms of comprehensive care.

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of two parts. In Part I, we develop a multipole moment formalism in general relativity and use it to analyze the motion and precession of compact bodies. More specifically, the generic, vacuum, dynamical gravitational field of the exterior universe in the vicinity of a freely moving body is expanded in positive powers of the distance r away from the body's spatial origin (i.e., in the distance r from its timelike-geodesic world line). The expansion coefficients, called "external multipole moments,'' are defined covariantly in terms of the Riemann curvature tensor and its spatial derivatives evaluated on the body's central world line. In a carefully chosen class of de Donder coordinates, the expansion of the external field involves only integral powers of r ; no logarithmic terms occur. The expansion is used to derive higher-order corrections to previously known laws of motion and precession for black holes and other bodies. The resulting laws of motion and precession are expressed in terms of couplings of the time derivatives of the body's quadrupole and octopole moments to the external moments, i.e., to the external curvature and its gradient.

In Part II, we study the interaction of magnetohydrodynamic (MHD) waves in a black-hole magnetosphere with the "dragging of inertial frames" effect of the hole's rotation, i.e., with the hole's "gravitomagnetic field." More specifically: we first rewrite the laws of perfect general relativistic magnetohydrodynamics (GRMHD) in 3+1 language in a general spacetime, in terms of quantities (magnetic field, flow velocity, ...) that would be measured by the "fiducial observers" whose world lines are orthogonal to (arbitrarily chosen) hypersurfaces of constant time. We then specialize to a stationary spacetime and MHD flow with one arbitrary spatial symmetry (e.g., the stationary magnetosphere of a Kerr black hole); and for this spacetime we reduce the GRMHD equations to a set of algebraic equations. The general features of the resulting stationary, symmetric GRMHD magnetospheric solutions are discussed, including the Blandford-Znajek effect in which the gravitomagnetic field interacts with the magnetosphere to produce an outflowing jet. Then, in a specific model spacetime with two spatial symmetries, which captures the key features of the Kerr geometry, we derive the GRMHD equations which govern weak, linearized perturbations of a stationary magnetosphere with an outflowing jet. These perturbation equations are then Fourier analyzed in time t and in the symmetry coordinate x, and subsequently solved numerically. The numerical solutions describe the interaction of MHD waves with the gravitomagnetic field. It is found that, among other features, when an oscillatory external force is applied to the region of the magnetosphere where plasma (e+e-) is being created, the magnetosphere responds especially strongly at a particular, resonant, driving frequency. The resonant frequency is that for which the perturbations appear to be stationary (time independent) in the common rest frame of the freshly created plasma and the rotating magnetic field lines. The magnetosphere of a rotating black hole, when buffeted by nonaxisymmetric magnetic fields anchored in a surrounding accretion disk, might exhibit an analogous resonance. If so, then the hole's outflowing jet might be modulated at resonant frequencies ω = (m/2) ΩH, where m is an integer and ΩH is the hole's angular velocity.
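
Read schematically, and assuming the standard Blandford-Znajek field-line angular velocity Ω_F ≈ Ω_H/2 (an assumption of ours, not stated above), the quoted resonance condition follows from requiring a perturbation proportional to e^{i(mφ - ωt)} to be stationary in the frame co-rotating with the field lines:

```latex
\omega - m\,\Omega_F = 0,
\qquad
\Omega_F \simeq \tfrac{1}{2}\,\Omega_H
\quad\Longrightarrow\quad
\omega = \frac{m}{2}\,\Omega_H .
```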

Relevance:

20.00%

Publisher:

Abstract:

Herein are described the total syntheses of all members of the transtaganolide and basiliolide natural product family. Utilization of an Ireland–Claisen rearrangement/Diels–Alder cycloaddition cascade (ICR/DA) allowed for rapid assembly of the transtaganolide and basiliolide oxabicyclo[2.2.2]octane core. This methodology is general and was applicable to all members of the natural product family.

A brief introduction outlines all the synthetic progress previously disclosed by Lee, Dudley, and Johansson. This also includes the initial syntheses of transtaganolides C and D, as well as basiliolide B and epi-basiliolide B, accomplished by Stoltz in 2011. Lastly, we discuss our racemic synthesis of basiliolide C and epi-basiliolide C, which utilized an ICR/DA cascade to construct the oxabicyclo[2.2.2]octane core and a formal [5+2] annulation to form the ketene acetal-containing seven-membered C-ring.

Next, we describe a strategy for an asymmetric ICR/DA cascade by incorporation of a chiral silane directing group. This allowed for enantioselective construction of the C8 all-carbon quaternary center formed in the Ireland–Claisen rearrangement. Furthermore, a single hydride reduction and subsequent translactonization of a C4 methyl ester-bearing oxabicyclo[2.2.2]octane core demonstrated a viable strategy for the desired skeletal rearrangement to obtain the pentacyclic transtaganolides A and B. Application of the asymmetric strategy culminated in the total syntheses of (–)-transtaganolide A, (+)-transtaganolide B, (+)-transtaganolide C, and (–)-transtaganolide D. Comparison of the optical rotation data of the synthetically derived transtaganolides with those of their isolated counterparts has overarching biosynthetic implications, which are discussed.

Lastly, improvement to the formal [5+2] annulation strategy is described. Negishi cross-coupling of methoxyethynyl zinc chloride using a palladium Xantphos catalyst is optimized for iodo-cyclohexene. Application of this technology to an iodo-pyrone geranyl ester allowed for formation and isolation of the enyne product. Hydration of the enyne product forms the natural metabolite basiliopyrone. Furthermore, the enyne product can undergo an ICR/DA cascade to form transtaganolides C and D in a single step from an achiral monocyclic precursor.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
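
To illustrate the basic ADI idea invoked above (linear cost per time step via one-dimensional tridiagonal solves), here is a minimal Peaceman-Rachford step for the two-dimensional heat equation; this is a standard textbook scheme offered only as a sketch, far simpler than the high-order BDF/ADI compressible Navier-Stokes solver the thesis describes, and every name in it is our own.

```python
# Minimal Peaceman-Rachford ADI step for u_t = nu * (u_xx + u_yy) on a uniform
# grid (spacing dx in both directions) with homogeneous Dirichlet boundaries;
# `u` holds interior values only. Each half step is implicit in one direction,
# so the cost per step is a set of tridiagonal solves -- linear in the unknowns.
import numpy as np
from scipy.linalg import solve_banded

def _tridiag_solve(r, rhs):
    """Solve (I - r*D2) x = rhs along axis 0, D2 = 1D Dirichlet second difference."""
    n = rhs.shape[0]
    ab = np.zeros((3, n))
    ab[0, 1:] = -r                # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r      # main diagonal
    ab[2, :-1] = -r               # subdiagonal
    return solve_banded((1, 1), ab, rhs)

def _d2x(u):
    """Second difference in x (axis 0), zero Dirichlet boundaries."""
    d = -2.0 * u
    d[1:, :] += u[:-1, :]
    d[:-1, :] += u[1:, :]
    return d

def _d2y(u):
    """Second difference in y (axis 1), zero Dirichlet boundaries."""
    d = -2.0 * u
    d[:, 1:] += u[:, :-1]
    d[:, :-1] += u[:, 1:]
    return d

def adi_heat_step(u, dt, dx, nu):
    """Advance the solution by one time step dt using two implicit half steps."""
    r = 0.5 * nu * dt / dx**2
    u_star = _tridiag_solve(r, u + r * _d2y(u))                    # implicit in x, explicit in y
    u_new = _tridiag_solve(r, (u_star + r * _d2x(u_star)).T).T     # implicit in y, explicit in x
    return u_new
```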