927 results for Conflicts of interest
Abstract:
The development of catalysts that selectively oligomerize light olefins for use in polymers and fuels remains of interest to the petrochemical and materials industries. For this purpose, two tantalum compounds, (FI)TaMe2Cl2 and (FI)TaMe4, implementing a previously reported phenoxy-imine (FI) ligand framework, have been synthesized and characterized by NMR spectroscopy and X-ray crystallography. When tested for ethylene oligomerization catalysis, (FI)TaMe2Cl2 was found to dimerize ethylene when activated with Et2Zn or EtMgCl, and (FI)TaMe4 dimerized ethylene when activated with B(C6F5)3, both at room temperature.
Abstract:
This thesis addresses the conflict of interest in the mixed-capital company (sociedade de economia mista). Such a company has conflict as an inherent element of its constitutive basis. Because it combines public and private capital, the problems that arise over the course of its existence are not easily resolved, given the possibility that the holder of controlling power may decide in favor of the public interest. It is with the aim of limiting the misuse of the public interest as a justification for the controlling shareholder's decisions that a paradigm shift is proposed. To that end, the thesis analyzes the role of the entrepreneur State in the current context of limited State intervention in the economy. It also addresses the way the Executive branch has been intervening in the market, so as to restrict free enterprise, at times even tainted by a degree of unconstitutionality. However, to prevent affronts to the Constitution regarding the State's exploitation of economic activity without observance of the constitutional limits imposed, the meta-interest is presented as a means of resolution. Since the meta-interest is the interest of the company itself, and considering that the public interest grounding the authorization to create the mixed-capital company is extinguished upon the company's creation, it follows that the norms that should govern mixed-capital companies are those of private law. Under the meta-interest, the State intervenes in the private sphere on equal terms with other companies, and can no longer rely on its position as majority shareholder to make decisions that conflict with the company's interest and that privilege the secondary public interest, or even the State's political interest, to the detriment of the corporate interest and of minority shareholders. In this way, the meta-interest aims to put an end to the conflicts over the application of legal norms and to the uncertainties surrounding the very nature of the mixed-capital company.
Abstract:
The aim of this study was to compare statistically the zooplankton assemblage ingested by brown trout (Salmo trutta) in Loch Ness with that of the zooplankton in the water column. This allowed examination of the apparent paradox that very few copepods appear to be consumed by trout at a time of year when they are numerous and readily available as food. The investigation was limited to the crustacean zooplankters, since the Rotifera are generally so small that they are of interest to fish only in the first few days of life. Twenty-five trout were obtained from anglers, and the stomach contents of the non-"ferox" animals were analysed. Samples of pelagic zooplankton were obtained approximately monthly from 30-m vertical net hauls (mesh size 100 μm). It is concluded that the variation in dietary composition with trout wet weight indicates an ontogenetic habitat shift producing spatial separation of young and older individuals.
Abstract:
The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-Rigidity and TOF-Rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.
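For context, rigidity-based mass identification rests on a standard relation not specific to this thesis: with rigidity R = pc/(Ze) and momentum p = γmβc, the mass follows as m = ZeR/(γβc^2), so an independent velocity measurement (Cherenkov light yield or time of flight) combined with the magnetically measured rigidity determines the mass.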
Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.
Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p=3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p=5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p=2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.
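As a rough consistency check (ours, not the thesis's): the raw instrument-level ratio in the lowest bin is 3/124405 ≈ 2.4x10^(-5), the same order as the corrected top-of-atmosphere value of 3.21x10^(-5), indicating that the combined background and loss corrections change the ratio by tens of percent rather than by orders of magnitude.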
The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
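In generic notation (the thesis's specific new rules are not reproduced here), a summation rule approximates the total energy of N atoms by a weighted sample: E = Σ_{i=1}^{N} E_i(u) ≈ Σ_{a∈S} w_a E_a(u_h), with Σ_{a∈S} w_a = N, where S is a small set of sampling atoms, w_a are the weights, and u_h is the coarse-grained (interpolated) displacement field; spurious forces are the artifacts that arise when this approximation is differentiated with respect to u_h.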
Abstract:
This account is based on limnological observations of the remarkable discharge of the Lunzer Obersee (1117 m). On its course towards the Mittersee, the discharge of the Lunzer Obersee takes on all the characteristics of spring water; it was therefore of interest to follow the change in the composition of the water along those stretches of the course where it flows below ground. For this purpose temperature recordings, chemical examination of the water, and a quantitative determination of the plankton were required. Zooplankton samples were taken at the discharge of the lake at different times of the year in 1954 and analysed. The significant loss of organisms along the course of the discharge is discussed.
Abstract:
The nuclear resonant reaction ^19F(p,αγ)^16O has been used to perform depth-sensitive analyses of fluorine in lunar samples and carbonaceous chondrites. The resonance at 0.83 MeV (center-of-mass) in this reaction is utilized to study fluorine surface films, with particular attention paid to the outer micron of Apollo 15 green glass, Apollo 17 orange glass, and lunar vesicular basalts. These results are distinguished from terrestrial contamination and are discussed in terms of a volcanic origin for the samples of interest. Measurements of fluorine in carbonaceous chondrites are used to better define the solar system fluorine abundance. A technique for the measurement of carbon on solid surfaces, with applications to direct quantitative analysis of implanted solar-wind carbon in lunar samples, is described.
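For context, resonant depth profiling converts incident beam energy to depth through the stopping power; in standard notation (not taken from the thesis), x(E0) = (E0 - E_R)/(dE/dx), where E0 is the incident proton energy, E_R the resonance energy, and dE/dx the stopping power of the sample, so raising the beam energy above the resonance probes progressively deeper layers.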
Abstract:
I. The attenuation of sound due to particles suspended in a gas was first calculated by Sewell and later by Epstein in their classical works on the propagation of sound in a two-phase medium. In their work, and in more recent works which include calculations of sound dispersion, the calculations were made for systems in which there was no mass transfer between the two phases. In the present work, mass transfer between phases is included in the calculations.
The attenuation and dispersion of sound in a two-phase condensing medium are calculated as functions of frequency. The medium in which the sound propagates consists of a gaseous phase, a mixture of inert gas and condensable vapor, which contains condensable liquid droplets. The droplets, which interact with the gaseous phase through the interchange of momentum, energy, and mass (through evaporation and condensation), are treated from the continuum viewpoint. Limiting cases, for flow either frozen or in equilibrium with respect to the various exchange processes, help demonstrate the effects of mass transfer between phases. Included in the calculation is the effect of thermal relaxation within droplets. Pressure relaxation between the two phases is examined, but is not included as a contributing factor because it is of interest only at much higher frequencies than the other relaxation processes. The results for a system typical of sodium droplets in sodium vapor are compared to calculations in which there is no mass exchange between phases. It is found that the maximum attenuation is about 25 per cent greater and occurs at about one-half the frequency for the case which includes mass transfer, and that the dispersion at low frequencies is about 35 per cent greater. Results for different values of latent heat are compared.
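For a single relaxation process with time constant τ, the generic result (not the thesis's detailed two-phase calculation) is that absorption per wavelength varies as αλ ∝ ωτ/(1 + ω^2τ^2), peaking near ωτ = 1; adding an exchange process such as mass transfer alters the effective relaxation, consistent with the shift of the attenuation maximum to roughly half the frequency reported above.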
II. In the flow of a gas-particle mixture through a nozzle, a normal shock may exist in the diverging section of the nozzle. In Marble’s calculation for a shock in a constant area duct, the shock was described as a usual gas-dynamic shock followed by a relaxation zone in which the gas and particles return to equilibrium. The thickness of this zone, which is the total shock thickness in the gas-particle mixture, is of the order of the relaxation distance for a particle in the gas. In a nozzle, the area may change significantly over this relaxation zone so that the solution for a constant area duct is no longer adequate to describe the flow. In the present work, an asymptotic solution, which accounts for the area change, is obtained for the flow of a gas-particle mixture downstream of the shock in a nozzle, under the assumption of small slip between the particles and gas. This amounts to the assumption that the shock thickness is small compared with the length of the nozzle. The shock solution, valid in the region near the shock, is matched to the well known small-slip solution, which is valid in the flow downstream of the shock, to obtain a composite solution valid for the entire flow region. The solution is applied to a conical nozzle. A discussion of methods of finding the location of a shock in a nozzle is included.
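The matching step uses the standard additive composite rule of matched asymptotic expansions, stated generically here: u_composite = u_shock + u_slip - u_common, where u_common is the shared limit of the shock (inner) and small-slip (outer) solutions in the overlap region, so that the composite reduces to each solution in its own domain of validity.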
Abstract:
My focus in this thesis is to contribute to a more thorough understanding of the mechanics of ice and deformable glacier beds. Glaciers flow under their own weight through a combination of deformation within the ice column and basal slip, which involves both sliding along and deformation within the bed. Deformable beds, which are made up of unfrozen sediment, are prevalent in nature and are often the primary contributors to ice flow wherever they are found. Their granular nature imbues them with unique mechanical properties that depend on the granular structure and hydrological properties of the bed. Despite their importance for understanding glacier flow and the response of glaciers to changing climate, the mechanics of deformable glacier beds are not well understood.
Our general approach to understanding the mechanics of bed deformation and their effect on glacier flow is to acquire synoptic observations of ice surface velocities and their changes over time and to use those observations to infer the mechanical properties of the bed. We focus on areas where changes in ice flow over time are due to known environmental forcings and where the processes of interest are largely isolated from other effects. To make this approach viable, we further develop observational methods that involve the use of mapping radar systems. Chapters 2 and 5 focus largely on the development of these methods and analysis of results from ice caps in central Iceland and an ice stream in West Antarctica. In Chapter 3, we use these observations to constrain numerical ice flow models in order to study the mechanics of the bed and the ice itself. We show that the bed in an Iceland ice cap deforms plastically and we derive an original mechanistic model of ice flow over plastically deforming beds that incorporates changes in bed strength caused by meltwater flux from the surface. Expanding on this work in Chapter 4, we develop a more detailed mechanistic model for till-covered beds that helps explain the mechanisms that cause some glaciers to surge quasi-periodically. In Antarctica, we observe and analyze the mechanisms that allow ocean tidal variations to modulate ice stream flow tens of kilometers inland. We find that the ice stream margins are significantly weakened immediately upstream of the area where ice begins to float and that this weakening likely allows changes in stress over the floating ice to propagate through the ice column.
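For context, subglacial till is commonly described as a Mohr-Coulomb (plastic) material; in the standard form (stated here for orientation, not necessarily the exact law used in the thesis), τ_b = c + (σ_n - p_w) tan φ, where τ_b is the bed strength, c the cohesion, σ_n the normal stress, p_w the pore-water pressure, and φ the friction angle. Surface meltwater reaching the bed raises p_w and thereby weakens the bed, which is the coupling invoked above.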
Abstract:
The Atyidae, a family of primitive prawns, are freshwater animals with a circumtropical distribution, and some have also penetrated temperate regions. An intriguing aspect of their distribution is that, although they are, and have long been, freshwater crustaceans, they have succeeded in colonizing many oceanic islands. The West Indies is an area of interest, as representatives of several genera sometimes occur on one island. For its size, Dominica is particularly rich in this respect. The fauna of the island includes the most primitive living atyid (a West Indian endemic, Xiphocaris elongata) and two representatives of the most advanced genus, Atya. Each of the other three species belongs to a separate genus. The feeding behaviour of the Dominican atyids is discussed in this article.
Abstract:
We propose a miniature pulse compressor that can be used to compensate the group-velocity dispersion produced by a commercial femtosecond laser cavity. The compressor is composed of two identical highly efficient deep-etched transmissive gratings. Compared with prism pairs, highly efficient deep-etched transmissive grating pairs are lightweight and small. With an optimized groove depth and duty cycle, 98% diffraction efficiency of the -1 transmissive order can be achieved at a wavelength of 800 nm under the Littrow condition. The deep-etched gratings are fabricated in fused silica by inductively coupled plasma etching. With a pair of the fabricated gratings, input positively chirped 73.9 fs pulses are neatly compressed into nearly Fourier-transform-limited 43.2 fs pulses. The miniature deep-etched grating-based pulse compressor should be of interest for practical applications. (c) 2008 Optical Society of America
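For orientation, the Littrow condition for the -1st order is 2d sin θ_L = λ. The grating period is not given in the abstract; assuming for illustration d = 625 nm (a 1600 line/mm grating), sin θ_L = 800/1250 = 0.64, giving θ_L ≈ 39.8° at 800 nm.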
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
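In the standard statement (Rotkowitz and Lall), a constraint set S is quadratically invariant under the plant block G22 if K G22 K ∈ S for every K ∈ S; under this condition the map Q = -K(I - G22 K)^(-1) satisfies K ∈ S if and only if Q ∈ S, so the information constraint becomes a convex constraint on the Youla-type parameter Q.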
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
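As a generic illustration of the nuclear norm as a convex surrogate for rank, the computational tool named above (not the thesis's actual transfer-function formulation), a minimal sketch in Python with cvxpy, in which all data and dimensions are synthetic:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)

    # Synthetic stand-in for "high-order but low-rank" global dynamics:
    # a 20x20 rank-2 matrix observed only on a random subset of entries.
    m, n, r = 20, 20, 2
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = (rng.random((m, n)) < 0.5).astype(float)

    # Minimize the nuclear norm (sum of singular values) subject to
    # agreement with the observed entries.
    X = cp.Variable((m, n))
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.multiply(mask, X) == mask * M])
    problem.solve()

    print("recovered rank:", np.linalg.matrix_rank(X.value, tol=1e-6))

The nuclear norm plays the role that the l1 norm plays for sparsity: among all matrices consistent with the observations, it favors the one of lowest rank, which is what makes the local/global separation described above a tractable convex program.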
Abstract:
Two topics in plane strain perfect plasticity are studied using the method of characteristics. The first is the steady-state indentation of an infinite medium by either a rigid wedge having a triangular cross section or a smooth plate inclined to the direction of motion. Solutions are exact, and results include deformation patterns and forces of resistance; the latter are also applicable to the case of incipient failure. Experiments on sharp wedges in clay, in which forces and deformations were recorded, showed good agreement with the mechanism of cutting assumed by the theory; on the other hand, the indentation process for blunt wedges transforms into one of compression, with a rigid part of the clay moving with the wedge. Finite element solutions for a bilinear material model were obtained to establish a correspondence between the response of the plane strain wedge and its axisymmetric counterpart, the cone. The results of the study afford a better understanding of the process of indentation of soils by penetrometers and piles, as well as the mechanism of failure of deep foundations (piles and anchor plates).
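For context, the method of characteristics in plane strain perfect plasticity rests on the standard Hencky relations along the two slip-line families (stated generically, not as the thesis's derivation): p + 2kφ = constant along α-lines and p - 2kφ = constant along β-lines, where p is the mean in-plane pressure, k the yield stress in shear, and φ the inclination of the slip lines.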
The second topic concerns the plane strain steady-state free rolling of a rigid roller on clays. The problem is solved approximately for small loads by obtaining exact solutions of two problems that bracket the one of interest: the first is a steady state with a geometry approximating that of the roller, and the second is an instantaneous solution of the rolling process that is not a steady state. Deformations and rolling resistance are derived. The rolling resistance was found to agree closely with existing empirical formulae.
Abstract:
The experimental portion of this thesis tries to estimate the density of the power spectrum of very low frequency semiconductor noise, from 10^(-6.3) cps to 1 cps, with a greater accuracy than that achieved in previous similar attempts: it is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but appearing to have a value of about 1 in the lowest decade. The noise sources are, among others, the first stage circuits of a grounded input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.
In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
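The factor of 10 follows from elementary averaging, sketched here: for N independent estimates P_i of the same spectrum with equal variance, Var((1/N) Σ P_i) = Var(P)/N, and since the variance of a spectral estimate scales roughly inversely with record length, reaching a fixed target variance with N = 10 sources requires about one tenth of the record per source.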
In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.
Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.
The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are the mathematical implications for its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman and Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
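The divergence can be seen from the low-frequency integral in the abstract's notation: ∫ from ε to f0 of k f^(-α) df equals k ln(f0/ε) for α = 1 and k(ε^(1-α) - f0^(1-α))/(α - 1) for α > 1, both unbounded as ε → 0+, which is why a low-frequency truncation of the assumed spectrum is required.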