917 results for State-space models


Relevance: 30.00%

Abstract:

The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: we introduce an empirical correction algorithm for unwanted interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In Chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw = 8.8 Maule (Chile) earthquake. The resulting analysis delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies the frictional properties associated with the inferred afterslip patches, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M = 8.1 earthquake appears to be more highly coupled.
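
The abstract does not spell out the correction algorithm; one widely used empirical strategy for InSAR tropospheric noise is a per-interferogram regression of phase against elevation over pixels presumed not to be deforming. The sketch below illustrates only that generic idea, not the thesis's method; the function name, the linear model, and the mask argument are illustrative assumptions.

```python
import numpy as np

def empirical_tropo_correction(phase, elevation, mask=None):
    """Remove a topography-correlated tropospheric delay from an
    unwrapped interferogram by fitting phase = a*elevation + b over
    (presumed) non-deforming pixels and subtracting the fitted ramp.

    phase, elevation : 2-D arrays of the same shape
    mask : optional boolean array selecting pixels used for the fit
    """
    if mask is None:
        mask = np.isfinite(phase) & np.isfinite(elevation)
    # Least-squares fit of a linear phase/elevation relation
    A = np.column_stack([elevation[mask], np.ones(mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    a, b = coeffs
    return phase - (a * elevation + b)
```

Real correction schemes are more elaborate (for example, they may estimate the relation per spatial band or jointly with the deformation model), but the phase-versus-elevation regression captures the basic empirical principle.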

Relevance: 30.00%

Abstract:

This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region of 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach to geospatial event detection. In contrast to traditional sensor networks comprising a small number of high-quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of this new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters of both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability by presenting rigorous, scalable algorithms for aggregating data for detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, the thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and detecting hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
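
As a toy instance of the kind of Bayesian detection the abstract describes (many cheap, individually unreliable sensors fused centrally), the following sketch computes a log-likelihood-ratio test over binary sensor reports. The error rates, names, and decision rule are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def log_likelihood_ratio(reports, p_detect, p_false):
    """Centralized fusion of binary sensor reports for event detection.

    reports : array of 0/1 readings from N sensors
    p_detect: P(report = 1 | event)
    p_false : P(report = 1 | no event)
    Returns log P(reports | event) / P(reports | no event); declare an
    event when this exceeds a threshold set by priors and error costs.
    """
    reports = np.asarray(reports)
    llr = np.where(
        reports == 1,
        np.log(p_detect / p_false),
        np.log((1 - p_detect) / (1 - p_false)),
    )
    return llr.sum()

# Example: 1000 cheap sensors, each barely better than a coin flip,
# still yield a confident collective decision during an event.
rng = np.random.default_rng(0)
reports = rng.random(1000) < 0.6        # readings while an event is present
print(log_likelihood_ratio(reports, p_detect=0.6, p_false=0.5) > 0)
```

The point of the toy is the one the abstract makes: detection quality scales with network size, so large networks of low-quality sensors can match small networks of expensive ones.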

Relevance: 30.00%

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. The concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to integrate the active materials tightly into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and to verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize the residual stresses that make the optical figure deviate from the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape-control applications.

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Relevance: 30.00%

Abstract:

Changes in internal states such as fear, hunger, and sleep affect behavioral responses in animals. In most cases, these state-dependent influences are “pleiotropic”: one state affects multiple sensory modalities and behaviors; “scalable”: the strength and choice of such modulations differ depending on the imminence of demands; and “persistent”: once the state is switched on, its effects last even after the internal demand has passed. These prominent features of state control enable animals to adjust their behavioral responses to their internal demands. Here, we studied the neuronal mechanisms of state control by investigating the energy-deprived (hunger) state and the socially deprived state of the fruit fly, Drosophila melanogaster, as prototypic models. To approach these questions, we developed two novel methods: a genetically based method to map sites of neuromodulation in the brain, and optogenetic tools for Drosophila.

These methods, together with genetic perturbations, reveal that the effect of hunger on behavioral sensitivity to gustatory cues is mediated by two distinct neuromodulatory pathways. The neuropeptide F (NPF)-dopamine (DA) pathway increases sugar sensitivity under mild starvation, while the adipokinetic hormone (AKH)-short neuropeptide F (sNPF) pathway decreases bitter sensitivity under severe starvation. These two pathways are recruited under different levels of energy demand, without cross-interaction. The effects of both pathways are mediated by modulation of the gustatory sensory neurons, which reinforces the concept that sensory neurons constitute an important locus for state-dependent control of behavior. Our data suggest that multiple independent neuromodulatory pathways underlie the pleiotropic and scalable effects of the hunger state.

In addition, using optogenetic tools, we show that the neural control of the male courtship song can be separated into probabilistic/biasing and deterministic/command-like components. The former, but not the latter, neurons are subject to functional modulation by social experience, supporting the idea that they constitute a locus of state-dependent influence. Moreover, brief activation of the former, but not the latter, neurons triggers a persistent behavioral response lasting more than 10 minutes. Altogether, these findings and the new tools described in this dissertation offer new entry points for future researchers seeking to understand the neuronal mechanisms of state control.

Relevance: 30.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust-level and thrust-angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open-loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution appears to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and compared to other guidance schemes of current interest.
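
The structure the abstract identifies (a Kalman filter in cascade with a bounded, bang-bang controller, evaluated by Monte Carlo) can be illustrated with a one-dimensional toy. All dynamics and parameters below are invented for illustration and are far simpler than the thesis's trajectory models.

```python
import numpy as np

# Toy Monte Carlo sketch: a scalar state driven by an Ornstein-Uhlenbeck
# (OU) disturbance, observed in noise, estimated by a Kalman filter, and
# steered by a bang-bang (saturated) controller acting on the estimate.
rng = np.random.default_rng(1)
dt, theta, sigma = 0.1, 0.5, 1.0      # OU mean reversion and noise (assumed)
R, u_max, n_steps = 0.5, 1.0, 500     # observation noise variance, thrust bound

x, x_hat, P = 0.0, 0.0, 1.0           # true state, estimate, estimate variance
Q = sigma**2 * dt                     # process noise variance per step
for _ in range(n_steps):
    u = -u_max * np.sign(x_hat)                    # bang-bang control law
    x += (-theta * x + u) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    z = x + np.sqrt(R) * rng.standard_normal()     # noisy observation
    # Kalman filter: predict with the same control, then update
    x_hat += (-theta * x_hat + u) * dt
    P = (1 - theta * dt) ** 2 * P + Q
    K = P / (P + R)
    x_hat += K * (z - x_hat)
    P *= (1 - K)
print(f"final estimate {x_hat:+.3f}, true state {x:+.3f}")
```

Repeating the loop over many random seeds and recording the terminal state gives a Monte Carlo estimate of the terminal variance, the quantity an MTV controller is designed to minimize.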

Relevance: 30.00%

Abstract:

Chapter I

Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), the DA crystals fall into two classes: the holoionic class, with a fully or almost fully ionic ground state, and the nonionic class, with little or no ionic character. If the total lattice binding energy 2ε_1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2ε_0 of ionizing each DA pair, i.e., if ε_1 + ε_0 < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed yet attributable to only a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.

Chapter II

A computer program, EWALD, was written to calculate, by the Ewald fast-convergence method, the crystal Coulomb binding energy E_C due to classical monopole-monopole interactions for crystals of any symmetry. The precision of the E_C values obtained is high: the uncertainties, estimated from the effect on E_C of changing the Ewald convergence parameter η, ranged from ±0.00002 eV to ±0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions; these charges were chosen from available theoretical and experimental estimates. The uncertainty in E_C due to different charge-distribution models is typically ±0.1 eV (±3%): thus, even the simple Hückel model can give decent results.

E_C for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. E_C for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.

EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan) crystal, E_C = -4.0 eV while 2ε_0 = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil) crystal, E_C = -4.4 eV while 2ε_0 = 5.0 eV: again E_C falls short of 2ε_0. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil), E_C = -4.5 eV and 2ε_0 = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene), E_C = -4.3 eV and 2ε_0 = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.
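
The EWALD program itself is not reproduced in the abstract. As a minimal illustration of the method it implements, the sketch below computes the classical Coulomb energy of point charges in a periodic cubic cell (Gaussian units) via Ewald's split into a short-range real-space sum and a long-range Fourier-space sum; the function and its cutoff parameters are mine, and the rock-salt sanity check recovers the familiar Madelung constant.

```python
import numpy as np
from scipy.special import erfc
from itertools import product

def ewald_energy(q, r, L, eta=None, n_real=2, n_recip=6):
    """Coulomb lattice energy (Gaussian units) of point charges q at
    Cartesian positions r in a cubic cell of side L, by Ewald summation."""
    q, r = np.asarray(q, float), np.asarray(r, float)
    V = L**3
    if eta is None:
        eta = 5.0 / L                       # convergence parameter (cf. η)
    # Real-space sum over periodic images within a cutoff shell
    E_real = 0.0
    for n in product(range(-n_real, n_real + 1), repeat=3):
        shift = L * np.array(n)
        for i in range(len(q)):
            for j in range(len(q)):
                if n == (0, 0, 0) and i == j:
                    continue                # skip self-interaction
                d = np.linalg.norm(r[i] - r[j] + shift)
                E_real += 0.5 * q[i] * q[j] * erfc(eta * d) / d
    # Reciprocal-space sum over nonzero wavevectors
    E_recip = 0.0
    for m in product(range(-n_recip, n_recip + 1), repeat=3):
        if m == (0, 0, 0):
            continue
        k = 2 * np.pi * np.array(m) / L
        k2 = k @ k
        S = np.sum(q * np.exp(1j * (r @ k)))    # structure factor
        E_recip += (2 * np.pi / V) * np.exp(-k2 / (4 * eta**2)) / k2 * abs(S)**2
    E_self = -eta / np.sqrt(np.pi) * np.sum(q**2)   # self-energy correction
    return E_real + E_recip + E_self

# Sanity check: rock-salt (NaCl) arrangement with unit nearest-neighbor
# distance; energy per ion pair should approach the Madelung constant.
q = [1, -1, -1, 1, -1, 1, 1, -1]
r = [[0,0,0],[.5,0,0],[0,.5,0],[.5,.5,0],[0,0,.5],[.5,0,.5],[0,.5,.5],[.5,.5,.5]]
print(ewald_energy(q, np.array(r) * 2.0, L=2.0) / 4)   # ≈ -1.7476
```

The Gaussian screening charge behind the erfc factor is the same "convergence acceleration" device that Chapter III generalizes to quantum-mechanical integrals.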

Chapter III

A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal; this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Simple formulas are then obtained for the total kinetic energy Q_λ^XT and for the nuclear attraction [λ/λ]^XT, direct Coulomb [λλ/λ'λ']^XT, and exchange Coulomb [λλ'/λ'λ]^XT integrals, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for [λ/λ]^XT, [λλ/λ'λ']^XT, and [λλ'/λ'λ]^XT with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier-transform integrals. If the short-range interactions are calculated by brute-force integration in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for [λ/λ]^XT, [λλ/λ'λ']^XT, and [λλ'/λ'λ]^XT. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor" and evaluating separately, in direct and in Fourier space, appropriate portions of [λ/λ]^XT, etc., where some of the portions contain the Gaussian factor.

Relevance: 30.00%

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of mathematical epidemic models. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all the nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, the Markov chain model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network. Convergence to the origin under this epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable, unique fixed point of the nonlinear model if the linear upper bound is stable; when the linear upper bound is unstable, the nonlinear model has a second fixed point. We carry out stability analysis of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we argue that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
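
A minimal sketch of the linear-upper-bound argument for a discrete-time SIS-type epidemic map follows; the network, rates, and update rule are illustrative stand-ins for the models analyzed in the dissertation, not its actual equations.

```python
import numpy as np

# State p[i] = marginal probability that node i is infected.
rng = np.random.default_rng(0)
n, beta, delta = 50, 0.05, 0.3                 # nodes, infection, recovery
A = (rng.random((n, n)) < 0.1).astype(float)   # random contact network
np.fill_diagonal(A, 0)

def epidemic_map(p):
    """One step of a nonlinear SIS-type map: stay infected without
    recovering, or become infected by at least one infected neighbor."""
    infect = 1 - np.prod(1 - beta * A * p, axis=1)
    return (1 - delta) * p + (1 - (1 - delta) * p) * infect

# Linearization at the origin: p <- ((1 - delta) I + beta A) p.
M = (1 - delta) * np.eye(n) + beta * A
rho = max(abs(np.linalg.eigvals(M)))
print("spectral radius of linear upper bound:", rho)

p = 0.5 * np.ones(n)
for _ in range(500):
    p = epidemic_map(p)
print("max infection probability after 500 steps:", p.max())
# If rho < 1 the linear bound is stable and p converges to the origin
# (extinction); if rho > 1 the iterates typically settle at a nonzero
# second fixed point, mirroring the dichotomy described above.
```
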

Relevance: 30.00%

Abstract:

The steady-state ion acceleration at the front of a cold solid target irradiated by a circularly polarized flat-top laser pulse is studied with one-dimensional particle-in-cell (PIC) simulation. A model in which ions are reflected by a steady laser-driven piston is adopted and compared with electrostatic shock acceleration. A stable profile with a double-flat-top structure in phase space forms after the ions enter the undisturbed region of the target with a constant velocity.
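
For orientation, a back-of-the-envelope version of the piston picture balances the radiation pressure on a perfectly reflecting target front against the momentum flux of the ions it sweeps up; ions reflected by the piston leave at twice its velocity. The numbers below are illustrative and are not the simulation parameters of the paper.

```python
import numpy as np

# Non-relativistic piston (hole-boring) estimate for a circularly
# polarized pulse perfectly reflected at the target front.
c = 3.0e8                      # m/s
m_p = 1.67e-27                 # kg, proton mass
I = 1.0e24                     # W/m^2  (10^20 W/cm^2), assumed
n_i = 1.0e29                   # m^-3   ion density, assumed

# Momentum balance at the piston: 2 I / c = 2 n_i m_i v_hb^2
v_hb = np.sqrt(I / (n_i * m_p * c))
E_ion = 0.5 * m_p * (2 * v_hb) ** 2     # ions reflected to 2 * v_hb
print(f"piston velocity: {v_hb / c:.3f} c")
print(f"reflected ion energy: {E_ion / 1.602e-13:.2f} MeV")
```

Ions all reflected from a steady piston share a single velocity, which is consistent with the flat-top structures in phase space that the simulation observes.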

Relevance: 30.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that characterize the abundance and properties of these first galaxies, and that use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work by Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshift in the observed near-infrared, in order to discern whether this evolution continues, or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 AA. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that so few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of wider Hubble Space Telescope surveys to construct luminosity functions at z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter, respectively, than any previous survey. With this increased depth, we achieve some of the most robust constraints on the Schechter-function faint-end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
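
For concreteness, the Schechter form and the sensitivity of the ionizing-photon budget to the faint-end integration limit can be sketched as follows. Only the faint-end slope alpha is taken from the abstract; phi* and M* are placeholder values of a plausible order of magnitude, not the thesis's fits.

```python
import numpy as np
from scipy.integrate import quad

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute-magnitude form:
    phi(M) dM = 0.4 ln(10) phi* x^(alpha+1) exp(-x) dM,
    with x = 10^(-0.4 (M - M*))."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

phi_star, M_star, alpha = 1e-3, -20.0, -1.87   # placeholders except alpha

def lum_density(M_faint):
    """UV luminosity density (in L* units per Mpc^3), integrating the
    luminosity-weighted Schechter function down to M_faint."""
    integrand = lambda M: schechter_mag(M, phi_star, M_star, alpha) \
                          * 10 ** (-0.4 * (M - M_star))
    return quad(integrand, -24.0, M_faint)[0]

# With a slope this steep, extending the limit from M_UV = -17 to -13
# substantially increases the budget of ionizing photons:
print(lum_density(-17.0), lum_density(-13.0))
```

This is exactly why the faint-end slope matters for reionization: for alpha near -2, faint galaxies below current detection limits contribute a large share of the total luminosity density.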

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission-line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 3 < z < 6 star-forming galaxies, already observed in the rest-UV with the Keck spectroscopic survey of Stark et al. (2010). We use a Bayesian fitting method that accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent-width probability distribution. We then apply these results to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission-line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 30.00%

Abstract:

A major part of the support for fundamental research on aquatic ecosystems continues to be provided by the Natural Environment Research Council (NERC). Funds are released for "thematic" studies in a selected special topic or programme. "Testable Models of Aquatic Ecosystems" was a Special Topic of the NERC, initiated in 1995, whose aim was to promote ecological modelling by making new links between experimental aquatic biologists and state-of-the-art modellers. The Topic covered both marine and freshwater systems. This paper summarises projects on the responses of individual organisms to environmental variability, on the assembly, permanence, and resilience of communities, and on aspects of spatial models. The authors conclude that the NERC Special Topic has been highly successful in promoting the development and application of models, most particularly through the interplay between experimental ecologists and formal modellers.

Relevance: 30.00%

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.
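
The dip that underlies such a visibility measurement is captured by the standard textbook Hong-Ou-Mandel model below. The Gaussian mutual-coherence envelope is an assumption, and this sketch is not the analysis pipeline used in the experiment.

```python
import numpy as np

def hom_coincidence(delay, visibility, tau_c):
    """Idealized Hong-Ou-Mandel dip: coincidence probability for two
    indistinguishable single particles on a 50/50 beamsplitter as a
    function of their relative delay, assuming a Gaussian mutual
    coherence envelope of width tau_c."""
    return 0.5 * (1 - visibility * np.exp(-(delay / tau_c) ** 2))

delays = np.linspace(-3, 3, 7)          # in units of the coherence time
print(hom_coincidence(delays, visibility=0.93, tau_c=1.0))
# At zero delay the coincidence rate drops to 0.5 * (1 - V); a classical
# description cannot push the rate below 0.25, so V > 0.5 (here 0.93,
# as reported above) signals genuinely quantum two-particle interference.
```
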

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance: 30.00%

Abstract:

An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, in which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed a collective effect, superradiance, mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.

Relevance: 30.00%

Abstract:

Brazilian social security, although it constitutes one of the oldest and most traditional models of social protection in Latin America, and is not far removed from the European models in its genesis, is going through difficult times. In a context of rapid population aging, accelerated decline in birth rates, and new realities of work in which salaried labor is losing ground, the traditional Bismarckian coverage model needs revision, not only to adapt to the new demographic premises but also to allow effective universal coverage. To that end, social justice in three dimensions (need, equality, and merit) is adopted as the foundation of a new model. The need dimension aims to guarantee any person, within the covered social needs, a minimum payment that secures an existential minimum. The equality dimension, in its material sense, aims to preserve a level of well-being compatible, to some extent, with that enjoyed during working life. Individual merit, in turn, implies providing higher benefits to those who consciously reduced present consumption, preserving part of their income for the future. In the proposed model, the first two dimensions are organized by the State in compulsory pillars financed predominantly on a pay-as-you-go basis; in the long run, this financing model has proved safer and more egalitarian than funded models. Demographic shifts can be accommodated through new retirement age limits and, especially, through incentives to raise birth rates, such as new social security services including daycare centers and preschools. The third pillar, founded on individual merit, is complementary pension provision, organized privately, autonomously, and voluntarily. Here, the suggested financing is capitalization (funding), so as to prioritize returns and efficiency, with positive externalities for the economy and society, the risk being assumed and acceptable given the subsidiary role of this protective pillar. In the proposed model, the state pillars will be financed exclusively by taxes, putting an end to social contributions, which lose their importance in a universal model of protection. Group solidarity is exchanged for social solidarity and, as a consequence, contributions go out and taxes come in. Even the second pillar, which aims at benefits correlated with earnings during working life, will be financed by an income tax surcharge: a simpler, more effective system that encourages people to formalize their income. The management of the social security model, in all segments, will rely on strong state regulation, but with effective participation of the stakeholders, keeping political interference and regulatory capture at bay as far as possible. Social security regulation, provided it is properly designed and enforced, will allow the proposed pillars to work in harmony.

Relevance: 30.00%

Abstract:

The principal aims of this thesis include the development of models of sublimation and melting from first principles and the application of these models to the rare gases.

A simple physical model is constructed to represent the sublimation of monatomic elements. According to this model, the solid and gas phases are two states of a single physical system. The nature of the phase transition is clearly revealed, and the relations between the vapor pressure, the latent heat, and the transition temperature are derived. The resulting theory is applied to argon, krypton, and xenon, and good agreement with experiment is found.

For the melting transition, the solid is represented by an anharmonic model and the liquid is described by the Percus-Yevick approximation. The behavior of the liquid at high densities is studied on the isotherms kT/ε = 1.3, 1.8, and 2.0, where k is Boltzmann's constant, T is the temperature, and ε is the well depth of the Lennard-Jones 12-6 pair potential. No solutions of the Percus-Yevick equation were found for ρσ³ above 1.3, where ρ is the particle density and σ is the radial parameter of the Lennard-Jones potential. The liquid structure is found to be very different from the solid structure near the melting line. The liquid pressures are about 50 percent low for experimental melting densities of argon. This discrepancy gives rise to melting pressures up to twice the experimental values.
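
Since the abstract defines ε and σ through the Lennard-Jones 12-6 pair potential, a short numerical statement of that potential and its reduced units may help:

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """Lennard-Jones 12-6 pair potential referenced above:
    V(r) = 4 * epsilon * ((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6**2 - sr6)

# In reduced units (epsilon = sigma = 1) the minimum sits at
# r = 2**(1/6) with depth -epsilon; kT/epsilon = 1.3-2.0 are the
# reduced temperatures of the isotherms studied, and rho*sigma**3
# is the reduced density whose limit of 1.3 is quoted above.
r = np.array([1.0, 2 ** (1 / 6), 1.5, 2.0])
print(lennard_jones(r, epsilon=1.0, sigma=1.0))
```
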

Relevance: 30.00%

Abstract:

The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong-lensing data from the Hubble Space Telescope (HST), and weak-lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration than analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c is the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
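
The spherical hydrostatic-equilibrium mass estimate that such cluster models build on can be written compactly, as in the sketch below; the profiles used here are a toy isothermal beta-model with invented numbers, not the thesis's parametric fits.

```python
import numpy as np

# Hydrostatic equilibrium for a spherical, thermally supported ICM:
#   M(<r) = -(k_B T(r) r / (G mu m_p)) * (dln n/dln r + dln T/dln r)
G, m_p, k_B, mu = 6.674e-11, 1.673e-27, 1.381e-23, 0.6

def hydrostatic_mass(r, n, T):
    """r in m, n (gas density) in m^-3, T in K; equal-length arrays.
    Returns the enclosed mass profile M(<r) in kg."""
    dln_n = np.gradient(np.log(n), np.log(r))
    dln_T = np.gradient(np.log(T), np.log(r))
    return -(k_B * T * r) / (G * mu * m_p) * (dln_n + dln_T)

# Toy isothermal beta-model: n ~ (1 + (r/rc)^2)^(-3*beta/2)
r = np.logspace(21, 22.5, 50)                 # ~0.03-1 Mpc, in meters
n = 1e4 * (1 + (r / 3e21) ** 2) ** -1.05      # beta = 0.7 (assumed)
T = np.full_like(r, 8e7)                      # ~7 keV, isothermal
print(f"M(<r_max) ≈ {hydrostatic_mass(r, n, T)[-1] / 2e30:.2e} M_sun")
```

Any nonthermal pressure (turbulence, bulk flows) adds support that this equation ignores, which is why comparing hydrostatic masses against lensing masses constrains the nonthermal pressure fraction.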

The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14-arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well suited for studies of dusty star-forming galaxies, galaxy clusters via the SZ effect, and galactic star formation.

MUSIC employs a number of novel detector technologies: broadband phased arrays of slot dipole antennas for beam formation, on-chip lumped-element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light into electrical signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and resonant frequency of the resonator, which is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing, combined with relatively simple fabrication, makes MKIDs a promising low-temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast, microsecond response time and modest energy resolution.

In order to optimize the MKID design for any particular application, it is critical to have a well-understood physical model for the detectors and for the sources of noise to which they are susceptible. MUSIC has collected many hours of on-sky data with over 1000 MKIDs, and this work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading; the measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz), while fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams; the electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
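
As a sketch of the common-mode removal idea described above (a template built from a weighted average of detector timestreams, then fit and subtracted per detector), consider the following; it is illustrative and considerably simpler than MUSIC's actual pipeline.

```python
import numpy as np

def remove_common_mode(timestreams, weights=None):
    """Remove a common-mode (e.g., atmospheric) template from an array of
    detector timestreams of shape (n_detectors, n_samples). The template
    is a weighted average over detectors; each detector's best-fit
    coupling to the template is then subtracted."""
    d = np.asarray(timestreams, float)
    if weights is None:
        weights = 1.0 / d.var(axis=1)            # downweight noisy detectors
    template = np.average(d, axis=0, weights=weights)
    template -= template.mean()
    # Per-detector least-squares coupling coefficient to the template
    coeffs = d @ template / (template @ template)
    return d - np.outer(coeffs, template)

# Toy data: a shared drifting "atmosphere" seen with detector-dependent
# gain, plus independent white noise per detector.
rng = np.random.default_rng(2)
atm = np.cumsum(rng.standard_normal(4096)) * 0.1
data = 0.5 + np.outer(1 + 0.2 * rng.standard_normal(100), atm) \
       + rng.standard_normal((100, 4096))
print(data.std(), remove_common_mode(data).std())
```

Because the atmospheric signal is (nearly) common to all detectors while the detector noise is independent, the weighted average isolates the atmosphere, and the per-detector fit accommodates gain differences, much as the per-detector coupling does in the pipeline described above.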