
Resumo:

This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problems of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of using a network representation to describe the market of interest.

In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model and find that players' equilibrium payoffs coincide with their centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium of our model converges to the so-called eigenvector centrality measure, and that the economic condition for convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that eigenvector centrality emerges as the limiting case of our market equilibrium.

We point out that the eigenvector approach is a way of identifying the most central or relevant players in terms of the "global" structure of the network, while paying less attention to patterns that are more "local". Mathematically, eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of the network. Our result may thus be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
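To make the limiting object concrete, here is a minimal sketch (a hypothetical five-player network, not data from the thesis) of computing eigenvector centrality by power iteration on the adjacency matrix, together with a Bonacich centrality vector of the usual form (I - beta*A)^(-1) A 1 for a decay factor beta below the reciprocal of the largest eigenvalue:

    import numpy as np

    # Hypothetical 5-player network (symmetric adjacency matrix).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)

    # Power iteration: repeatedly apply A and renormalize; the iterate
    # converges to the eigenvector of the largest eigenvalue of A.
    v = np.ones(A.shape[0])
    for _ in range(1000):
        v = A @ v
        v /= np.linalg.norm(v)

    # Bonacich centrality with a decay factor beta (illustrative value,
    # chosen below 1 / largest eigenvalue so the series converges).
    beta = 0.1
    b = np.linalg.solve(np.eye(5) - beta * A, A @ np.ones(5))

    print("eigenvector centrality:", np.round(v / v.sum(), 3))
    print("Bonacich centrality:   ", np.round(b, 3))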

As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers' and buyers' network positions.

Finally, in Chapter 3 we study price competition and free entry in networked markets subject to congestion effects. Congestion plays an important role in many environments, such as communication networks in which flows are allocated and transportation networks in which traffic is routed through the underlying road architecture. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices to maximize profits, while users minimize the total cost they face, given by the congestion cost plus the prices set by firms. In this environment we introduce the notion of Markovian traffic equilibrium and establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms on the latency functions; we derive explicit conditions that guarantee existence and uniqueness. Building on this result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry the number of firms exceeds the social optimum.
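As a hedged illustration of the kind of user problem described (a toy Wardrop-style split on two parallel links, not the Markovian traffic equilibrium model of the chapter), users divide demand so that latency plus price is equalized on every used link:

    # Minimal sketch (not the thesis model): two parallel links, each owned
    # by a firm charging a price; latency is linear in the link flow.
    # Users split total demand d so that congestion cost + price is equal
    # on both links whenever both links carry flow.

    def equilibrium_split(d, a1, b1, p1, a2, b2, p2):
        # Solve a1*x + b1 + p1 = a2*(d - x) + b2 + p2 for the flow x on link 1.
        x = (a2 * d + b2 + p2 - b1 - p1) / (a1 + a2)
        return min(max(x, 0.0), d)   # corner solution if one link is priced out

    x1 = equilibrium_split(d=1.0, a1=2.0, b1=0.0, p1=0.3, a2=1.0, b2=0.1, p2=0.5)
    print("flow on link 1:", round(x1, 3), "flow on link 2:", round(1.0 - x1, 3))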


Resumo:

In noncooperative cost-sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource, which depends on the set of agents choosing that resource, is distributed among them. The focus is on finding distribution rules that lead to stable allocations, where stability is formalized by the concept of Nash equilibrium; well-known examples are the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
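To fix ideas (a generic two-agent illustration with a hypothetical welfare function, not the paper's formal setup), the two canonical rules can be computed directly from their definitions:

    from itertools import permutations

    # Hypothetical welfare function on subsets of agents at one resource.
    def welfare(subset):
        values = {frozenset(): 0, frozenset({'a'}): 4, frozenset({'b'}): 3,
                  frozenset({'a', 'b'}): 9}
        return values[frozenset(subset)]

    agents = ['a', 'b']

    # Shapley value: average marginal contribution over all orderings
    # (budget-balanced: the shares sum to welfare(all agents)).
    def shapley(i):
        total = 0.0
        orderings = list(permutations(agents))
        for order in orderings:
            before = set(order[:order.index(i)])
            total += welfare(before | {i}) - welfare(before)
        return total / len(orderings)

    # Marginal contribution rule: pay each agent its marginal value with
    # respect to everyone else (not budget-balanced in general).
    def marginal_contribution(i):
        others = set(agents) - {i}
        return welfare(set(agents)) - welfare(others)

    for i in agents:
        print(i, round(shapley(i), 2), marginal_contribution(i))

In this toy example the Shapley shares (5 and 4) sum to the total welfare of 9, whereas the marginal-contribution shares (6 and 5) do not, illustrating the budget-balance distinction.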

Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules guaranteeing equilibrium existence in all welfare-sharing games are generalized weighted Shapley values (GWSVs); the proof exhibits a specific 'worst-case' welfare function that forces GWSV rules to be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function nor the restriction to budget balance that limits the design to GWSVs. Moreover, since GWSV rules result in (weighted) potential games, guaranteeing equilibrium existence requires working within the class of potential games.

We also provide an alternative characterization: all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result stems from a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose: the two are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can trade off budget balance against computational tractability in deciding which rule to implement.


Resumo:

Despite the complexity of biological networks, we find that certain common architectures govern network structure. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must still obey these constraints. One such constraining architecture is autocatalysis, seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints, and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness.

Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins; the higher the protein content of the ribosome, the stronger the autocatalysis. We show that this autocatalysis destabilizes the system, slows down its response, and also constrains its performance.

On a larger scale, the transcriptional regulation of whole organisms also follows architectural constraints, as can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then examine previously proposed evolutionary models and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems new nodes arise through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
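As a hedged illustration of the mixed growth mechanism mentioned above (a minimal sketch with hypothetical parameters, not the model actually fitted in the thesis), a transcription-network-like graph can be grown by combining duplication-divergence steps with random attachment standing in for horizontal gene transfer:

    import random

    def grow_network(n_nodes, p_duplicate=0.7, p_keep_edge=0.4, seed=0):
        # Start from a two-node seed graph; edges stored as a dict of neighbor sets.
        random.seed(seed)
        adj = {0: {1}, 1: {0}}
        for new in range(2, n_nodes):
            adj[new] = set()
            if random.random() < p_duplicate:
                # Duplication-divergence: copy a random node's edges,
                # keeping each inherited edge with probability p_keep_edge.
                parent = random.randrange(new)
                for nb in adj[parent]:
                    if random.random() < p_keep_edge:
                        adj[new].add(nb)
                        adj[nb].add(new)
            # "Horizontal transfer" stand-in: attach to one random existing node.
            target = random.randrange(new)
            adj[new].add(target)
            adj[target].add(new)
        return adj

    net = grow_network(500)
    degrees = sorted(len(v) for v in net.values())
    print("max degree:", degrees[-1], "median degree:", degrees[len(degrees) // 2])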


Resumo:

Home to hundreds of millions of souls and a land of extremes, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that, unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault (MHT) appear to be uniformly locked, devoid of any of the "creeping barriers" that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence we infer across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth must at some point propagate elastically all the way to the surface. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question hinges on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties. What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and to decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year by a tremendous amount of water during the annual summer monsoon, which collects on and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is restored on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond to the annual monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system with rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but they still await observational counterparts before they can be applied: nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked, seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and the monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault containing a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations reproduce a gradual amplification of sensitivity as the perturbing period gets longer, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like stress perturbation. This increase of sensitivity is not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, which is reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the resulting events can therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deducing constitutive properties of the MHT from seismological observations.
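To make the spring-and-slider picture above concrete, here is a minimal numerical sketch (illustrative parameter values, not those constrained for the MHT) of a rate-strengthening slider loaded by a steady plate rate plus a small annual-period stress oscillation; the slip rate oscillates around the plate rate with an amplitude that depends on the forcing period and the frictional parameters:

    import math

    # Rate-strengthening friction with no state variable:
    # tau = sigma * (mu0 + a * ln(v / v0)), so v = v0 * exp((tau/sigma - mu0) / a).
    sigma, mu0, a, v0 = 50e6, 0.6, 0.01, 1e-9   # Pa, -, -, m/s (illustrative)
    k = 1e6                                     # spring stiffness, Pa/m
    v_plate = 1e-9                              # steady loading rate, m/s
    dtau_amp, period = 1e3, 3.15e7              # periodic stress: 1 kPa, ~1 yr

    tau = sigma * mu0                           # start at steady state (v = v0)
    dt = period / 2000.0
    v_max = 0.0
    for step in range(20000):
        t = step * dt
        v = v0 * math.exp((tau / sigma - mu0) / a)
        # Spring loading plus the externally imposed periodic stressing rate.
        forcing_rate = dtau_amp * (2 * math.pi / period) * math.cos(2 * math.pi * t / period)
        tau += (k * (v_plate - v) + forcing_rate) * dt
        v_max = max(v_max, v)

    print("peak slip rate / plate rate:", round(v_max / v_plate, 3))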


Resumo:

Designing for all requires the adaptation and modification of current design best practices to encompass a broader range of user capabilities. This is particularly the case in the design of the human-product interface. Product interfaces exist everywhere, and when designing them there is a very strong temptation to jump to prescribing a solution with only a cursory attempt to understand the nature of the problem. This is particularly true when attempting to adapt existing designs, optimised for able-bodied users, for use by disabled users. However, such approaches have led to numerous products that are neither usable nor commercially successful. In order to develop a successful design approach it is necessary to consider the fundamental structure of the design process being applied. A three-stage design process development strategy, comprising problem definition, solution development and solution evaluation, should be adopted. This paper describes the development of a new design approach based on the application of usability heuristics to the design of interfaces. This is illustrated by reference to a case study of the re-design of a computer interface for controlling an assistive device.


Resumo:

This work is licensed under the Creative Commons Attribution 3.0 license.


Resumo:

In June 1994 and 1995, stations in the North Sea, Irish Sea, Celtic Sea and the Channel were surveyed for the occurrence of Myxobolus aeglefini in whiting (Merlangius merlangus). The disease was visible externally as white nodules of a few millimeters in diameter in the upper mouth cavity, on the gill arches and at the base of the pelvic fins, in severe cases also on the lower jaws or in the cornea and sclera of the eye. It was verified morphometrically, by the size and shape of spores, in histological sections of infected eyes. Myxobolus aeglefini was present at low prevalences at two North Sea stations and at high prevalences of up to 49% in the Irish Sea (Solway Firth) during both cruises. Whiting between 23 and 55 cm were found to be infected. Neither length- and age-specific prevalences nor condition factors and gonadosomatic, splenosomatic and hepatosomatic indices differed between diseased and healthy fish.


Resumo:

Assembling a nervous system requires exquisite specificity in the construction of neuronal connectivity. One method by which such specificity is implemented is the presence of chemical cues within the tissues, differentiating one region from another, and the presence of receptors for those cues on the surface of neurons and their axons that are navigating within this cellular environment.

Connections from one part of the nervous system to another often take the form of a topographic mapping. One widely studied model system that involves such a mapping is the vertebrate retinotectal projection: the set of connections between the eye and the optic tectum of the midbrain, which is the primary visual center in non-mammals and is homologous to the superior colliculus in mammals. In this projection the two-dimensional surface of the retina is mapped smoothly onto the two-dimensional surface of the tectum, such that light from neighboring points in visual space excites neighboring cells in the brain. This mapping is implemented at least in part via differential chemical cues in different regions of the tectum.

The Eph family of receptor tyrosine kinases and their cell-surface ligands, the ephrins, have been implicated in a wide variety of processes, generally involving cellular movement in response to extracellular cues. In particular, they possess expression patterns (complementary gradients of receptor in the retina and ligand in the tectum) and in vitro and in vivo activities and phenotypes (repulsive guidance of axons and defective mapping in mutants, respectively) consistent with the long-sought retinotectal chemical mapping cues.

The tadpole of Xenopus laevis, the South African clawed frog, is advantageous for in vivo retinotectal studies because of its transparency and manipulability. However, neither the expression patterns nor the retinotectal roles of these proteins have been well characterized in this system. We report here comprehensive descriptions in swimming stage tadpoles of the messenger RNA expression patterns of eleven known Xenopus Eph and ephrin genes, including xephrin-A3, which is novel, and xEphB2, whose expression pattern has not previously been published in detail. We also report the results of in vivo protein injection perturbation studies on Xenopus retinotectal topography, which were negative, and of in vitro axonal guidance assays, which suggest a previously unrecognized attractive activity of ephrins at low concentrations on retinal ganglion cell axons. This raises the possibility that these axons find their correct targets in part by seeking out a preferred concentration of ligands appropriate to their individual receptor expression levels, rather than by being repelled to greater or lesser degrees by the ephrins but attracted by some as-yet-unknown cue(s).


Resumo:

A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.

A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely-charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high molecular weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.

The effectiveness of the high-molecular-weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely-charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular-weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low-molecular-weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.

Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular-weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly-bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.

The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular weight dependent: the high-molecular-weight PEI species produce the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.

Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with turbidity measurements, a coalesced sphere model was used, since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced sphere model was not applicable to electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
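For context, in the anomalous diffraction regime the Mie extinction efficiency reduces to van de Hulst's closed-form expression; the short sketch below (illustrative sizes and relative refractive index, not values taken from the thesis) evaluates it for coalesced-sphere aggregates:

    import math

    def q_ext_anomalous(diameter_um, wavelength_um, m_rel):
        # van de Hulst anomalous diffraction approximation for the extinction
        # efficiency of a sphere whose relative refractive index m_rel is close to 1.
        x = math.pi * diameter_um / wavelength_um        # size parameter
        rho = 2.0 * x * (m_rel - 1.0)                    # phase shift parameter
        if rho == 0.0:
            return 0.0
        return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho**2) * (1.0 - math.cos(rho))

    # Example: coalesced-sphere aggregates of a few microns in visible light
    # (hypothetical relative refractive index of 1.05).
    for d in (1.0, 2.0, 5.0, 10.0):
        print(d, "um:", round(q_ext_anomalous(d, wavelength_um=0.55, m_rel=1.05), 3))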

PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI : 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied to water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.


Resumo:

This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundance of various isotopologues within a phase, for example the concentration of isotopologues with multiple rare isotopes (multiply substituted or 'clumped' isotopologues), also carries potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample's formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.

Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.

In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.
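The flavor of such a model can be sketched with generic first-order exchange kinetics and an Arrhenius rate constant (the parameter values and the functional form of the equilibrium curve below are hypothetical placeholders, not the calibration derived in this chapter):

    import math

    # Generic sketch: the departure of a clumped-isotope quantity D from its
    # equilibrium value D_eq(T) decays at an Arrhenius-type rate k(T).
    R = 8.314            # gas constant, J/(mol K)
    Ea = 180e3           # activation energy, J/mol (hypothetical)
    A = 1e9              # pre-exponential factor, 1/s (hypothetical)

    def d_eq(T_kelvin):
        # Placeholder equilibrium value decreasing with temperature (hypothetical form).
        return 0.04 * (300.0 / T_kelvin) ** 2

    def reorder(D0, t_T_path):
        # t_T_path: list of (duration_seconds, temperature_kelvin) segments;
        # within each isothermal segment the relaxation has an exact solution.
        D = D0
        for duration, T in t_T_path:
            k = A * math.exp(-Ea / (R * T))
            D = d_eq(T) + (D - d_eq(T)) * math.exp(-k * duration)
        return D

    yr = 3.15e7
    # Hypothetical burial history: 10 Myr at 350 K, then 50 Myr at 450 K.
    print(round(reorder(D0=0.045, t_T_path=[(1e7 * yr, 350.0), (5e7 * yr, 450.0)]), 4))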

In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.

The thesis then switches gears and turns its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its environmental and economic importance, much about methane's formation mechanisms and the relative contributions of different methane sources to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high- and low-temperature settings is given in Chapter 5.

In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.

In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, the calculated clumped isotope temperatures make geological sense as formation temperatures or as mixtures of high- and low-temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.


Resumo:

The consumer is the vulnerable party in international consumer relations. For the consumer, the process of globalization presents itself as a globalization of consumption. The globalization of consumption is characterized by the international trade and supply of products and services by transnational/global businesses and suppliers, using world-renowned brands accessible to consumers all over the planet, and it aggravates the consumer's vulnerability in the market. The legal protection of the international consumer is a need that national legal systems have not proved able to provide adequately, nor has International Law. This thesis demonstrates the shortcomings of legal science in protecting the consumer in the context of globalization; it shows how international trade itself is harmed by not giving absolute and effective priority to consumer protection within the WTO, and by remaining indifferent to the differing levels of protection afforded to consumers in each national legal system; it also shows how uniform, global consumer protection through a body of law common to all States is possible and would make the globalization of consumption economically more efficient by encouraging more intense participation of consumers in the international market; and it proposes the construction of a new branch of law devoted to the problem, International Consumer Law (Direito Internacional do Consumidor, DIC), through the elaboration of a Theory of International Consumer Law. International Consumer Law is intended to be a common and universal body of consumer protection law, founded on universal legal methods, concepts, institutions, norms and principles. The DIC will be in dialogue with other branches of public and private law, especially International Economic Law, International Trade Law, Private International Law, International Civil Procedure Law, and Consumer Law. The aim is thereby to serve the ideal of promoting free international trade with respect for Human Rights.


Resumo:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.

Motivated by these physical theories, and perhaps by their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that limit the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device or from those of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems with non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. Finally, we revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
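For reference, the classical fluctuation-dissipation theorem that this viewpoint echoes (the standard textbook statement, not a result specific to this thesis) relates the one-sided spectral density of the fluctuating force acting on a system to the dissipative part of its mechanical impedance, in direct analogy with Johnson-Nyquist noise:

    S_F(\omega) = 4\, k_B T \,\operatorname{Re} Z(\omega)

where Z(omega) is the impedance relating applied force to resulting velocity, k_B is Boltzmann's constant, and T is the temperature of the bath supplying the many hidden degrees of freedom.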


Resumo:

The Low Energy Telescopes on the Voyager spacecraft are used to measure the elemental composition (2 ≤ Z ≤ 28) and energy spectra (5 to 15 MeV/nucleon) of solar energetic particles (SEPs) in seven large flare events. Four flare events are selected which have SEP abundance ratios approximately independent of energy/nucleon. The abundances for these events are compared from flare to flare and are compared to solar abundances from other sources: spectroscopy of the photosphere and corona, and solar wind measurements.

The selected SEP composition results may be described by an average composition plus a systematic flare-to-flare deviation about the average. For each of the four events, the ratios of the SEP abundances to the four-flare average SEP abundances are approximately monotonic functions of nuclear charge Z in the range 6 ≤ Z ≤ 28. An exception to this Z-dependent trend occurs for He, whose abundance relative to Si is nearly the same in all four events.

The four-flare average SEP composition is significantly different from the solar composition determined by photospheric spectroscopy: The elements C, N and O are depleted in SEPs by a factor of about five relative to the elements Na, Mg, Al, Si, Ca, Cr, Fe and Ni. For some elemental abundance ratios (e.g. Mg/O), the difference between SEP and photospheric results is persistent from flare to flare and is apparently not due to a systematic difference in SEP energy/nucleon spectra between the elements, nor to propagation effects which would result in a time-dependent abundance ratio in individual flare events.

The four-flare average SEP composition is in agreement with solar wind abundance results and with a number of recent coronal abundance measurements. The evidence for a common depletion of oxygen in SEPs, the corona and the solar wind relative to the photosphere suggests that the SEPs originate in the corona and that both the SEPs and solar wind sample a coronal composition which is significantly and persistently different from that of the photosphere.


Resumo:

The access of 1.2-40 MeV protons and 0.4-1.0 MeV electrons from interplanetary space to the polar cap regions has been investigated with an experiment on board a low altitude, polar orbiting satellite (OGO-4).

A total of 333 quiet time observations of the electron polar cap boundary give a mapping of the boundary between open and closed geomagnetic field lines which is an order of magnitude more comprehensive than previously available.

Persistent features (north/south asymmetries) in the polar cap proton flux, which are established as normal during solar proton events, are shown to be associated with different flux levels on open geomagnetic field lines than on closed field lines. The pole in which these persistent features are observed is strongly correlated to the sector structure of the interplanetary magnetic field and uncorrelated to the north/south component of this field. The features were observed in the north (south) pole during a negative (positive) sector 91% of the time, while the solar field had a southward component only 54% of the time. In addition, changes in the north/south component have no observable effect on the persistent features.

Observations of events associated with co-rotating regions of enhanced proton flux in interplanetary space are used to establish the characteristics of the 1.2 - 40 MeV proton access windows: the access window for low polar latitudes is near the earth, that for one high polar latitude region is ~250 R behind the earth, while that for the other high polar latitude region is ~1750 R behind the earth. All of the access windows are of approximately the same extent (~120 R). The following phenomena contribute to persistent polar cap features: limited interplanetary regions of enhanced flux propagating past the earth, radial gradients in the interplanetary flux, and anisotropies in the interplanetary flux.

These results are compared to the particle access predictions of the distant geomagnetic tail configurations proposed by Michel and Dessler, Dungey, and Frank. The data are consistent with neither the model of Michel and Dessler nor that of Dungey. The model of Frank can yield a consistent access window configuration provided the following constraints are satisfied: the merging rate for open field lines at one polar neutral point must be ~5 times that at the other polar neutral point, related to the solar magnetic field configuration in a consistent fashion; the migration time for open field lines to move across the polar cap region must be the same in both poles; and the open field line merging rate at one of the polar neutral points must be at least as large as that required for almost all the open field lines to have merged within a time on the order of one hour. The possibility of satisfying these constraints is investigated in some detail.

The role played by interplanetary anisotropies in the observation of persistent polar cap features is discussed. Special emphasis is given to the problem of non-adiabatic particle entry through regions where the magnetic field is changing direction. The degree to which such particle entry can be assumed to be nearly adiabatic is related to the particle rigidity, the angle through which the field turns, and the rate at which the field changes direction; this relationship is established for the case of polar cap observations.


Resumo:

Primary Health Care (PHC) is recognized as the fundamental level and entry point of the health care system, the appropriate setting in which most health problems can be addressed and resolved. It is considered by the WHO to be the principal proposed model of care. This importance of PHC creates a need for evaluative research on its results, in order to adjust and improve the policies and action plans designed for it. International and national studies have been carried out in which indicators based on hospital activity are used to measure outcomes such as the effectiveness of and access to PHC. One of these indicators, developed by John Billings of New York University in the 1990s, is based on the conditions for which hospital admissions (Ambulatory Care Sensitive Conditions, ACSC) should be avoidable if PHC services were effective and accessible. Using the SIH-AIH/2008 hospital admissions database and the Brazilian list of admissions for Primary Care Sensitive Conditions published in 2008, the aim of the present work is to study primary health care on the basis of ACSC admissions in the urban area of the city of Juiz de Fora-MG. We sought to determine the effects on these admissions of patients' individual characteristics, of the characteristics of the Basic Health Units (UBS: infrastructure, output and models of care), and of the socioeconomic and environmental conditions of the areas covered by primary care units (UAPS) and of uncovered areas (without a UAPS), using multilevel logistic models with a random intercept. We also sought to describe the spatial distribution of the age-standardized ACSC admission rates in these areas and their associations with the contextual variables, using spatial analysis tools. The results show that 4.1% of admissions were for ACSC. The Family Health Strategy (ESF) and the traditional model of care, the basis of the organization of primary care in Brazil, showed no significant impact on ACSC admissions in the municipality; a significant effect appeared only in the form of uncovered areas, with covered areas as the reference. The UAPS infrastructure and output variables were also not significant. The individual effects (age and sex) on ACSC admissions were significant, with significance probabilities below 1%, as was the Social Development Index (IDS), which captures the social, economic and environmental conditions of the areas analyzed. The spatial distribution of the age-standardized rates showed a random pattern, and the Lagrange multiplier tests were not significant, indicating that the classical (OLS) regression model is adequate to explain the rates as a function of the contextual variables. In the joint analysis of covered and uncovered areas, the risk factors were the economic variable (% of households with income up to 2 minimum wages), uncovered areas (with covered areas as the reference), and the northeastern region of the municipality. For the covered areas, the UAPS output variables, the economic variable and the northeastern region appeared as risk factors for ACSC admission rates.
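For reference, a generic random-intercept multilevel logistic model of the kind referred to above (a textbook form, not the exact specification fitted in the study) for patient i in area j can be written as

    \operatorname{logit} \Pr(y_{ij} = 1) = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \mathbf{z}_{j}^{\top}\boldsymbol{\gamma} + u_j, \qquad u_j \sim \mathcal{N}(0, \sigma_u^2),

where x_ij collects individual covariates (age, sex), z_j collects area-level covariates (coverage, UAPS infrastructure and output, social development index), and the random intercept u_j captures unexplained variation between areas.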