961 results for Zero reference level


Relevance: 100.00%

Abstract:

We study an intertemporal asset pricing model in which a representative consumer maximizes expected utility derived from both the ratio of his consumption to some reference level and this level itself. If the reference consumption level is assumed to be determined by past consumption levels, the model generalizes the usual habit formation specifications. When the reference level growth rate is made dependent on the market portfolio return and on past consumption growth, the model combines a consumption CAPM with habit formation and the standard CAPM. It therefore provides, in an expected utility framework, a generalization of the non-expected recursive utility model of Epstein and Zin (1989). When we estimate this specification with aggregate per capita consumption, we obtain economically plausible values of the preference parameters, in contrast with the habit formation and Epstein-Zin cases taken separately. All tests performed with various preference specifications confirm that the reference level enters significantly in the pricing kernel.
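In schematic notation (a sketch of the setup described above, not the authors' exact specification), the preferences and the reference-level dynamics can be written as

\[
U_0=\mathbb{E}_0\!\left[\sum_{t\ge 0}\beta^{t}\,u\!\left(\frac{C_t}{X_t},\,X_t\right)\right],
\qquad
\frac{X_{t+1}}{X_t}=g\!\left(\frac{C_t}{C_{t-1}},\dots;\;R^{m}_{t+1}\right),
\]

where C_t is consumption, X_t the reference level and R^m the market portfolio return; when g depends only on past consumption growth the model reduces to habit formation, and the additional dependence on R^m is what mixes the consumption CAPM with the CAPM.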

Relevance: 100.00%

Abstract:

Human exposure to Bisphenol A (BPA) results mainly from ingestion of food and beverages. Information regarding BPA effects on colon cancer, one of the major causes of death in developed countries, is still scarce. Likewise, little is known about BPA drug interactions, although its potential role in doxorubicin (DOX) chemoresistance has been suggested. This study aims to assess potential interactions between BPA and DOX in HT29 colon cancer cells. HT29 cell response was evaluated after exposure to BPA, DOX, or co-exposure to both chemicals. Transcriptional analysis of several cancer-associated genes (c-fos, AURKA, p21, bcl-xl and CLU) shows that BPA exposure induces slight up-regulation of bcl-xl only, without affecting cell viability. On the other hand, a sub-therapeutic DOX concentration (40 nM) results in highly altered c-fos, bcl-xl and CLU transcript levels, and this is not affected by co-exposure with BPA. Conversely, DOX at a therapeutic concentration (4 μM) results in distinct and very severe transcriptional alterations of c-fos, AURKA, p21 and CLU that are counteracted by co-exposure with BPA, resulting in transcript levels similar to those of the control. Co-exposure with BPA slightly decreases apoptosis relative to DOX 4 μM alone without affecting DOX-induced loss of cell viability. These results suggest that BPA exposure can influence chemotherapy outcomes and therefore emphasize the need for a better understanding of BPA interactions with chemotherapeutic agents in the context of risk assessment.

Relevance: 100.00%

Abstract:

This thesis presents the results of an investigation of the limits of the random errors contained in the basic data of Physical Oceanography and of their propagation through the computational procedures. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors relevant to the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in Chapter 3. Chapter 4 discusses and derives the magnitude of the errors in the computation of the dependent oceanographic variables (in situ density, σt, specific volume and specific volume anomaly) due to the propagation of errors contained in the independent variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, are estimated and presented in Chapter 5. Chapter 6 reviews the existing methods for identifying the level of no motion and suggests a method for identifying a reliable zero reference level. Chapter 7 discusses the available methods for extending the zero reference level into shallow regions of the oceans and suggests a new, more reliable method. A procedure of graphical smoothing of dynamic topographies between the error limits, to provide more reliable results, is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic height, with reference to the selected zero reference level. The summary and conclusions are also presented in this chapter.
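As background for the error analysis of Chapters 2-5, the standard first-order propagation rule for independent random errors (a textbook relation, not quoted from the thesis) is: for a derived quantity y = f(x_1, ..., x_n),

\[
\sigma_y^{2}\;\approx\;\sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2}\sigma_{x_i}^{2},
\]

so the uncertainty in, for example, in situ density follows from the uncertainties in temperature, salinity and depth through the partial derivatives of the equation of state.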

Relevance: 90.00%

Abstract:

The authors investigated how different levels of detail (LODs) of a virtual throwing action can influence a handball goalkeeper's motor response. Goalkeepers attempted to stop a virtual ball emanating from five different graphical LODs of the same virtual throwing action. The five levels of detail were: a textured reference level (L0), a non-textured level (L1), a wire-frame level (L2), a point-light-display (PLD) representation (L3) and a PLD level with reduced ball size (L4). For each motor response made by the goalkeeper we measured and analyzed the time to respond (TTR), the percentage of successful motor responses, the distance between the ball and the closest limb (when the stopping motion was incorrect) and the kinematics of the motion. Results showed that the TTR, the percentage of successful motor responses and the distance to the closest limb did not differ significantly across the five graphical LODs. However, the kinematics of the motion revealed that the trajectory of the stopping limb differed significantly between the L1 and L3 levels and between the L1 and L4 levels. These differences in the control of the goalkeeper's actions suggest that the different levels of information available in the PLD representations (L3 and L4) cause the goalkeeper to adopt different motor strategies to control the approach of the limb to stop the ball.

Relevance: 90.00%

Abstract:

We present three new stable gravity inversion methods for estimating the relief of an arbitrary interface separating two media. To guarantee the stability of the solution, we introduce a priori information about the interface to be mapped through the minimization of one (or more) stabilizing functionals. The three methods therefore differ in the types of physical-geological information incorporated. In the first method, called global smoothness, the depths of the interface are estimated at discrete points, assuming a priori knowledge of the density contrast between the media. To stabilize the inverse problem we introduce two constraints: (a) proximity between the estimated and true depths of the interface at a few points provided by boreholes; and (b) proximity between depths estimated at adjacent points. The combination of these two constraints imposes a uniform smoothness on the entire estimated interface while simultaneously minimizing, at a few points, the misfits between the depths known from boreholes and those estimated at the same points. The second method, called weighted smoothness, estimates the depths of the interface at discrete points, also assuming a priori knowledge of the density contrast. This method incorporates the geological information that the interface is smooth except in regions of discontinuities produced by faults; that is, the interface is predominantly smooth but locally discontinuous. To incorporate this information, we developed an iterative process in which three types of constraints are imposed on the parameters: (a) weighted proximity between depths estimated at adjacent points; (b) lower and upper bounds on the depths; and (c) proximity between all estimated depths and a known numerical value. Starting from the solution estimated by the global smoothness method, this second method iteratively accentuates the geometric features present in the initial solution; that is, smooth regions of the interface tend to become smoother and abrupt regions tend to become more abrupt. To this end, the method assigns different weights to the proximity constraint between adjacent depths. These weights are automatically updated so as to accentuate the discontinuities subtly detected by the global smoothness solution. Constraints (b) and (c) are used to compensate for the loss of stability caused by introducing weights close to zero in some of the proximity constraints between adjacent parameters, and to incorporate the a priori information that the deepest region of the interface is flat and horizontal. Constraint (b) strictly imposes that any estimated depth is non-negative and smaller than the maximum interface depth known a priori; constraint (c) imposes that all estimated depths are close to a value that deliberately violates the maximum interface depth. The trade-off between the conflicting constraints (b) and (c) biases the final solution toward accentuating vertical discontinuities while producing a smooth, flattened estimate of the deepest region. The third method, called minimum moment of inertia, estimates the density contrasts of a subsurface region discretized into elementary prismatic volumes.
This method incorporates the geological information that the interface to be mapped bounds an anomalous source whose horizontal dimensions are larger than its largest vertical dimension, with borders dipping vertically or toward the center of mass, and that all the anomalous mass (or mass deficiency) is compactly concentrated around a reference level. Conceptually, this information is introduced by minimizing the moment of inertia of the sources with respect to the reference level known a priori. This minimization is performed in a parameter subspace consisting of compact sources with borders dipping vertically or toward the center of mass. In practice, this information is introduced through an iterative process that starts from a solution whose moment of inertia is close to zero and adds, at each iteration, a contribution with minimum moment of inertia with respect to the reference level, such that the new estimate honors the lower and upper bounds on the density contrast and simultaneously minimizes the misfits between the observed and fitted gravity data. In addition, the iterative process tends to "freeze" the estimates at one of the bounds (lower or upper). The final result is an anomalous source compacted around the reference level whose density-contrast distribution tends toward the upper bound (in absolute value) established a priori. These three methods were applied to synthetic and real data produced by the basement relief of sedimentary basins. Global smoothness produced a good reconstruction of the framework of basins that violate the smoothness condition, both for synthetic data and for data from the Recôncavo Basin. This method has the lowest resolution of the three. Weighted smoothness improved the resolution of basement reliefs presenting faults with large throws and steep dip angles, indicating great potential for interpreting the framework of extensional basins, as shown in tests with synthetic data and with data from Steptoe Valley, Nevada, USA, and from the Recôncavo Basin. In the minimum moment of inertia method, the mean terrain level was taken as the reference level. Applications to synthetic data and to the Bouguer anomalies of the San Jacinto Graben, California, USA, and of the Recôncavo Basin showed that, compared with the global and weighted smoothness methods, this method estimates faults with small throws at excellent resolution without imposing the restriction that the interface present few local discontinuities, as in the weighted smoothness method.
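In schematic form (hypothetical notation, not the authors' exact formulation), the stabilized objective of the global-smoothness method combines the gravity misfit with the two proximity constraints:

\[
\min_{\mathbf{p}}\;
\left\|\mathbf{g}^{\mathrm{obs}}-\mathbf{f}(\mathbf{p})\right\|^{2}
+\mu_{1}\sum_{k\in\mathcal{K}}\bigl(p_{k}-p_{k}^{\mathrm{borehole}}\bigr)^{2}
+\mu_{2}\sum_{j}\bigl(p_{j+1}-p_{j}\bigr)^{2},
\]

where p holds the interface depths at discrete points, f(p) is the forward gravity response for the assumed density contrast, K indexes the points constrained by boreholes, and μ1, μ2 weight the two stabilizing functionals; the weighted-smoothness method can be read as replacing the uniform weight μ2 by point-dependent weights that are iteratively driven toward zero across detected faults.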

Relevance: 90.00%

Abstract:

BACKGROUND: Physiological data obtained with the pulmonary artery catheter (PAC) are susceptible to errors in measurement and interpretation. Little attention has been paid to the relevance of errors in hemodynamic measurements performed in the intensive care unit (ICU). The aim of this study was to assess the errors related to the technical aspects (zeroing and reference level) and actual measurement (curve interpretation) of the pulmonary artery occlusion pressure (PAOP). METHODS: Forty-seven participants in a special ICU training program and 22 ICU nurses were tested without pre-announcement. All participants had previously been exposed to the clinical use of the method. The first task was to set up a pressure measurement system for PAC (zeroing and reference level) and the second to measure the PAOP. RESULTS: The median difference from the reference mid-axillary zero level was -3 cm (-8 to +9 cm) for physicians and -1 cm (-5 to +1 cm) for nurses. The median difference from the reference PAOP was 0 mmHg (-3 to 5 mmHg) for physicians and 1 mmHg (-1 to 15 mmHg) for nurses. When PAOP values were adjusted for the differences from the reference transducer level, the median differences from the reference PAOP values were 2 mmHg (-6 to 9 mmHg) for physicians and 2 mmHg (-6 to 16 mmHg) for nurses. CONCLUSIONS: Measurement of the PAOP is susceptible to substantial error as a result of practical mistakes. Comparison of results between ICUs or practitioners is therefore not possible.
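For scale, the hydrostatic conversion behind the transducer-level adjustment (a standard relation, not a result of the study) is 1 mmHg ≈ 1.36 cmH2O, so a vertical offset h of the transducer from the mid-axillary zero level biases the reading by roughly

\[
\Delta P \;\approx\; \frac{h\ [\mathrm{cm}]}{1.36}\ \mathrm{mmHg},
\]

e.g., the median physician offset of 3 cm corresponds to an error of about 2.2 mmHg, already comparable to the differences reported above.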

Relevance: 90.00%

Abstract:

The presented database contains time-referenced sea ice draft values from upward looking sonar (ULS) measurements in the Weddell Sea, Antarctica. The sea ice draft data can be used to infer the thickness of the ice. They were collected during the period 1990-2008. In total, the database includes measurements from 13 locations in the Weddell Sea and was generated from more than 3.7 million measurements of sea ice draft. The files contain uncorrected raw drafts, corrected drafts and the basic parameters measured by the ULS. The measurement principle, the data processing procedure and the quality control are described in detail. To account for the unknown speed of sound in the water column above the ULS, two correction methods were applied to the draft data. The first method is based on defining a reference level from the identification of open water leads. The second method uses a model of sound speed in the oceanic mixed layer and is applied to ice draft in austral winter. Both methods are discussed and their accuracy is estimated. Finally, selected results of the processing are presented.
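A minimal sketch of the first correction method (reference level from open-water leads), with hypothetical variable and function names; the actual processing chain described for the database is considerably more involved:

import numpy as np

def correct_draft_open_water(raw_draft, is_open_water):
    """Shift apparent ULS drafts so that open-water leads define zero draft.

    raw_draft     : apparent drafts (m), positive downward, biased by the
                    unknown speed of sound in the water column
    is_open_water : boolean flags for samples identified as open water,
                    where the true draft is zero by definition
    """
    raw_draft = np.asarray(raw_draft, dtype=float)
    # Mean apparent draft over open-water leads gives the offset of the zero reference level
    zero_offset = np.nanmean(raw_draft[np.asarray(is_open_water)])
    return raw_draft - zero_offset

# Example: open-water samples read ~0.4 m because of a wrong sound-speed assumption
draft = np.array([0.4, 1.8, 0.4, 2.6, 0.5])
leads = np.array([True, False, True, False, True])
print(correct_draft_open_water(draft, leads))  # ~[-0.03, 1.37, -0.03, 2.17, 0.07]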

Relevance: 90.00%

Abstract:

A clear demonstration of topological superconductivity (TS) and Majorana zero modes remains one of the major pending goals in the field of topological materials. One common strategy to generate TS is through the coupling of an s-wave superconductor to a helical half-metallic system. Numerous proposals for the latter have been put forward in the literature, most of them based on semiconductors or topological insulators with strong spin-orbit coupling. Here, we demonstrate an alternative approach for the creation of TS in graphene-superconductor junctions without the need for spin-orbit coupling. Our prediction stems from the helicity of graphene’s zero-Landau-level edge states in the presence of interactions and from the possibility, experimentally demonstrated, of tuning their magnetic properties with in-plane magnetic fields. We show how canted antiferromagnetic ordering in the graphene bulk close to neutrality induces TS along the junction and gives rise to isolated, topologically protected Majorana bound states at either end. We also discuss possible strategies to detect their presence in graphene Josephson junctions through Fraunhofer pattern anomalies and Andreev spectroscopy. The latter, in particular, exhibits strong unambiguous signatures of the presence of the Majorana states in the form of universal zero-bias anomalies. Remarkable progress has recently been reported in the fabrication of the proposed type of junctions, which offers a promising outlook for Majorana physics in graphene systems.

Relevance: 80.00%

Abstract:

In this manuscript, we propose a criterion for a weakly bound complex formed in a supersonic beam to be characterized as a 'hydrogen bonded complex'. For a 'hydrogen bonded complex', the zero point energy along any large amplitude vibrational coordinate that destroys the orientational preference for the hydrogen bond should be significantly below the barrier along that coordinate, so that there is at least one bound level. These are vibrational modes that do not lead to the breakdown of the complex as a whole. If the zero point level is higher than the barrier, the 'hydrogen bond' would not be able to stabilize the orientation which favors it, and it is no longer sensible to characterize the complex as hydrogen bonded. Four complexes, Ar2-H2O, Ar2-H2S, C2H4-H2O and C2H4-H2S, were chosen for investigation. Zero point energies and barriers for large amplitude motions were calculated at a reasonable level of theory, MP2(full)/aug-cc-pVTZ, for all these complexes. Atoms in molecules (AIM) theoretical analyses of these complexes were carried out as well. All these complexes would be considered hydrogen bonded according to the AIM theoretical criteria suggested by Koch and Popelier for C-H···O hydrogen bonds (U. Koch and P. L. A. Popelier, J. Phys. Chem., 1995, 99, 9747), which have been widely and, at times, incorrectly used for all types of contacts involving H. It is shown that, according to the criterion proposed here, the Ar2-H2O/H2S complexes are not hydrogen bonded even at zero kelvin, whereas the C2H4-H2O/H2S complexes are. This analysis can naturally be extended to all temperatures. It can explain the recent experimental observations on crystal structures of H2S at various conditions and the crossed beam scattering studies on rare gases with H2O and H2S.
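Restated compactly (in notation of our own choosing, not the paper's), the proposed criterion is that, along every large-amplitude coordinate q that destroys the orientational preference,

\[
E_{\mathrm{ZPE}}(q) \;<\; V_{\mathrm{barrier}}(q),
\]

i.e., at least one vibrational level is bound below the barrier; if instead E_ZPE(q) ≥ V_barrier(q), the complex samples orientations without the hydrogen bond even at 0 K and should not be called hydrogen bonded.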

Relevance: 80.00%

Abstract:

Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices even at typical dilution refrigerator temperatures of ~ 10 mK.

Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates under the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones establish and couple the mechanics to a squeezed reservoir, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory presented in Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise, and without assumptions of perfect alignment of the pump frequencies.

In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action-evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 ± 0.06 times the quantum zero-point motion.

In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 ± 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.
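In the usual dimensionless convention (assumed here for illustration, not quoted from the dissertation), with quadratures X1 = (a + a†)/√2 and X2 = (a − a†)/(i√2), the ground state has ⟨ΔX1²⟩ = ⟨ΔX2²⟩ = 1/2, so quantum squeezing of the mechanics means

\[
\langle \Delta X_1^{2}\rangle \;<\; \tfrac{1}{2},
\qquad
\langle \Delta X_1^{2}\rangle\,\langle \Delta X_2^{2}\rangle \;\ge\; \tfrac{1}{4}.
\]

On this convention, the reported variance of 0.80 times the zero-point level corresponds to ⟨ΔX1²⟩ ≈ 0.40, while the uncertainty product forces the orthogonal quadrature above 1/2.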

Relevance: 80.00%

Abstract:

[ES] This document presents the Final Degree Project carried out with the objective of studying a suitable methodology for measuring the electromagnetic (EM) exposure of LTE (Long Term Evolution) systems. EM exposure is an important issue for the population, which is aware of the potential effects that such radiation may cause. To achieve the objectives of this work, the state of the art of the system was analyzed in order to characterize the behavior of the LTE signal and thus study the relevant parameters in the configuration of the measurement equipment. A methodology was defined that adequately distinguishes periods of activity from periods of no activity, and the associated time slots were observed. In addition, using the defined methodology, exposure measurements were made at the Escuela Técnica Superior de Ingeniería de Bilbao, comparing the data obtained with the permitted reference level in order to verify compliance with the current regulations.
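A minimal sketch of the final compliance check against the ICNIRP (1998) general-public E-field reference levels; the function names are ours and the procedure is illustrative, not the project's actual measurement chain:

def icnirp_public_e_ref(freq_mhz: float) -> float:
    """ICNIRP (1998) general-public E-field reference level (V/m).

    Only the frequency ranges relevant to typical LTE carriers are handled.
    """
    if 400.0 <= freq_mhz <= 2000.0:
        return 1.375 * freq_mhz ** 0.5
    if 2000.0 < freq_mhz <= 300000.0:
        return 61.0
    raise ValueError("frequency outside the bands covered by this sketch")


def is_compliant(measured_e_vpm: float, freq_mhz: float) -> bool:
    """True if the measured field stays below the applicable reference level."""
    return measured_e_vpm < icnirp_public_e_ref(freq_mhz)


# Example: an LTE band 3 downlink carrier near 1800 MHz
print(icnirp_public_e_ref(1800.0))  # ~58.3 V/m
print(is_compliant(2.4, 1800.0))    # True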

Relevance: 80.00%

Abstract:

The water circulation of the Egyptian Mediterranean waters was computed for the winter and summer seasons using the dynamic method. The reference level was set at the 1000 db surface. The results showed that the surface circulation is dominated by the Atlantic water inflow along the North African coast and by two major gyres, the Mersa Matruth anticyclonic gyre and the El-Arish cyclonic gyre. The results showed a seasonal reversal of the El-Arish gyre, which is cyclonic in winter and anticyclonic in summer. The El-Arish gyre had not been measured previously. The geostrophic current velocity at the edges of the Mersa Matruth gyre varied between 12.5 and 29.1 cm/sec in winter and between 6.5 and 13.1 cm/sec in summer. The current velocity reached its maximum values (>40 cm/sec) at the El-Arish gyre. The current velocity at the two gyres decreased with increasing depth. The North African Current affects the surface waters down to a depth of 100 m, and its mean velocity varies between 6 and 38 cm/sec.
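For reference, the dynamic method used above reduces to the standard geostrophic relation between station pairs (sign convention aside; this is the textbook formula, not a result of the study):

\[
v \;=\; \frac{\Delta\Phi_{B}-\Delta\Phi_{A}}{f\,L},
\qquad
\Delta\Phi(p) \;=\; \int_{p}^{p_{\mathrm{ref}}} \delta\;\mathrm{d}p',
\]

where ΔΦ is the geopotential (dynamic height) anomaly above the reference pressure p_ref (here the 1000 db surface), δ the specific volume anomaly, f the Coriolis parameter and L the distance between stations A and B; the resulting velocities are relative to an assumed level of no motion at p_ref.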

Relevance: 80.00%

Abstract:

In order to guarantee a sustainable supply of future energy demand without compromising the environment, some actions for a substantial reduction of CO2 emissions are nowadays being analysed in depth. One of them is the improvement of nuclear energy use. In this framework, innovative gas-cooled reactors (both thermal and fast) seem very attractive from the electricity production point of view and for potential industrial use in high-temperature processes (e.g., H2 production by steam reforming or the I-S process). This work focuses on a preliminary (and conservative) evaluation of the possible advantages that a symbiotic cycle (EPR-PBMR-GCFR) could entail, with special regard to the reduction of the HLW inventory and the optimization of the exploitation of fuel resources. The comparison between the chosen symbiotic cycle and the reference one (once-through scenario, i.e., EPR SNF directly disposed of) shows a reduction of the time needed to reach a fixed reference level from ∼170000 years to ∼1550 years (comparable with typical human time scales and for this reason more acceptable to public opinion). In addition, this cycle enables a more efficient use of the resources involved: the total electric energy produced becomes ∼630 TWh/year (instead of only ∼530 TWh/year using the EPR alone) without consuming additional raw materials. © 2009 Barbara Vezzoni et al.
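For scale (simple arithmetic on the figures quoted above), the cycle corresponds to roughly a hundredfold shortening of the time needed to reach the reference level and to about 19% more electricity from the same raw material:

\[
\frac{1.7\times10^{5}\ \mathrm{yr}}{1.55\times10^{3}\ \mathrm{yr}} \approx 110,
\qquad
\frac{630\ \mathrm{TWh/yr}}{530\ \mathrm{TWh/yr}} \approx 1.19.
\]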

Relevance: 80.00%

Abstract:

Using an all-electron band structure approach, we have systematically calculated the natural band offsets between all group IV, III-V, and II-VI semiconductor compounds, taking into account the deformation potential of the core states. This revised approach removes assumptions regarding the reference level volume deformation and offers a more reliable prediction of the "natural" unstrained offsets. Comparison is made to experimental work, where a noticeable improvement is found compared to previous methodologies.

Relevance: 80.00%

Abstract:

In this study we describe the velocity structure and transport of the North Equatorial Current (NEC), the Kuroshio, and the Mindanao Current (MC) using repeated hydrographic sections near the Philippine coast. A most striking feature of the current system in the region is the undercurrent structure below the surface flow. Both the Luzon Undercurrent and the Mindanao Undercurrent appear to be permanent phenomena. The present data set also provides an estimate of the mean circulation diagram (relative to 1500 dbar), which involves a NEC transport of 41 Sverdrups (Sv), a Kuroshio transport of 14 Sv, and a MC transport of 27 Sv, implying a mass balance better than 1 Sv within the region enclosed by the stations. The circulation diagram is insensitive to vertical displacements of the reference level within the depth range between 1500 and 2500 dbar. Transport fluctuations are, in general, consistent with earlier observations; that is, the NEC and the Kuroshio vary in phase, with a seasonal signal superimposed on interannual variations, and the transport of the MC is dominated by a quasi-biennial oscillation. Dynamic height distributions are also examined to explore the dynamics of the current system.
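The quoted closure can be checked directly from the stated transports (1 Sv = 10^6 m^3 s^-1): the inflowing NEC transport matches the sum of the two outflowing western boundary currents to within the stated 1 Sv,

\[
41\ \mathrm{Sv}\ (\text{NEC}) \;\approx\; 14\ \mathrm{Sv}\ (\text{Kuroshio}) \;+\; 27\ \mathrm{Sv}\ (\text{MC}) \;=\; 41\ \mathrm{Sv}.
\]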