974 results for time-varying channel


Relevance:

80.00%

Publisher:

Abstract:

In the context of breast tissue characterization, one may ask what monitoring a quantitative ultrasound (QUS) attribute of a scattering medium (such as soft biological tissue) during shear wave propagation adds to its discriminative power. This work presents a study of the time-varying behavior of three statistical parameters (the mean intensity, the structure parameter, and the scatterer clustering parameter) of a general model for the echo envelope of the backscattered ultrasound wave (namely, the homodyned K-distribution) under shear wave propagation. Transient shear waves were generated using the supersonic shear imaging (SSI) method in three macroscopically homogeneous in vitro breast-mimicking phantoms with different mechanical properties, and in two heterogeneous ex vivo phantoms with mouse tumors embedded in an agar-gelatin surrounding medium. A comparison of the range of the three homodyned K-distribution parameters with and without shear wave propagation showed that the parameters were significantly (p < 0.001) affected by shear wave propagation in both the in vitro and ex vivo experiments. The results also showed that the dynamic range of the statistical parameters during shear wave propagation can help discriminate (with p < 0.001) the three homogeneous in vitro phantoms from one another, as well as the mouse tumors from their surrounding medium in the heterogeneous ex vivo phantoms. In addition, a linear regression model was applied to correlate the range of the mean intensity under shear wave propagation with the maximum displacement amplitude of the ultrasound speckle. The resulting linear regressions were significant: in vitro phantoms: R² = 0.98, p < 0.001; ex vivo tumors: R² = 0.56, p = 0.013; ex vivo surrounding medium: R² = 0.59, p = 0.009. By contrast, the linear regression between the mean intensity without shear wave propagation and the mechanical properties of the medium was less significant: in vitro phantoms: R² = 0.07, p = 0.328; ex vivo tumors: R² = 0.55, p = 0.022; ex vivo surrounding medium: R² = 0.45, p = 0.047. This new approach can provide information complementary to statistical quantitative ultrasound as traditionally performed in a static setting (i.e., without shear wave propagation), for example in the context of ultrasound imaging for breast cancer classification.
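As a sketch of the kind of regression analysis reported above (correlating the range of the mean intensity during shear-wave propagation with the peak speckle displacement), the snippet below fits an ordinary least-squares line and reports R² and the p-value. The data values are purely illustrative and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of the regression analysis described above:
# correlate the range of the mean backscatter intensity during shear-wave
# propagation with the peak ultrasound speckle displacement.
# These numbers are made up for the sketch; they are not the study's data.
peak_displacement = np.array([10.0, 18.0, 25.0, 33.0, 41.0, 52.0])  # a.u.
intensity_range = np.array([1.1, 1.9, 2.4, 3.5, 4.2, 5.1])          # a.u.

fit = stats.linregress(peak_displacement, intensity_range)
r_squared = fit.rvalue ** 2
print(f"R^2 = {r_squared:.2f}, p = {fit.pvalue:.4f}")
```

The same `scipy.stats.linregress` call yields the slope, intercept, and standard error as well, which is all the machinery needed to reproduce a comparison like the one between the dynamic and static regressions above.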

Relevance:

80.00%

Publisher:

Abstract:

This study describes the interaction of a two-level atom with a squeezed field whose frequency varies in time. By applying a sinusoidal variation to the field frequency, the randomness in the population inversion is reduced and the collapses and periodic revivals are regained. Quantum optics is an emerging field of physics that deals mainly with the interaction of atoms with quantized electromagnetic fields. The Jaynes-Cummings model (JCM) is a key model in this area, describing the interaction between a two-level atom and a single-mode radiation field. The study begins with a brief history of light, atoms, and their interactions, and then discusses the interaction between atoms and electromagnetic fields. It suggests a method to manipulate the population inversion produced by the interaction, and to control the randomness in it, by imposing a time dependence on the frequency of the interacting squeezed field. The change in the behaviour of the population inversion due to the presence of a phase factor in the applied frequency variation is explained. The study also describes the interaction between a two-level atom and an electromagnetic field in a nonlinear Kerr medium, and deals with atomic and field state evolution in a coupled cavity system. Our results suggest a new method to control and manipulate the population of states in two-level atom-radiation interaction, which is essential for quantum information processing. We have also studied the variation of the atomic population inversion with time when a two-level atom interacts with a light field whose frequency varies sinusoidally with a constant phase. In both the coherent-field and squeezed-field cases, the population inversion behaves completely differently from the zero-phase frequency-modulation case. It is observed that in the presence of a non-zero phase φ, the population inversion oscillates sinusoidally, and the collapses and revivals gradually disappear as φ increases from 0 to π/2. When φ = π/2, the evolution of the population inversion is identical to the case of a two-level atom interacting with a Fock state. Thus, by applying a phase-shifted frequency modulation, one can induce in a linear medium the sinusoidal oscillations of atomic inversion normally observed in a Kerr medium. We noticed that the entanglement between the atom and the field can be controlled by varying the period of the field frequency fluctuations. The system has been solved numerically and its behaviour analysed for different initial conditions and susceptibility values. It is observed that for weak cavity coupling the effect of susceptibility is minimal, whereas for strong cavity coupling the susceptibility modifies the way the probability oscillates with time. The effect of susceptibility on the probability of states is closely related to the initial state of the system.
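The collapse-and-revival behaviour discussed above can be reproduced in the standard Jaynes-Cummings model with a coherent field. The minimal numerical sketch below uses illustrative values for the coupling g and the mean photon number (not the thesis parameters) and applies no frequency modulation, so it shows only the unmodulated baseline.

```python
import numpy as np
from scipy.special import gammaln

# Minimal sketch of collapse and revival in the standard Jaynes-Cummings
# model: an initially excited two-level atom coupled to a coherent field.
# g and mean_n are illustrative values; no frequency modulation is applied.
g = 1.0            # atom-field coupling
mean_n = 20.0      # mean photon number |alpha|^2 of the coherent state
n = np.arange(0, 100)

# Poisson photon-number distribution of the coherent state (log-safe form)
p_n = np.exp(-mean_n + n * np.log(mean_n) - gammaln(n + 1))

t = np.linspace(0.0, 50.0, 2000)
# Population inversion W(t) = sum_n p_n cos(2 g t sqrt(n + 1))
W = (p_n[:, None] * np.cos(2.0 * g * t[None, :] * np.sqrt(n + 1.0)[:, None])).sum(axis=0)

collapse = np.max(np.abs(W[(t > 4) & (t < 10)]))   # quiet interval after collapse
revival = np.max(np.abs(W[(t > 24) & (t < 32)]))   # near t ~ 2*pi*sqrt(mean_n)/g
print(f"collapse max |W| = {collapse:.3f}, revival max |W| = {revival:.3f}")
```

A sinusoidally varied field frequency, as studied in the thesis, would add a time-dependent phase to each cosine term; the sketch above is only the baseline against which that effect is compared.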

Relevance:

80.00%

Publisher:

Abstract:

The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data demands automated methods as well as human experts. This thesis is devoted to the analysis of astronomical time series data of variable stars and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can in turn be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star, and to classify it, is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modelling and classification. Modelling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of the basic parameters period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is the application of mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to the daily variation of daylight and to weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, Gaussian or otherwise). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum method (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can arise for several reasons, such as power leakage to other frequencies, which is caused by the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, due to the influence of regular sampling. Spurious periods also appear because of long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton of the AAVSO states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
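As one concrete example of the non-parametric methods surveyed above, Phase Dispersion Minimisation (Stellingwerf 1978) folds the light curve on trial periods and picks the period that minimises the within-bin variance of the phased data. The sketch below runs the idea on a synthetic, unevenly sampled light curve; all numbers are simulated, not survey data.

```python
import numpy as np

# Phase Dispersion Minimisation (PDM) sketch on a synthetic light curve.
# Uneven sampling mimics real astronomical time series.
rng = np.random.default_rng(0)
true_period = 2.5
t = np.sort(rng.uniform(0, 100, 400))                       # uneven time instants
mag = np.sin(2 * np.pi * t / true_period) + 0.1 * rng.normal(size=t.size)

def pdm_theta(t, y, period, n_bins=10):
    """Ratio of within-bin variance of the phased curve to total variance."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    within = 0.0
    for b in range(n_bins):
        sel = y[bins == b]
        if sel.size > 1:
            within += sel.size * sel.var()
    return within / (y.size * y.var())

trial_periods = np.linspace(2.0, 3.0, 501)
theta = np.array([pdm_theta(t, mag, p) for p in trial_periods])
best = trial_periods[np.argmin(theta)]
print(f"best period = {best:.3f}")
```

A true period produces a tight phased curve and hence a small theta; wrong trial periods scatter the points across all phases and drive theta toward 1, which is why the minimum of the theta curve marks the period.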

Relevance:

80.00%

Publisher:

Abstract:

This paper considers a connection between the deterministic and noisy behavior of nonlinear networks. Specifically, a particular bridge circuit is examined which has two possibly nonlinear energy storage elements. By proper choice of the constitutive relations for the network elements, the deterministic terminal behavior reduces to that of a single linear resistor. This reduction of the deterministic terminal behavior, in which a natural frequency of a linear circuit does not appear in the driving-point impedance, has been shown in classical circuit theory books (e.g. [1, 2]). The paper shows that, in addition to the reduction of the deterministic behavior, the thermal noise at the terminals of the network, arising from the usual Nyquist-Johnson noise model associated with each resistor in the network, is also exactly that of a single linear resistor. While this result for the linear time-invariant (LTI) case is a direct consequence of a well-known result for RLC circuits, the nonlinear result is novel. We show that the terminal noise current is precisely that predicted by the Nyquist-Johnson model for R if the driving voltage is zero or constant, but not if the driving voltage is time-dependent or the inductor and capacitor are time-varying.
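For reference, the Nyquist-Johnson model invoked above assigns a resistor R at temperature T a one-sided thermal-noise voltage spectral density of 4kTR, so the RMS noise voltage over a bandwidth B is sqrt(4kTRB). A small numerical sketch (example values chosen for illustration):

```python
import numpy as np

# Nyquist-Johnson thermal noise of a single linear resistor:
# one-sided voltage spectral density S_V = 4 k T R, so over a
# bandwidth B the RMS noise voltage is sqrt(4 k T R B).
k_B = 1.380649e-23      # Boltzmann constant, J/K

def thermal_noise_vrms(R_ohm, T_kelvin, bandwidth_hz):
    return np.sqrt(4.0 * k_B * T_kelvin * R_ohm * bandwidth_hz)

# Example: a 1 kOhm resistor at 300 K over a 10 kHz bandwidth
v = thermal_noise_vrms(1e3, 300.0, 1e4)
print(f"V_rms = {v * 1e9:.1f} nV")
```

The paper's point is that the whole bridge network, despite containing nonlinear storage elements, presents exactly this single-resistor noise at its terminals under the stated driving conditions.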

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the H∞ synchronization problem for the master-slave structure of second-order neutral systems with time-varying delays. Delay-dependent sufficient conditions for the design of a delayed output-feedback control are given by the Lyapunov-Krasovskii method in terms of a linear matrix inequality (LMI). A controller that guarantees H∞ synchronization of the master and slave structures using some free weighting matrices is then developed. A numerical example is given to show the effectiveness of the method.

Relevance:

80.00%

Publisher:

Abstract:

The valuation of a company as a dynamic system is quite complex; the different valuation models or methods are theoretical approximations and therefore simplifications of reality. These models rely on statistical assumptions or premises that allow this simplification, examples of which are investor behaviour and market efficiency. In an emerging market, this process poses challenges for any valuation method, since the market does not obey the traditional paradigms. Valuation is therefore even more complex, as investors face greater risks and obstacles. Likewise, as economies globalize and capital becomes more mobile, valuation will become even more important in this context. This thesis aims to compile and analyse the different valuation methods, and to identify and apply those recognized as "good practices". This process was carried out for one of the most important companies in Colombia, fundamentally considering the emerging market context, and specifically the oil sector, as criteria for the application of the traditional DCF and the practical R&V methods.
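The traditional DCF mentioned above discounts projected free cash flows at a discount rate (typically the WACC) and adds a terminal value, often from the Gordon growth formula. The sketch below illustrates the mechanics with made-up figures; none of the numbers refer to the case company.

```python
import numpy as np

# Minimal discounted-cash-flow (DCF) valuation sketch.
# All figures are illustrative, not data from the case study.
fcf = np.array([100.0, 110.0, 120.0, 130.0, 140.0])   # projected free cash flows
wacc, g = 0.10, 0.03                                  # discount rate, perpetual growth

years = np.arange(1, fcf.size + 1)
pv_fcf = np.sum(fcf / (1 + wacc) ** years)            # PV of explicit forecast
terminal = fcf[-1] * (1 + g) / (wacc - g)             # Gordon growth terminal value
pv_terminal = terminal / (1 + wacc) ** fcf.size
enterprise_value = pv_fcf + pv_terminal
print(f"enterprise value = {enterprise_value:.1f}")
```

In an emerging-market setting, the abstract's point is that the inputs here (the discount rate in particular) are exactly where the traditional paradigm becomes hard to justify, which motivates comparing DCF against alternative practical methods.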

Relevance:

80.00%

Publisher:

Abstract:

We present the sensitivity analysis of a model of brand perception and marketing investment adjustment developed at the Simulation Laboratory of the Universidad del Rosario. This thesis consists of an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then demonstrated on a simple application of the model, through an exhaustive and rigorous application of the steps described in the first part. We then discuss the problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in practice, and finally draw conclusions from the results of the analyses.

Relevance:

80.00%

Publisher:

Abstract:

Estimating and interpreting the term structure of interest rates is highly relevant because it enables forecasting, is fundamental for monetary and fiscal policy decisions, is essential in risk management, and is an input for the valuation of different financial assets. For these reasons, it is necessary to understand what can cause a movement in the term structure. In this work, a three-factor exponential-affine model is estimated for the yields of Colombian peso-denominated public debt securities. The estimated factors are the short rate, the long-run mean of the short rate, and the volatility of the short rate. The estimation covers the period from January 2010 to May 2015, and a correlation analysis among the three factors is performed. The estimated factors are then used in a regression to identify the importance of each one in the behaviour of Colombian public debt yields for different maturities. Finally, the term structure of interest rates for Colombia is estimated, and the relationship is identified between the estimated factors and those found by Litterman and Scheinkman [1991], corresponding to level, slope and curvature.
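The second stage of the analysis above, regressing yields of a given maturity on the three estimated factors, can be sketched as an ordinary least-squares regression. All factor and yield series below are synthetic and the loadings are invented for illustration; they are not the paper's estimates.

```python
import numpy as np

# Sketch: regress a yield on three term-structure factors
# (short rate, its long-run mean, its volatility). Synthetic data only.
rng = np.random.default_rng(2)
n = 200
short_rate = 0.04 + 0.010 * rng.standard_normal(n)
long_run_mean = 0.05 + 0.005 * rng.standard_normal(n)
volatility = 0.02 + 0.002 * rng.standard_normal(n)

# Hypothetical yield built from the factors plus a little noise
yield_5y = 0.01 + 0.6 * short_rate + 0.3 * long_run_mean \
           + 0.5 * volatility + 0.0005 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), short_rate, long_run_mean, volatility])
beta, *_ = np.linalg.lstsq(X, yield_5y, rcond=None)
print("estimated loadings:", np.round(beta[1:], 2))
```

Repeating this regression across maturities gives the pattern of factor loadings that the paper compares with the level, slope, and curvature factors of Litterman and Scheinkman [1991].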

Relevance:

80.00%

Publisher:

Abstract:

The clustering in time (seriality) of extratropical cyclones is responsible for large cumulative insured losses in western Europe, though surprisingly little scientific attention has been given to this important property. This study investigates and quantifies the seriality of extratropical cyclones in the Northern Hemisphere using a point-process approach. A possible mechanism for serial clustering is the time-varying effect of the large-scale flow on individual cyclone tracks. Another mechanism is the generation by one parent cyclone of one or more offspring through secondary cyclogenesis. A long cyclone-track database was constructed for extended October-March winters from 1950 to 2003 using 6-h analyses of 850-mb relative vorticity derived from the NCEP-NCAR reanalysis. A dispersion statistic based on the variance-to-mean ratio of monthly cyclone counts was used as a measure of clustering. It reveals extensive regions of statistically significant clustering in the European exit region of the North Atlantic storm track and over the central North Pacific. Monthly cyclone counts were regressed on time-varying teleconnection indices with a log-linear Poisson model. Five independent teleconnection patterns were found to be significant factors over Europe: the North Atlantic Oscillation (NAO), the east Atlantic pattern, the Scandinavian pattern, the east Atlantic/western Russia pattern, and the polar/Eurasia pattern. The NAO alone is not sufficient for explaining the variability of cyclone counts in the North Atlantic region and western Europe. Rate dependence on time-varying teleconnection indices accounts for the variability in monthly cyclone counts, and a cluster process did not need to be invoked.
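The dispersion statistic above is easy to illustrate: for a homogeneous Poisson process the variance-to-mean ratio of counts is about 1, while a rate that itself varies in time (a mixed Poisson process, mimicking modulation of the cyclone rate by the large-scale flow) pushes the ratio above 1. The sketch below uses synthetic counts, not the cyclone database.

```python
import numpy as np

# Variance-to-mean dispersion statistic on synthetic monthly counts.
rng = np.random.default_rng(1)

poisson_counts = rng.poisson(lam=6.0, size=5000)       # constant-rate process
# Clustered counts: the rate itself varies (gamma-mixed Poisson, mean rate 6),
# a simple stand-in for rate modulation by the large-scale flow.
rates = rng.gamma(shape=3.0, scale=2.0, size=5000)
clustered_counts = rng.poisson(lam=rates)

def dispersion(counts):
    return counts.var() / counts.mean()

print(f"constant rate: {dispersion(poisson_counts):.2f}")
print(f"varying rate:  {dispersion(clustered_counts):.2f}")
```

This mirrors the study's conclusion: once the rate is allowed to depend on time-varying indices, the apparent overdispersion of the counts is explained without invoking a separate cluster process.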

Relevance:

80.00%

Publisher:

Abstract:

The response of a uniform horizontal temperature gradient to prescribed fixed heating is calculated in the context of an extended version of surface quasigeostrophic dynamics. It is found that for zero mean surface flow and weak cross-gradient structure the prescribed heating induces a mean temperature anomaly proportional to the spatial Hilbert transform of the heating. The interior potential vorticity generated by the heating enhances this surface response. The time-varying part is independent of the heating and satisfies the usual linearized surface quasigeostrophic dynamics. It is shown that the surface temperature tendency is a spatial Hilbert transform of the temperature anomaly itself. It then follows that the temperature anomaly is periodically modulated with a frequency proportional to the vertical wind shear. A strong local bound on wave energy is also found. Reanalysis diagnostics are presented that indicate consistency with key findings from this theory.
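The spatial Hilbert transform that appears twice in the theory above (once relating the mean temperature anomaly to the heating, once relating the temperature tendency to the anomaly itself) has a simple signature: it maps cos(kx) to sin(kx), i.e., it shifts each spatial harmonic by a quarter wavelength. The sketch below verifies this on an illustrative periodic grid unrelated to the reanalysis data.

```python
import numpy as np
from scipy.signal import hilbert

# The spatial Hilbert transform shifts each harmonic by 90 degrees:
# H[cos(kx)] = sin(kx). scipy.signal.hilbert returns the analytic signal
# x + i*H[x], so the transform is its imaginary part.
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
theta = np.cos(3 * x)                      # temperature anomaly, wavenumber 3
theta_hilbert = np.imag(hilbert(theta))    # spatial Hilbert transform

err = np.max(np.abs(theta_hilbert - np.sin(3 * x)))
print(f"max |H[cos] - sin| = {err:.2e}")
```

Because the tendency is the Hilbert transform of the anomaly, each harmonic is driven by its own quarter-wavelength-shifted copy, which is what produces the periodic modulation at a frequency proportional to the vertical wind shear.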

Relevance:

80.00%

Publisher:

Abstract:

A reconstruction of the Atlantic Meridional Overturning Circulation (MOC) for the period 1959–2006 has been derived from the ECMWF operational ocean reanalysis. The reconstruction shows a wide range of time variability, including a downward trend. At 26°N, both the MOC intensity and changes in its vertical structure are in good agreement with previous estimates based on trans-Atlantic surveys. At 50°N, the MOC and the strength of the subpolar gyre are correlated at interannual time scales, but show opposite secular trends. Heat transport variability is highly correlated with the MOC but shows a smaller trend due to the warming of the upper ocean, which partially compensates for the weakening of the circulation. Results from sensitivity experiments show that although the time-varying upper boundary forcing provides useful MOC information, the sequential assimilation of ocean data further improves the MOC estimation by increasing both the mean and the time variability.

Relevance:

80.00%

Publisher:

Abstract:

We consider the imposition of Dirichlet boundary conditions in the finite element modelling of moving boundary problems in one and two dimensions for which the total mass is prescribed. A modification of the standard linear finite element test space allows the boundary conditions to be imposed strongly whilst simultaneously conserving a discrete mass. The validity of the technique is assessed for a specific moving mesh finite element method, although the approach is more general. Numerical comparisons are carried out for mass-conserving solutions of the porous medium equation with Dirichlet boundary conditions and for a moving boundary problem with a source term and time-varying mass.

Relevance:

80.00%

Publisher:

Abstract:

We investigated diurnal nitrate (NO3-) concentration variability in the San Joaquin River using an in situ optical NO3- sensor and discrete sampling during a 5-day summer period characterized by high algal productivity. Dual NO3- isotopes (δ15N-NO3 and δ18O-NO3) and dissolved oxygen isotopes (δ18O-DO) were measured over 2 days to assess NO3- sources and biogeochemical controls over diurnal time-scales. Concerted temporal patterns of dissolved oxygen (DO) concentrations and δ18O-DO were consistent with photosynthesis, respiration and atmospheric O2 exchange, providing evidence of diurnal biological processes independent of river discharge. Surface water NO3- concentrations varied by up to 22% over a single diurnal cycle and up to 31% over the 5-day study, but did not reveal concerted diurnal patterns at a frequency comparable to DO concentrations. The decoupling of the δ15N-NO3 and δ18O-NO3 isotopes suggests that algal assimilation and denitrification are not major processes controlling diurnal NO3- variability in the San Joaquin River during the study. The lack of a clear explanation for NO3- variability likely reflects a combination of riverine biological processes and time-varying physical transport of NO3- from upstream agricultural drains to the mainstem San Joaquin River. The application of an in situ optical NO3- sensor along with discrete samples provides a view into the fine temporal structure of hydrochemical data and may allow for greater accuracy in pollution assessment.

Relevance:

80.00%

Publisher:

Abstract:

In most climate simulations used by the Intergovernmental Panel on Climate Change Fourth Assessment Report (2007), stratospheric processes are only poorly represented. For example, climatological or simple specifications of time-varying ozone concentrations are imposed and the quasi-biennial oscillation (QBO) of equatorial stratospheric zonal wind is absent. Here we investigate the impact of an improved stratospheric representation using two sets of perturbed simulations with the Hadley Centre coupled ocean-atmosphere model HadGEM1 with natural and anthropogenic forcings for the 1979–2003 period. In the first set of simulations, the usual zonal mean ozone climatology with superimposed trends is replaced with a time series of observed zonal mean ozone distributions that includes interannual variability associated with the solar cycle, QBO and volcanic eruptions. In addition to this, the second set of perturbed simulations includes a scheme in which the stratospheric zonal wind in the tropics is relaxed to appropriate zonal mean values obtained from the ERA-40 reanalysis, thus forcing a QBO. Both of these changes are applied strictly to the stratosphere only. The improved ozone field results in an improved simulation of the stepwise temperature transitions observed in the lower stratosphere in the aftermath of the two major recent volcanic eruptions. The contribution of the solar cycle signal in the ozone field to this improved representation of the stepwise cooling is discussed. The improved ozone field and also the QBO result in an improved simulation of observed trends, both globally and at tropical latitudes. The Eulerian upwelling in the lower stratosphere in the equatorial region is enhanced by the improved ozone field and is affected by the QBO relaxation, yet neither induces a significant change in the upwelling trend.

Relevance:

80.00%

Publisher:

Abstract:

The near-Earth heliospheric magnetic field intensity, |B|, exhibits a strong solar cycle variation, but returns to the same "floor" value each solar minimum. The current minimum, however, has seen |B| drop below previous minima, bringing into question the existence of a floor, or at the very least requiring a reassessment of its value. In this study we assume heliospheric flux consists of a constant open flux component and a time-varying contribution from CMEs. In this scenario, the true floor is |B| with zero CME contribution. Using observed CME rates over the solar cycle, we estimate the "no-CME" |B| floor at ~4.0 +/- 0.3 nT, lower than previous floor estimates and below the |B| observed this solar minimum. We speculate that the drop in |B| observed this minimum may be due to a persistently lower CME rate than during the previous minimum, though there are large uncertainties in the supporting observational data.
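The floor estimate above amounts to modelling |B| as a constant open-flux component plus a term proportional to the CME rate and extrapolating to a zero-CME rate. The sketch below shows the extrapolation with invented rate and field values chosen only to illustrate the fit; they are not the observational data of the study.

```python
import numpy as np

# Sketch of the "no-CME" floor estimate: fit |B| = B_floor + k * (CME rate)
# and read off the intercept. All values below are hypothetical.
cme_rate = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # CMEs per day (illustrative)
B = np.array([4.6, 5.1, 6.2, 7.3, 8.3])          # |B| in nT (illustrative)

k, b_floor = np.polyfit(cme_rate, B, 1)          # slope, intercept
print(f"estimated no-CME floor = {b_floor:.1f} nT")
```

The intercept plays the role of the true floor; the study's point is that fitting against observed CME rates pulls this intercept below earlier floor estimates that implicitly included a residual CME contribution.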