934 results for Photon propagation
Abstract:
We study the equations modelling two-photon propagation (TPP). The one-phase periodic solutions are obtained in effective form, and their modulation is investigated by means of the Whitham method. The theory developed is applied to the problem of the creation of TPP solitons on the sharp front of a long pulse.
Abstract:
Photon propagation is non-dispersive within the context of semiclassical general relativity. What about the other massless particles? It can be shown that at tree level the scattering of massless particles of spin 0, 1/2, or 1 by a static gravitational field generated by a localized source such as the Sun, treated as an external field, is non-dispersive as well. Remarkably, however, massive particles, regardless of whether they have integral or half-integral spin, experience an energy-dependent gravitational deflection. Therefore, semiclassical general relativity and gravitational rainbows of massive particles can coexist without conflict. We address this issue in this essay.
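To make the contrast concrete, here is a hedged sketch of the standard first-order result (not quoted from the essay itself): in general relativity, a test particle with asymptotic speed v (units with c = 1) passing a mass M at impact parameter b is deflected by

$$\theta \simeq \frac{2GM}{b}\left(1 + \frac{1}{v^{2}}\right),$$

which reduces to the universal 4GM/b for photons (v = 1) but depends on v, and hence on energy, for massive particles.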
Abstract:
It is shown that, unlike Einstein's gravity, quadratic gravity produces dispersive photon propagation. The energy-dependent contribution to the deflection of photons passing by the Sun is computed, and the angle over which the visible spectrum would be spread is subsequently plotted as a function of the R_{μν}²-sector mass.
Abstract:
The scattering of photons by a static gravitational field, treated as an external field, is discussed in the context of gravity with higher derivatives. It is shown that the R² sector of the theory does not contribute to the photon scattering, whereas the R_{μν}² sector produces dispersive (energy-dependent) photon propagation.
Abstract:
We investigate the causal structure of general nonlinear electrodynamics and determine which Lagrangians generate an effective metric conformal to Minkowski. We also prove that there is only one analytic nonlinear electrodynamics that does not exhibit birefringence.
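For context, a hedged sketch of the standard construction (conventions vary by author; this is not quoted from the paper): for a Lagrangian L(F) with F = F_{μν}F^{μν}, photon wave fronts propagate along null geodesics of the effective metric

$$g_{\mathrm{eff}}^{\mu\nu} = L_{F}\,\eta^{\mu\nu} - 4\,L_{FF}\,F^{\mu}{}_{\alpha}F^{\alpha\nu},$$

where subscripts on L denote derivatives with respect to F. The effective metric is conformal to Minkowski precisely when the second term vanishes or is itself proportional to η^{μν}, which is the kind of condition the Lagrangian classification above addresses.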
Abstract:
The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were redetermined; for this purpose, the energy losses of charged particles in South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computational speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules exhibiting unsuitable behavior takes place in real time; however, the behavior of the modules must be judged from buffered data for this purpose, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals over a period of several minutes for later evaluation. Since the noise data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be chosen freely in units of 500 ms. In the course of this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity to signals with a characteristic exponential decay time of 3 s while maintaining good sensitivity over a wide range of exponential decay times. These analyses were examined in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully accounted for, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. On the basis of the measured data, the expected supernova signals were simulated. From a comparison between this simulation, the measured data from 2000 to 2003, and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. To identify a supernova, a rate increase with a significance of at least 7.4 standard deviations is required; at this level, the number of expected events from the statistical background is less than one millionth. Nevertheless, one such event was measured.
With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the most recently published AMANDA collaboration result, an upper limit of only 2.6 supernovae per year is obtained. In the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate excess before a notification of the detection of a supernova candidate is sent. This raises the monitored fraction of stars in the Galaxy to 81%, but the false-alarm rate also rises to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and will soon contribute to SNEWS, the worldwide network for the early detection of supernovae.
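A minimal sketch of the significance idea behind such a search, assuming Gaussian-distributed per-module noise rates with means and spreads estimated from buffered data; all names and numbers below are illustrative, not the thesis code:

```python
import numpy as np

def collective_significance(rates, mu, sigma):
    """Significance (in standard deviations) of a joint rate excess
    across optical modules, assuming a common additive signal."""
    rates, mu, sigma = map(np.asarray, (rates, mu, sigma))
    w = 1.0 / sigma**2                       # inverse-variance weights
    delta_mu = np.sum(w * (rates - mu)) / np.sum(w)  # fitted common excess
    err = 1.0 / np.sqrt(np.sum(w))           # uncertainty of that excess
    return delta_mu / err

# Toy example: 600 modules, one 500 ms time bin, small injected excess
rng = np.random.default_rng(0)
mu = rng.uniform(250, 350, size=600)          # per-bin mean counts
sigma = np.sqrt(mu)                           # Poisson-like spread
rates = rng.normal(mu + 2.0, sigma)           # excess of 2 counts/module
print(f"significance: {collective_significance(rates, mu, sigma):.1f} sigma")
```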
Abstract:
In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum-likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the still unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any future IceCube point-source search, was enhanced by the development of a new angular reconstruction method based on a detailed simulation of photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
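A minimal sketch of the likelihood-stacking idea, assuming the standard unbinned point-source form in which n_s signal events are shared among the stacked sources; the arrays and equal source weights are illustrative stand-ins, not the analysis code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(n_s, S, B, N):
    """-log L(n_s). S[i, j] is the signal PDF of event i for source j
    (e.g. Gaussian in angular distance, weighted by energy); B[i] is
    the background PDF of event i; N is the total number of events."""
    S_stacked = S.mean(axis=1)                 # equal-weight stacking
    f = (n_s / N) * S_stacked + (1.0 - n_s / N) * B
    return -np.sum(np.log(f))

# Toy data: 1000 events, 5 stacked sources, uniform background
rng = np.random.default_rng(1)
N = 1000
S = rng.exponential(1.0, size=(N, 5))          # toy signal PDF values
B = np.full(N, 1.0)                            # toy background PDF values
res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 100.0),
                      args=(S, B, N), method="bounded")
print(f"best-fit n_s = {res.x:.1f}")
```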
Abstract:
The Hong-Ou-Mandel (HOM) effect is widely regarded as the quintessential quantum interference phenomenon in optics. In this work we examine how nonlinearity can smear statistical photon bunching in the HOM interferometer. We model both the nonlinearity and a balanced beam splitter with a single two-level system and calculate a finite probability of anti-bunching arising in this geometry. We thus argue that the presence of such nonlinearity would reduce the visibility in the standard HOM setup, offering some explanation for the diminution of the HOM visibility observed in many experiments. We use the same model to show that the nonlinearity affects resonant two-photon propagation through a two-level impurity in a waveguide via a "weak photon blockade" caused by the impossibility of double occupancy, and argue that this effect might be stronger for multi-photon propagation.
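As a point of reference for the ideal linear case, a minimal sketch of HOM statistics at a lossless 50:50 beam splitter; the parametrization by wave-packet overlap is a standard textbook result, not taken from the paper:

```python
# For two photons entering the two ports of a 50:50 beam splitter, the
# coincidence probability is P_cc = (1 - |overlap|^2) / 2: it vanishes
# for perfectly indistinguishable photons (overlap = 1, full bunching)
# and reaches the classical 1/2 for fully distinguishable ones.

def coincidence_probability(overlap):
    """P(coincidence) given the wave-packet overlap in [0, 1]."""
    return 0.5 * (1.0 - overlap**2)

for overlap in (0.0, 0.5, 1.0):
    print(f"overlap={overlap:.1f}  P_cc={coincidence_probability(overlap):.3f}")
```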
Abstract:
Purpose: To describe the ictal technetium-99m ECD SPECT findings in polymicrogyria (PMG) syndromes during epileptic seizures. Methods: We investigated 17 patients with PMG syndromes during presurgical workup, which included long-term video-electroencephalographic (EEG) monitoring, neurological and psychiatric assessments, invasive EEG, and the subtraction of ictal-interictal SPECT coregistered to magnetic resonance imaging (MRI) (SISCOM). Results: The analysis of the PMG cortex using SISCOM revealed intense hyperperfusion in the polymicrogyric lesion during epileptic seizures in all patients. Interestingly, other localizing investigations showed heterogeneous findings. Twelve patients underwent epilepsy surgery: three achieved seizure freedom, five had worthwhile improvement, and four remained unchanged. Conclusions: Our study strongly suggests the involvement of PMG in seizure generation or early propagation. Both conventional ictal single-photon emission computed tomography (SPECT) and SISCOM appeared as the single contributive exam suggesting the localization of the epileptogenic zone. Despite the limited number of resective epilepsy surgeries in our study (n=9), we found a strong prognostic role of SISCOM in predicting surgical outcome. This result may be of great value in the surgical decision of whether the whole or part of the PMG lesion should be resected.
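A minimal sketch of the SISCOM idea, assuming coregistered scans: the interictal SPECT is subtracted from the ictal SPECT, the difference is z-scored and thresholded (commonly around 2 SD), and the surviving voxels are overlaid on the MRI. The arrays and threshold here are illustrative, not a clinical pipeline:

```python
import numpy as np

def siscom(ictal, interictal, threshold=2.0):
    """Return a mask of voxels with significant ictal hyperperfusion."""
    ictal = ictal / ictal.mean()              # global intensity normalization
    interictal = interictal / interictal.mean()
    diff = ictal - interictal
    z = (diff - diff.mean()) / diff.std()     # z-score the difference image
    return z > threshold                      # hyperperfused voxels

rng = np.random.default_rng(2)
interictal = rng.normal(100, 5, size=(64, 64, 64))
ictal = interictal.copy()
ictal[30:34, 30:34, 30:34] += 40              # injected focal hyperperfusion
print("voxels above threshold:", int(siscom(ictal, interictal).sum()))
```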
Abstract:
Two-photon excited (TPE) side-illumination fluorescence studies in a Rh6G-RhB dye-mixture-doped polymer optical fiber (POF), and the effect of energy transfer on the attenuation coefficient, are reported. The dye-doped POF is pumped sideways using 800 nm, 70 fs laser pulses from a Ti:sapphire laser, and the TPE fluorescence emission is collected from the end of the fiber for different propagation distances. The fluorescence intensity of RhB-doped POF is enhanced in the presence of Rh6G as a result of energy transfer from Rh6G to RhB. Because of the reabsorption and reemission processes in the dye molecules, an effective energy transfer is observed from the shorter-wavelength part of the fluorescence spectrum to the longer-wavelength part as the propagation distance in the dye-doped POF is increased. The energy transfer coefficient is found to be higher at shorter propagation distances than at longer ones. The TPE fluorescence signal is used to characterize the optical attenuation coefficient in the dye-doped POF; the attenuation coefficient decreases at longer propagation distances owing to the reabsorption and reemission processes taking place within the fiber.
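A minimal sketch of how such an attenuation coefficient is commonly extracted, assuming simple Beer-Lambert decay of the end-collected intensity with side-pump position z; the numbers are illustrative, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def beer_lambert(z, i0, alpha):
    """I(z) = I0 * exp(-alpha * z): intensity after propagating z."""
    return i0 * np.exp(-alpha * z)

z = np.linspace(5, 50, 10)                    # propagation distance (cm, toy)
rng = np.random.default_rng(3)
intensity = beer_lambert(z, 1000.0, 0.08) * rng.normal(1.0, 0.02, z.size)
(i0, alpha), _ = curve_fit(beer_lambert, z, intensity, p0=(1000.0, 0.1))
print(f"attenuation coefficient alpha = {alpha:.3f} cm^-1")
```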
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of their good adaptation capability, these programs work just as vaccines do against diseases and cannot prevent new infections based on the network state. Here, computer virus propagation dynamics is modeled and related to other notable events occurring in the network, permitting preventive policies to be established for network management. Data on three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using the data collected from other viruses that formerly infected the network.
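A minimal sketch of the autoregressive half of that idea, assuming a least-squares AR fit on one outbreak's time series used to extrapolate its dynamics; the logistic-like toy series stands in for the collected virus statistics:

```python
import numpy as np

def fit_ar(x, order=3):
    """Least-squares AR(order) coefficients for the series x."""
    rows = [x[i:i + order] for i in range(len(x) - order)]
    X, y = np.array(rows), x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(history, coef, steps):
    """Iterate the fitted AR model forward from the last observations."""
    h = list(history[-len(coef):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, h))
        out.append(nxt)
        h = h[1:] + [nxt]
    return out

t = np.arange(100)
series = 1000 / (1 + np.exp(-(t - 50) / 8))   # logistic-like outbreak curve
coef = fit_ar(series, order=3)
print(forecast(series, coef, steps=5))
```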
Abstract:
Compartmental epidemiological models have been developed since the 1920s and successfully applied to study the propagation of infectious diseases. Moreover, owing to their structure, in the 1960s an interesting version of these models was developed to clarify some aspects of rumor propagation, considering that spreading an infectious disease and disseminating information are analogous phenomena. Here, in analogy with the SIR (Susceptible-Infected-Removed) epidemiological model, the ISS (Ignorant-Spreader-Stifler) rumor-spreading model is studied. Using concepts from dynamical systems theory, the stability of the equilibrium points is established according to the propagation parameters and initial conditions. Some numerical experiments are conducted in order to validate the model.
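A minimal sketch of ISS dynamics in the classic Maki-Thompson form, analogous to SIR: ignorants become spreaders on contact with spreaders (rate lam), while spreaders become stiflers on contact with spreaders or stiflers (rate gam). The parameter values are illustrative, not taken from the paper:

```python
from scipy.integrate import solve_ivp

def iss(t, y, lam, gam):
    """ISS rumor model: y = (ignorant, spreader, stifler) fractions."""
    i, s, r = y
    di = -lam * i * s
    ds = lam * i * s - gam * s * (s + r)
    dr = gam * s * (s + r)
    return [di, ds, dr]

sol = solve_ivp(iss, (0, 50), [0.99, 0.01, 0.0], args=(1.0, 0.5))
i_inf, s_inf, r_inf = sol.y[:, -1]
print(f"final fractions: I={i_inf:.2f}, S={s_inf:.2f}, R={r_inf:.2f}")
```

The equilibrium reached (spreaders die out, a residual fraction of ignorants never hears the rumor) is the kind of fixed point whose stability the paper analyzes as a function of the propagation parameters.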
Abstract:
Correlations of charged hadrons of 1 < p_T < 10 GeV/c with high-p_T direct photons and π⁰ mesons in the range 5 < p_T < 15 GeV/c are used to study jet fragmentation in the γ+jet and dijet channels, respectively. The magnitude of the partonic transverse momentum, k_T, is obtained by comparison to a model incorporating a Gaussian k_T smearing. The sensitivity of the associated charged-hadron spectra to the underlying fragmentation function is tested, and the data are compared to calculations using recent global fit results. The shape of the direct-photon-associated hadron spectrum, as well as its charge asymmetry, is found to be consistent with a sample dominated by quark-gluon Compton scattering. No significant evidence of fragmentation-photon correlated production is observed within experimental uncertainties.
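A minimal sketch of Gaussian k_T smearing as a toy model, not the PHENIX analysis code: each hard-scattered pair receives a transverse-momentum kick drawn from a Gaussian, decorrelating the away-side parton from the trigger photon direction; the trigger p_T and width are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 100_000
pt_trig = 7.0                                 # trigger p_T in GeV/c (toy)
sigma = 1.5                                   # Gaussian pair-k_T width (toy)
kx, ky = rng.normal(0.0, sigma, (2, n_pairs))
# Away-side parton: back-to-back with the trigger, then kicked by k_T
dphi = np.arctan2(ky, pt_trig - kx)           # deviation from exact pi
kt = np.hypot(kx, ky)
print(f"<k_T> = {kt.mean():.2f} GeV/c, rms(dphi) = {dphi.std():.3f} rad")
```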
Abstract:
PHENIX has measured the e⁺e⁻ pair continuum in √s_NN = 200 GeV Au+Au and p+p collisions over a wide range of mass and transverse momenta. The e⁺e⁻ yield is compared to the expectations from hadronic sources, based on PHENIX measurements. In the intermediate-mass region, between the masses of the φ and the J/ψ meson, the yield is consistent with expectations from correlated cc̄ production, although other mechanisms are not ruled out. In the low-mass region, below the φ, the p+p inclusive mass spectrum is well described by known contributions from light-meson decays. In contrast, the Au+Au minimum-bias inclusive mass spectrum in this region shows an enhancement by a factor of 4.7 ± 0.4(stat) ± 1.5(syst) ± 0.9(model). At low mass (m_ee < 0.3 GeV/c²) and high p_T (1 < p_T < 5 GeV/c), an enhanced e⁺e⁻ pair yield is observed that is consistent with the production of virtual direct photons. This excess is used to infer the yield of real direct photons. In central Au+Au collisions, the excess of the direct photon yield over p+p is exponential in p_T, with inverse slope T = 221 ± 19(stat) ± 19(syst) MeV. Hydrodynamical models with initial temperatures ranging from T_init ≈ 300-600 MeV at times of 0.6-0.15 fm/c after the collision are in qualitative agreement with the direct photon data in Au+Au. For low p_T < 1 GeV/c, the low-mass region shows a further significant enhancement that increases with centrality and has an inverse slope of T ≈ 100 MeV. Theoretical models underpredict the low-mass, low-p_T enhancement.
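A minimal sketch of how an inverse slope like T = 221 MeV is extracted, assuming a simple exponential form for the excess yield, dN/dp_T ∝ exp(-p_T/T); the numbers are toy values, not PHENIX data:

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(pt, a, T):
    """Exponential spectrum with inverse slope T (GeV)."""
    return a * np.exp(-pt / T)

pt = np.linspace(1.0, 4.0, 7)                 # p_T points in GeV/c (toy)
rng = np.random.default_rng(4)
yield_ = expo(pt, 1.0, 0.221) * rng.normal(1.0, 0.05, pt.size)
(a, T), _ = curve_fit(expo, pt, yield_, p0=(1.0, 0.2))
print(f"inverse slope T = {1000 * T:.0f} MeV")
```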
Abstract:
We report the observation at the Relativistic Heavy Ion Collider of the suppression of back-to-back correlations in the direct photon+jet channel in Au+Au relative to p+p collisions. Two-particle correlations of direct-photon triggers with associated hadrons are obtained by statistical subtraction of the decay photon-hadron (γ-h) background. The initial momentum of the away-side parton is tightly constrained, because the parton-photon pair exactly balances in momentum at leading order in perturbative quantum chromodynamics, making such correlations a powerful probe of in-medium parton energy loss. The away-side nuclear suppression factor, I_AA, in central Au+Au collisions is 0.32 ± 0.12(stat) ± 0.09(syst) for hadrons of 3 < p_T^h < 5 GeV/c in coincidence with photons of 5 < p_T^γ < 15 GeV/c. The suppression is comparable to that observed for high-p_T single hadrons and dihadrons. The direct-photon-associated yields in p+p collisions scale approximately with the momentum balance, z_T ≡ p_T^h/p_T^γ, as expected for a measurement of the away-side parton fragmentation function. We compare to Au+Au collisions, for which the momentum-balance dependence of the nuclear modification should be sensitive to the path-length dependence of parton energy loss.