319 results for E E-ANNIHILATION


Relevance:

10.00%

Publisher:

Abstract:

The goal of the AEgIS experiment is to measure the gravitational acceleration of antihydrogen – the simplest atom consisting entirely of antimatter – with an ultimate precision of 1%. We plan to verify the Weak Equivalence Principle (WEP), one of the fundamental laws of nature, with an antimatter beam. The experiment consists of a positron accumulator, an antiproton trap and a Stark accelerator in a solenoidal magnetic field to form and accelerate a pulsed beam of antihydrogen atoms towards a free-fall detector. The antihydrogen beam passes through a moiré deflectometer to measure the vertical displacement due to the gravitational force. A position- and time-sensitive hybrid detector registers the annihilation points of the antihydrogen atoms and their time of flight. The detection principle has been successfully tested with antiprotons and a miniature moiré deflectometer coupled to a nuclear emulsion detector.
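As a rough illustration of the measurement principle, the sketch below (not from the experiment's own analysis) infers g by least squares from simulated annihilation positions and times of flight, using the simple free-fall relation Δy = ½gt² in place of the full moiré fringe-shift analysis; all numbers and names are illustrative assumptions.

import numpy as np

# Minimal sketch: estimate g from annihilation data, assuming each atom's
# vertical deflection follows simple free fall, dy = 0.5 * g * t**2.
# (The real moire-deflectometer analysis works with the fringe-pattern
# phase shift; this shows only the underlying kinematic idea.)

rng = np.random.default_rng(0)

g_true = 9.81                                # m/s^2, used here only to fake data
t = rng.uniform(0.5e-3, 2e-3, 1000)          # time of flight per atom [s] (hypothetical)
dy = 0.5 * g_true * t**2 + rng.normal(0, 1e-6, t.size)  # measured deflection [m]

# Least-squares fit of dy = 0.5 * g * t^2  ->  g = sum(dy * t^2) / (0.5 * sum(t^4))
g_est = np.sum(dy * t**2) / (0.5 * np.sum(t**4))
print(f"estimated g = {g_est:.2f} m/s^2")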

Relevance:

10.00%

Publisher:

Abstract:

Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the well-defined ion range, resulting in a highly conformal dose in the target volume, while the dose to critical organs is reduced compared to photon therapy. The need to monitor the delivery precision, i.e. the ion range, is unquestionable; different approaches have therefore been investigated, such as the detection of prompt photons or of annihilation photons from positron-emitting nuclei created during the treatment. Based on the measurement of the induced β+ activity, our group has developed various in-beam PET prototypes: the one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm². A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. A preliminary data taking was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range 93–112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulations based on the Monte Carlo FLUKA package.
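A minimal sketch of one common way such profiles are compared, assuming the range agreement is read off the distal 50% falloff of each one-dimensional activity profile; the function name, profile shapes and parameters below are hypothetical, not taken from the prototype's software.

import numpy as np

def distal_falloff_position(depth, activity, fraction=0.5):
    """Depth at which the activity profile drops to `fraction` of its
    maximum on the distal (far) side, found by linear interpolation."""
    level = fraction * activity.max()
    i_max = int(np.argmax(activity))
    distal = activity[i_max:]
    below = np.nonzero(distal < level)[0]   # first sample past the peak below the level
    if below.size == 0:
        raise ValueError("profile never falls below the requested level")
    j = i_max + below[0]
    x0, x1 = depth[j - 1], depth[j]         # interpolate between samples j-1 and j
    y0, y1 = activity[j - 1], activity[j]
    return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical 1-D profiles (e.g. measured vs FLUKA-simulated), depth in mm:
depth = np.linspace(0, 120, 241)
measured  = np.exp(-0.5 * ((depth - 80) / 12) ** 2)
simulated = np.exp(-0.5 * ((depth - 78) / 12) ** 2)

shift = distal_falloff_position(depth, measured) - distal_falloff_position(depth, simulated)
print(f"distal 50% falloff shift: {shift:.2f} mm")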

Relevance:

10.00%

Publisher:

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or possibly by dark matter annihilation processes. The γ rays coming from these objects, and their characteristics, provide valuable information with which scientists try to understand the physical processes occurring in them and to develop theoretical models that describe them faithfully. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the surface (otherwise the Earth would be uninhabitable). There are therefore only two ways to observe γ rays: with detectors on board satellites, or by observing the secondary effects γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate more secondary particles, each time less energetic. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish glow known as Cherenkov radiation for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov radiation and even to image the shape of the Cherenkov shower. From these images it is possible to recover the main characteristics of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the object that emitted them, hundreds of light-years away. Detecting Cherenkov showers produced by γ rays is, however, far from easy. Showers generated by low-energy γ photons emit few photons, and only for a few nanoseconds, while those corresponding to high-energy γ rays, although they produce more electrons and last longer, become increasingly unlikely with energy. This leads to two development lines for Cherenkov telescopes: to observe low-energy showers, large reflectors are needed to recover many of the few photons these showers carry; conversely, high-energy showers can be detected with small telescopes, but it is advantageous to cover a large area on the ground with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, intends to build an observatory in each hemisphere, each with 4 large telescopes (LSTs), around 30 medium ones (MSTs) and up to 70 small ones (SSTs). With such an array, two goals will be achieved. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analyzed with much greater precision thanks to stereoscopic techniques.

This thesis gathers several technical developments contributed to the medium and large telescopes of CTA, specifically to the trigger system. Since Cherenkov showers are so brief, the systems that digitize and read out the data of each pixel must operate at very high frequencies (≈ 1 GHz), which makes continuous operation unfeasible, as the amount of stored data would be unmanageable. Instead, the analog signals are sampled and the samples are kept in a circular buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision of whether the image deserves to be saved rests on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike NSB (night sky background) photons, which arrive randomly. To detect large showers it is enough to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is more convenient to take into account how many photons have been detected in each pixel (a technique known as sumtrigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically sums the signals received by each pixel in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system allows trigger regions of selectable size of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any excess of light in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is distributed throughout the camera through a delicate distribution system, so that all clusters are read out at the same time regardless of their position in the camera; the trigger system thus saves a complete camera image every time the photon threshold is exceeded in a trigger region. This way of operating has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels with no information at all are stored; with as many telescopes as in CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger saves only a few nanoseconds around the trigger instant, whereas large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed: the high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. This avoids storing information from empty pixels, and the fixed images of the showers become small "videos" representing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5.

An important problem affecting sumtrigger schemes such as the one presented in this thesis is that, to sum the signals coming from each pixel properly, they must all take the same time to reach the adder. The photomultipliers used in each pixel introduce different delays that must be compensated for the sums to be performed correctly. The effect of these delays has been studied, and a system has been developed to compensate them. Finally, the next level of the trigger system for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with other inter-system interface functions, has been implemented in a system called the Trigger Interface Board (TIB). It consists of a module mounted on the camera of each LST or MST and connected by optical fibers to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are searched for; if the trigger condition is fulfilled, the camera in question is read out, synchronized with the local trigger. Although the whole trigger system is the fruit of a collaboration among several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 camera trigger and the Trigger Interface Board, the two systems for which the author was the lead engineer; for this reason, abundant technical information on these systems has been included. There are currently important future development lines regarding both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements on the current designs over the coming years and will hopefully benefit the whole scientific community participating in CTA.
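To make the camera-trigger logic concrete, here is a minimal digital emulation of the sumtrigger decision and of the two-threshold COLIBRI-style readout described above; the array shapes, region layout and threshold values are illustrative assumptions, and the real system performs the sums in analog electronics.

import numpy as np

def sum_trigger(pixels, regions, threshold_pe):
    """Return indices of trigger regions whose summed signal (in
    photoelectrons, p.e.) exceeds the threshold."""
    sums = np.array([pixels[r].sum() for r in regions])
    return np.nonzero(sums > threshold_pe)[0]

def colibri_readout(pixels, regions, high_pe, low_pe):
    """Two-threshold scheme: the high threshold decides whether the event
    is kept at all; if so, only regions above the low threshold are read
    out, for a longer time."""
    if sum_trigger(pixels, regions, high_pe).size == 0:
        return []                                     # no event: buffer overwritten
    return list(sum_trigger(pixels, regions, low_pe))  # regions to read out

# Toy camera: 70 pixels, 10 overlapping regions of 14 pixels each.
rng = np.random.default_rng(1)
pixels = rng.poisson(1.0, 70).astype(float)           # NSB-like background
pixels[20:30] += 8.0                                  # injected shower-like excess
regions = [np.arange(i, i + 14) % 70 for i in range(0, 70, 7)]

print("regions to read:", colibri_readout(pixels, regions, high_pe=60, low_pe=30))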
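Similarly, the inter-telescope coincidence search performed by the TIB can be sketched as follows; the timestamps, the single-offset delay model and the coincidence window are hypothetical placeholders for the fiber- and pointing-dependent compensation described above.

def stereo_coincidences(local_times, neighbor_times, neighbor_delay, window_ns=50.0):
    """Return local trigger times (ns) that have at least one neighbor
    trigger within +/- window_ns after subtracting the known delay."""
    corrected = sorted(t - neighbor_delay for t in neighbor_times)
    kept = []
    for t in local_times:
        # a linear scan is fine for a sketch; the real system works online
        if any(abs(t - tn) <= window_ns for tn in corrected):
            kept.append(t)
    return kept

local = [1000.0, 5000.0, 9000.0]        # ns, local camera triggers
neighbor = [1220.0, 7000.0]             # ns, raw neighbor triggers
print(stereo_coincidences(local, neighbor, neighbor_delay=200.0))  # -> [1000.0]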

Relevance:

10.00%

Publisher:

Abstract:

This work explores the idea of monumentality, and the design and construction of specific monuments, at the end of World War II, paying particular attention to the attempt of the Modern Movement to enter a field that had until then been foreign to it. On the understanding that a monument is above all a device for memory, and drawing on the theories of sociologists such as Émile Durkheim, Maurice Halbwachs, Jan Assmann and Iwona Irwin-Zarecka, the thesis sets out to explain the role monuments play in the creation of a collective memory which, unlike history, is a selective compilation of past events whose purpose is to secure and celebrate the permanence of the social group. It also analyzes the role of the monument as an element of stability in the urban landscape that naturally generates the attachment of citizens, since it forms a prominent part of the spatial frame in which their lives have unfolded. These two facets explain the need for monuments felt by any social group, and why wars, events that endanger the structure and even the very life of the group, generate a special tendency to build monuments to ward off the danger to which the group has been exposed.

The reasons why the commemoration of World War II became especially problematic are then explained. Chief among them is the disappearance of the boundaries between the front line and the home front, between military and civilian targets; secondly, the depersonalization of warfare as a consequence of technology; thirdly, the role of the mass media, which for the first time in history covered a war extensively, offering instantaneous images of such power and with such an aura of reality that the conventional monument could not compete; fourthly, the beginning of the atomic age, which confronted humanity for the first time with the possibility of its total destruction; and finally the experience of the Holocaust, an annihilation devoid of purpose and ideology, which made use of scientific progress to gain efficiency and laid bare how technology could be manipulated to serve particular interests. In response to this difficulty of commemoration, two hitherto marginal formulas became popular and can be considered characteristic of the period. One was the living memorial, which sought to offer a constructive reading of the war by hosting practical functions of a democratic, cultural or sporting nature, presented as the fruits for which the war had been fought. It was in this formula that the Modern Movement found the opportunity to tackle new projects, in which function was present but was not the determining ingredient, which would demand an enrichment of its language in order to respond to the emotional dimension of the monument. And while there are modern buildings of this period that we can justly describe as monuments, the displacement of the theoretical debate towards stylistic and expressive questions considerably limited the clarity of earlier statements and the possibility of consensus.

Among the living memorials, the headquarters of the United Nations Organization and its agencies represented the greatest hope of the Modern Movement to build an authentic monument. However, the group-based design process, with its attendant clashes of personality, the lack of projection of the buildings onto the adjacent urban space, and above all the discredit that the institutions themselves began to suffer with the onset of the Cold War, thwarted this possibility. The second commemorative formula was the warning monument or Mahnmal, which renounces any trace of heroism or romanticism and concentrates simply on warning of the risks that war entails. This formula was applied mainly in the defeated countries, and generally not on their own initiative but as an imposition of the victors, who in some way seized the opportunity to do some soul-searching far from the public opinion of their own countries.

Relevance:

10.00%

Publisher:

Abstract:

The War has brought us into a close relation with Armenia. The annihilation of her people in 1915 and 1916 aroused universal sympathy. For most of us, Armenia had hitherto been relegated to a position of partial obscurity in the Near East. We were acquainted with the fact that her people had suffered persecution before at Turkish hands, but an indifference born of unfamiliarity with her history, customs and people still continued with us. However much the gulf separating us from these people has been narrowed by the war, it is only by an actual journey into their life, past and present, that we can ever come to a full appreciation of a people who, despite persecution and oppression, are potentially fine citizens.

Relevance:

10.00%

Publisher:

Abstract:

Pulsed coherent excitation of a two-level atom strongly coupled to a resonant cavity mode will create a superposition of two coherent states of opposite amplitudes in the field. By choosing proper parameters for the interaction time and pulse shape, the field after the pulse will be almost disentangled from the atom and can be efficiently outcoupled through cavity decay. The fidelity of the generation approaches unity if the atom-field coupling strength is much larger than the atomic and cavity decay rates. This implies a strong difference between even and odd output photon number counts. Alternatively, the coherence of the two generated field components can be proven by phase-dependent annihilation of the generated nonclassical superposition state by a second pulse.
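The even/odd photon-number signature mentioned above follows from standard cat-state algebra (a textbook identity, not taken from the paper itself). Expanding the coherent states in Fock states gives

\[
|\alpha\rangle \pm |{-\alpha}\rangle
  = e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\,
    \bigl(1 \pm (-1)^{n}\bigr)\,|n\rangle ,
\]

so the "+" superposition contains only even, and the "−" superposition only odd, photon numbers; counting photons in the outcoupled field therefore shows a strong even/odd asymmetry when the state is generated faithfully.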

Relevance:

10.00%

Publisher:

Abstract:

Solutions employing perturbation stiffness or viscous hourglass control with one-point quadrature finite elements often exhibit spurious modes in the intermediate frequency range. These spurious frequencies are demonstrated in several examples and their origin is explained. It is then shown that by critically damping the hourglass modes, these spurious mid-range frequency modes can be suppressed. Estimates of the hourglass frequency and damping coefficients are provided for the plane 4-node quadrilateral and a 4-node shell element. Results are presented that show almost complete annihilation of spurious intermediate-frequency modes for both linear and non-linear problems.
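For reference, "critically damping" a mode can be read in the standard single-degree-of-freedom sense; the notation below is generic, not the element-specific estimates the paper derives. For an hourglass mode governed by

\[
m\,\ddot{q} + c\,\dot{q} + k\,q = 0 , \qquad
\omega_{\mathrm{hg}} = \sqrt{k/m} ,
\]

the critical damping coefficient is

\[
c_{\mathrm{cr}} = 2\,m\,\omega_{\mathrm{hg}} = 2\sqrt{k\,m} ,
\]

and choosing \(c = c_{\mathrm{cr}}\) (damping ratio \(\xi = 1\)) gives the fastest non-oscillatory decay of the mode, which is what suppresses its spurious mid-range frequency.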

Relevance:

10.00%

Publisher:

Abstract:

We review recent progress on the construction of determinant representations of correlation functions for integrable supersymmetric fermion models. The factorizing F-matrices (the so-called F-basis) play an important role in the construction. In the F-basis, the creation and annihilation operators and the Bethe states of the integrable models take completely symmetric forms. This leads to determinant representations of the scalar products of the Bethe states for these models, from which determinant representations of the correlation functions may be obtained. As an example, in this review we give the determinant representation of the two-point correlation function for the U_q(gl(2|1)) (i.e. q-deformed) supersymmetric t-J model. The determinant representations are useful for analyzing the physical properties of integrable models in the thermodynamic limit.

Relevance:

10.00%

Publisher:

Abstract:

The Presbyterian Protestant missionaries who came to Brazil at the beginning of the second half of the nineteenth century brought a Calvinist interpretation of the Bible, remaining faithful to a Princetonian formation that had achieved a synthesis between Calvinist orthodoxy and Pietism. These Princetonians took as their epistemological basis the philosophy of Thomas Reid, known as Common Sense Realism. This philosophy is used as a Reformed, or Calvinist, epistemology; in its Scottish formation, and consequently in its American one via Princeton, it is understood as Providential Epistemology. Thus, when it was assimilated by Brazilians through preaching and theological training, it became part of the Brazilian Presbyterian profile as a philosophical doctrine. Common Sense philosophy arose as a critique of the empiricist philosophy of David Hume which, for Reid, would converge toward a possible annihilation of religion and a pessimistic view of science, affecting empiricism and thereby leading to a new formulation closer to skepticism. Reid therefore formulated a philosophy that, in his view, stood against Locke and Berkeley and later David Hume, affirming that reality is independent of our apprehension of it. That is, in the perception of the external world there is no interference of the knowing subject upon the object of knowledge; our relation to objects is direct and must not be distorted by intermediations. In the implantation of Protestantism in Brazil by the Princeton missionaries, there was no intransigent defense of Calvinist principles by missionaries such as Fletcher and Simonton, but rather a continuation of the reading of the sacred scriptures through the Calvinist lens, as was done at Princeton Seminary. There was no marked emphasis on the defense of orthodoxy, because the theme of theological liberalism, or of the conflict between modernism and fundamentalism, was not pressing in the local conjuncture, where practical concerns of evangelization predominated. The concepts of Common Sense philosophy were close to the mitigated empiricism of Silvestre Pinheiro and the Eclecticism of Victor Cousin. Hence, in Brazil, the use of Common Sense philosophy is most visible in the debates among intellectuals, at three interesting points: 1) Common Sense remained restricted to the academic sphere, in the training of new pastors, with the works of Charles Hodge and A. A. Hodge as the main sources for implanting this mentality that ratified religious experience, and in this way it shaped the face of Protestantism among Presbyterians, one of the main Protestant denominations of the late nineteenth century; 2) in the debates between Catholic and Protestant clergy in theological polemics; 3) in the utilitarian use of foreign cultural assimilation by national Protestants, facilitated not least by the sympathy of Brazilian liberals for Protestantism, even as they maintained a philosophical line closer to mitigated empiricism and eclecticism. Our hypothesis thus seeks to demonstrate that the Protestants brought with them epistemological formulations that were passed on to a group of intellectuals who formed the body of the first Presbyterian pastors in the history of this denomination.
They were converted and assimilated the new doctrines better through more than simple preaching, namely through a philosophical way of approaching the objects studied; this came by way of the epistemological basis of Common Sense Realism, which found room within the Brazilian republican ideals of the nineteenth century.

Relevance:

10.00%

Publisher:

Abstract:

We develop a multi-agent based model to simulate a population comprising two ethnic groups and a peacekeeping force. We investigate the effects of different civilian movement strategies on the resulting violence in this bi-communal population. Specifically, we compare and contrast random and race-based migration strategies. Race-based migration leads to the formation of clusters. Previous work in this area has shown that same-race clustering instigates violent behavior in otherwise passive segments of the population. Our findings confirm this. Furthermore, we show that in settings where only one of the two races adopts race-based migration, it is a winning strategy, especially in violently predisposed populations. On the other hand, in relatively peaceful settings clustering is a restricting factor which causes the race that adopts it to drift into annihilation. Finally, we show that when race-based migration is adopted by both ethnic groups, it results in peaceful coexistence even in the most violently predisposed populations.
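A minimal sketch of the two movement strategies, with a Schelling-style clustering rule standing in for the paper's race-based migration; the grid size, neighbourhood and seeding are illustrative assumptions, not the model's actual parameters.

import random

random.seed(0)
SIZE = 20
# grid maps (x, y) -> "A", "B", or None (empty cell)
grid = {(x, y): random.choice(["A", "B", None])
        for x in range(SIZE) for y in range(SIZE)}

def neighbours(cell):
    """Moore neighbourhood on a torus."""
    x, y = cell
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def same_race_fraction(cell, race):
    occupied = [grid[c] for c in neighbours(cell) if grid[c] is not None]
    return sum(r == race for r in occupied) / len(occupied) if occupied else 0.0

def move(cell, strategy):
    """Move the agent at `cell` to an empty cell chosen by its strategy."""
    race = grid[cell]
    empty = [c for c, r in grid.items() if r is None]
    if strategy == "random":
        target = random.choice(empty)
    else:  # "race-based": seek the empty cell with most same-race neighbours
        target = max(empty, key=lambda c: same_race_fraction(c, race))
    grid[cell], grid[target] = None, race

# One illustrative step for one agent of each kind:
move(next(c for c, r in grid.items() if r == "A"), "race-based")
move(next(c for c, r in grid.items() if r == "B"), "random")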


Relevance:

10.00%

Publisher:

Abstract:

During the thirteenth century, a succession of revolts brought about the disappearance of the Almohad Empire and its replacement by regional powers in al-Andalus, the Maghreb and the Maghreb al-Aqsa. Historiography has presented the emergence of, and struggle between, these powers as a social, political and even cultural and religious phenomenon, by which their annihilation or marginalization has been explained. This work seeks to contextualize these events from an environmental perspective, so that the disintegration of the Almohad caliphate, the emergence of those powers and the advance of the Christian kingdoms in the Iberian Peninsula can be understood within a global picture of climate change and a possible agricultural crisis.

Relevance:

10.00%

Publisher:

Abstract:

Taking as its starting point an intertextual reading of Caim, by José Saramago, and Gaspard, Melchior et Balthazar, by Michel Tournier, viewed in the light of the Carnivalization theory of the Russian philosopher Mikhail Bakhtin, this study seeks to establish affinities and antinomies that may figure in the theme of the subversive revisiting of biblical myths. It thus aims to draw conclusions about the relevance of the dialogue carried out by the two authors around official truth and its reinterpretation, however insolent; the validity of the journey of initiation, as balm, via crucis and quest for absolute Plenitude, undertaken by the heroes and anti-heroes; the grotesque in the carnivalesque representation of the body and of earthly life; the presence of a narrative discourse that makes use of a subversive language in which the grotesque, irony and/or parody are perceptible; the moral values and social ethics to be preserved versus fierce criticism of established power; the fall of Man and the confrontation with God and his unforgivable "human" limitations; and the presence of the binomial divine entity/Mephistophelian entity and the way the various narrative voices are articulated. At the same time, it seeks to show that the texts of José Saramago and Michel Tournier, although immersed in the unmasking, "profanation" and apparent nihilism of the biblical text, in the desacralization of an official cosmos and in the adoption of laughter and impertinence as catharsis, far from bringing about the annihilation of the myth, contribute instead to a fruitful rereading, rich in a plurality of meanings, in which the Transcendent, as opposed to the disillusion lavished by earthly life, is revitalized; authors, narrators and readers, placed in polyphonic dialogue, intend not the reiteration of pernicious modes of "being" and "being-in-the-world", but a lucid and pure gaze upon the future of Humanity.

Relevance:

10.00%

Publisher:

Abstract:

The author of the article considers the growing role of technology in contemporary society in the context of its role in creating identity and shaping daily life. Various aspects of Ellul's conception of technological determinism are reconstructed, including the problem (or lack) of freedom in relationships between the individual and technology, as well as the problem of the annihilation of time and space. The main problem of the article is analysed at both the micro and the macro levels, considering the possible contradictions in the use of modern technological devices.