966 results for Quasi-particle Scattering
Abstract:
Optimal vaccine strategies must be identified to improve T-cell vaccination against infectious and malignant diseases. MelQbG10 is a virus-like nanoparticle loaded with A-type CpG-oligonucleotides (CpG-ODN) and coupled to peptide(16-35) derived from Melan-A/MART-1. In this phase IIa clinical study, four groups of stage III-IV melanoma patients were vaccinated with MelQbG10, given (i) with IFA (Montanide) s.c.; (ii) with IFA s.c. and topical Imiquimod; (iii) i.d. with topical Imiquimod; or (iv) as intralymph node injection. In total, 16/21 (76%) patients generated ex vivo detectable Melan-A/MART-1-specific T-cell responses. T-cell frequencies were significantly higher when IFA was used as adjuvant, resulting in detectable T-cell responses in all (11/11) patients, with predominant generation of effector-memory-phenotype cells. In turn, Imiquimod induced higher proportions of central-memory-phenotype cells and increased percentages of CD127(+) (IL-7R) T cells. Direct injection of MelQbG10 into lymph nodes resulted in lower T-cell frequencies, associated with lower proportions of memory- and effector-phenotype T cells. Swelling of the lymph nodes draining the vaccine site, and increased glucose uptake at PET/CT, were observed in 13/15 (87%) of evaluable patients, reflecting vaccine-triggered immune reactions in the lymph nodes. We conclude that the simultaneous use of both Imiquimod and CpG-ODN induced combined memory and effector CD8(+) T-cell responses.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) of the solution, where generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method, developed in this study, is based on meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has second-order spatial accuracy and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
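The advection step of such a particle method can be sketched in a few lines. The sketch below is a minimal illustration, not the thesis's implementation: it advects a step profile in 1D with constant velocity, letting particles carry their solution values unchanged along the characteristics, and uses linear interpolation (a monotone transfer, standing in for the fast monotone projection described above) to put the solution back on a fixed Eulerian grid. All function names and parameters are hypothetical.

```python
import numpy as np

def particle_transport_step(x_p, u_p, a, dt):
    """Advance particles along the characteristics of u_t + a u_x = 0.
    For pure advection each particle keeps its solution value."""
    return x_p + a * dt, u_p

def project_to_grid(x_p, u_p, x_grid):
    """Transfer particle values onto a fixed Eulerian grid by linear
    interpolation between neighbouring particles; this introduces no
    new extrema, so the transfer is monotone."""
    order = np.argsort(x_p)
    return np.interp(x_grid, x_p[order], u_p[order])

# Advect a step profile: the characteristics preserve the front
# exactly, so no numerical smearing or oscillation is introduced.
x_grid = np.linspace(0.0, 1.0, 101)
x_p = x_grid.copy()
u_p = np.where(x_p < 0.3, 1.0, 0.0)   # step front at x = 0.3
for _ in range(40):                    # total shift: 0.5 * 0.01 * 40 = 0.2
    x_p, u_p = particle_transport_step(x_p, u_p, a=0.5, dt=0.01)
u_grid = project_to_grid(x_p, u_p, x_grid)   # front now near x = 0.5
```

A real implementation would add the spatial adaptivity near the front and the operator-split diffusion and reaction solvers mentioned above; the point of the sketch is only that the Lagrangian step keeps the front sharp.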
Abstract:
In this study, equations for the calculation of erosion wear caused by ash particles on convective heat exchanger tubes of steam boilers are presented. A new, three-dimensional test arrangement was used in the testing of the erosion wear of convective heat exchanger tubes of steam boilers. Using the sleeve method, three different tube materials and three tube constructions could be tested. New results were obtained from the analyses. The main mechanisms of erosion wear phenomena, and erosion wear as a function of collision conditions and material properties, have been studied. Properties of fossil fuels have also been presented. When burning solid fuels, such as pulverized coal and peat, in steam boilers, most of the ash is entrained by the flue gas in the furnace. In bubbling and circulating fluidized bed boilers, particle concentration in the flue gas is high because of bed material entrained in the flue gas. Hard particles, such as sharp-edged quartz crystals, cause erosion wear when colliding with convective heat exchanger tubes and with the rear wall of the steam boiler. The most important ways to reduce erosion wear in steam boilers are to keep the velocity of the flue gas moderate and to prevent channelling of the ash flow in a certain part of the cross-section of the flue gas channel, especially near the back wall. One can do this by constructing the boiler with the following components. Screen plates can be used to make the velocity and ash flow distributions more even across the cross-section of the channel. Shield plates and plate-type constructions in superheaters can also be used. Erosion testing was conducted with three types of tube constructions: a single tube row, an in-line tube bank with six tube rows, and a staggered tube bank with six tube rows. Three flow velocities and two particle concentrations were used in the tests, which were carried out at room temperature. Three particle materials were used: quartz, coal ash and peat ash particles.
Mass loss, diameter loss and wall thickness loss measurements of the test sleeves were taken. Erosion wear as a function of flow conditions, tube material and tube construction was analyzed by single-variable linear regression analysis. In developing the erosion wear calculation equations, multi-variable linear regression analysis was used. In the staggered tube bank, erosion wear reached its maximum in tube row 2, with a local maximum in row 5; in rows 3, 4 and 6 the erosion rate was low. In the in-line tube bank, by contrast, the minimum erosion rate occurred in tube row 2 and erosion increased in the subsequent rows, so that in a six-row tube bank the maximum occurred in row 6.
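The multi-variable linear regression step can be illustrated with a common power-law ansatz for erosion wear, E = k·v^a·c^b, which becomes linear after taking logarithms. The data, the ansatz and all coefficients below are synthetic illustrations, not the equations developed in the study.

```python
import numpy as np

# Hypothetical measurements: flue-gas velocity v (m/s), particle
# concentration c (g/m^3) and measured mass loss E (mg/h).  The
# power law E = k * v**a * c**b is fitted by multi-variable linear
# regression in log space: log E = log k + a*log v + b*log c.
v = np.array([20.0, 30.0, 40.0, 20.0, 30.0, 40.0])
c = np.array([ 5.0,  5.0,  5.0, 10.0, 10.0, 10.0])
E = 0.01 * v**3 * c          # synthetic data generated from the ansatz

# Design matrix with an intercept column, then least-squares fit.
X = np.column_stack([np.ones_like(v), np.log(v), np.log(c)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
log_k, a, b = coef           # recovers a = 3, b = 1, k = 0.01
```

With real sleeve measurements the residuals of this fit would indicate how well a single power law captures the velocity and concentration dependence for a given tube material and bank geometry.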
Abstract:
Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at the European centre for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is overwhelmingly used as the detector material because of its low cost and standard use in the integrated-circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector presented. The fabrication aspects of strip detectors are discussed, starting from the process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e., measurements of the leakage currents and bias resistors, are displayed. The beam test setups and the results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research had found that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors, to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
Abstract:
The particle number and volume distributions, and the removal efficiency for particles and suspended solids, were analysed for different effluents and their filtrates to study whether the filters most commonly used in drip irrigation systems remove the particles that can clog emitters. In most of the effluents and filtrates, the number of particles with diameters above 20 μm was minimal. However, when the particle volume distribution was analysed, the filtrates contained particles larger than the pore size of the disc and mesh filters, the sand filter being the one that retained the largest-diameter particles. The particle-retention efficiency of the filters depended more on the type of effluent than on the filter. It was also verified that the particle number distribution follows a power-law relationship. Analysing the exponent β of the power law showed that the filters did not significantly modify the particle number distribution of the effluents.
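Estimating the power-law exponent β of a particle number distribution, N(d) ∝ d^(−β), reduces to a linear fit in log-log space. The diameters and counts below are synthetic, used only to show the procedure, not data from the study.

```python
import numpy as np

# Synthetic particle counts following N(d) = A * d**(-beta):
d = np.array([2.0, 5.0, 10.0, 20.0, 50.0])   # particle diameter, um
N = 1e6 * d**-2.8                             # counts per unit volume

# log N = log A - beta * log d, so beta is minus the slope of a
# straight-line fit in log-log coordinates.
slope, intercept = np.polyfit(np.log(d), np.log(N), 1)
beta = -slope                                 # recovers 2.8
```

Comparing the β fitted for an effluent with the β fitted for its filtrate is one way to quantify whether a filter changed the shape of the number distribution rather than just the total count.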
Abstract:
SEPServer is a three-year collaborative project funded by the seventh framework programme (FP7-SPACE) of the European Union. The objective of the project is to provide the scientific community with access to state-of-the-art observations and analysis tools for solar energetic particle (SEP) events and related electromagnetic (EM) emissions. The project will eventually lead to a better understanding of the particle acceleration and transport processes at the Sun and in the inner heliosphere. These processes lead to SEP events that form one of the key elements of space weather. In this paper we present the first results from the systematic analysis work performed on the following datasets: SOHO/ERNE, SOHO/EPHIN, ACE/EPAM, Wind/WAVES and GOES X-rays. A catalogue of SEP events at 1 AU, with complete coverage over solar cycle 23, based on high-energy (~68 MeV) protons from SOHO/ERNE and electron recordings of the events by SOHO/EPHIN and ACE/EPAM, is presented. A total of 115 energetic particle events have been identified and analysed using velocity dispersion analysis (VDA) for protons and time-shifting analysis (TSA) for electrons and protons in order to infer the SEP release times at the Sun. EM observations during the times of the SEP event onsets have been gathered and compared to the release time estimates of the particles. Data from those events that occurred during the European day-time, i.e., those that also have observations from the ground-based observatories included in SEPServer, are listed and a preliminary analysis of their associations is presented. We find that VDA results for protons can be a useful tool for the analysis of proton release times, but if the derived proton path length is outside the range 1 AU < s < 3 AU, the result of the analysis may be compromised, as indicated by the anti-correlation of the derived path length and release time delay from the associated X-ray flare.
The average path length derived from VDA is about 1.9 times the nominal length of the spiral magnetic field line. This implies that the path length of first-arriving MeV to deka-MeV protons is affected by interplanetary scattering. TSA of near-relativistic electrons results in a release time that shows significant scatter with respect to the EM emissions but with a trend of being delayed more with increasing distance between the flare and the nominal footpoint of the Earth-connected field line.
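The VDA described above rests on the relation t_onset = t_release + s/v: plotting onset time against inverse particle speed, the intercept estimates the solar release time and the slope the apparent path length s. A minimal sketch with synthetic numbers, not SEPServer catalogue data:

```python
import numpy as np

AU = 1.496e11                 # metres per astronomical unit
c = 2.998e8                   # speed of light, m/s
beta = np.array([0.15, 0.25, 0.35, 0.45])   # v/c of four proton channels
s_true = 1.9 * AU             # path length ~1.9 AU, as quoted in the text
t_release = 0.0               # release time relative to a reference epoch, s

# Modelled onset times at 1 AU: t_onset = t_release + (s/c) * (1/beta).
t_onset = t_release + (s_true / c) / beta

# Linear fit of onset time against 1/beta: the slope gives s/c,
# the intercept the release time.
slope, intercept = np.polyfit(1.0 / beta, t_onset, 1)
path_length_au = slope * c / AU             # recovers ~1.9
release_time = intercept                    # recovers ~0
```

With real onset times the scatter of the points about the fitted line, and a slope corresponding to a path length outside 1-3 AU, are the warning signs of a compromised VDA result mentioned above.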
Abstract:
In two previous papers [J. Differential Equations, 228 (2006), pp. 530-579; Discrete Contin. Dyn. Syst. Ser. B, 6 (2006), pp. 1261-1300] we have developed fast algorithms for the computation of invariant tori in quasi-periodic systems and developed theorems that assess their accuracy. In this paper, we present the results of implementing these algorithms and study their performance in actual implementations. More importantly, we note that, due to the speed of the algorithms and the theoretical developments about their reliability, we can compute with confidence invariant objects close to the breakdown of their hyperbolicity properties. This allows us to identify a mechanism of loss of hyperbolicity and measure some of its quantitative regularities. We find that some systems lose hyperbolicity because the stable and unstable bundles approach each other but the Lyapunov multipliers remain away from 1. We find empirically that, close to the breakdown, the distances between the invariant bundles and the Lyapunov multipliers, which are natural measures of hyperbolicity, depend on the parameters through power laws with universal exponents. We also observe that, even if the rigorous justifications in [J. Differential Equations, 228 (2006), pp. 530-579] are developed only for hyperbolic tori, the algorithms also work for elliptic tori in Hamiltonian systems. We can continue these tori and also compute some bifurcations at resonance which may lead to the existence of hyperbolic tori with nonorientable bundles. We compute manifolds tangent to nonorientable bundles.
Abstract:
We show that the quasifission paths predicted by the one-body dissipation dynamics, in the slowest phase of a binary reaction, follow a quasistatic path, which represents a sequence of states of thermal equilibrium at a fixed value of the deformation coordinate. This establishes the use of the statistical particle-evaporation model in the case of dynamical time-evolving systems. Pre- and post-scission multiplicities of neutrons and total multiplicities of protons and α particles in fission reactions of 63Cu+92Mo, 60Ni+100Mo, 63Cu+100Mo at 10 MeV/u and 20Ne+144,148,154Sm at 20 MeV/u are reproduced reasonably well with statistical model calculations performed along dynamic trajectories whose slow stage (from the most compact configuration up to the point where the neck starts to develop) lasts some 35×10^-21 s.
Abstract:
The deep outer margin of the Gulf of Lions and the adjacent basin, in the western Mediterranean Sea, are regularly impacted by open-ocean convection, a major hydrodynamic event responsible for the ventilation of the deep water in the western Mediterranean Basin. However, the impact of open-ocean convection on the flux and transport of particulate matter remains poorly understood. The variability of water mass properties (i.e., temperature and salinity), currents, and particle fluxes was monitored between September 2007 and April 2009 at five instrumented mooring lines deployed between 2050 and 2350 m depth on the deepest continental margin and adjacent basin. Four of the lines followed a NW-SE transect, while the fifth was located on a sediment wave field to the west. The results from the main, central line SC2350 ("LION"), located at 42°02.5′ N, 4°41′ E at 2350 m depth, show that open-ocean convection reached mid-water depth (~1000 m) during winter 2007-2008, and reached the seabed (~2350 m) during winter 2008-2009. Horizontal currents were unusually strong, with speeds up to 39 cm s−1 during winter 2008-2009. The measurements at all five locations indicate that mid-depth and near-bottom currents and particle fluxes gave relatively consistent values of similar magnitude across the study area, except during winter 2008-2009, when near-bottom fluxes abruptly increased by one to two orders of magnitude. Particulate organic carbon contents, which generally vary between 3 and 5 %, were abnormally low (~1 %) during winter 2008-2009 and approached those observed in surface sediments (0.6 %). Turbidity profiles made in the region demonstrated the existence of a bottom nepheloid layer, several hundred metres thick, related to the resuspension of bottom sediments.
These observations support the view that open-ocean deep convection events in the Gulf of Lions can cause significant remobilization of sediments in the deep outer margin and the basin, with a subsequent alteration of the seabed likely impacting the functioning of the deep-sea ecosystem.
Abstract:
This thesis deals with combinatorics, order theory and descriptive set theory. The first contribution is to the theory of well-quasi-orders (wqo) and better-quasi-orders (bqo). The main result is the proof of a conjecture made by Maurice Pouzet in 1978 in his thèse d'état, which states that any wqo whose ideal completion remainder is bqo is actually bqo. Our proof relies on new results, with both a combinatorial and a topological flavour, concerning maps from a front into a compact metric space. The second contribution is of a more applied nature and deals with topological spaces. We define a quasi-order on the subsets of every second countable T0 topological space in a way that generalises the Wadge quasi-order on the Baire space, while extending its nice properties to virtually all these topological spaces. The Wadge quasi-order of reducibility by continuous functions is wqo on the Borel subsets of the Baire space; this quasi-order is, however, far less satisfactory for other important topological spaces such as the real line, as Hertling, Ikegami and Schlicht notably observed. Some authors have therefore studied reducibility with respect to certain classes of discontinuous functions to remedy this situation. We propose instead to keep continuity but to weaken the notion of function to that of relation. Using the notion of admissible representation studied in Type-2 theory of effectivity, we define the quasi-order of reducibility by relatively continuous relations. We show that this quasi-order both refines the classical hierarchies of complexity and is wqo on the Borel subsets of virtually every second countable T0 space, including every (quasi-)Polish space.
Abstract:
Among unidentified gamma-ray sources in the galactic plane, there are some that present significant variability and have been proposed to be high-mass microquasars. To deepen the study of the possible association between variable low galactic latitude gamma-ray sources and microquasars, we have applied a leptonic jet model based on the microquasar scenario that reproduces the gamma-ray spectrum of three unidentified gamma-ray sources, 3EG J1735-1500, 3EG J1828+0142 and GRO J1411-64, and is consistent with the observational constraints at lower energies. We conclude that if these sources were generated by microquasars, the particle acceleration processes could not be as efficient as in other objects of this type that present harder gamma-ray spectra. Moreover, the dominant mechanism of high-energy emission should be synchrotron self-Compton (SSC) scattering, and the radio jets may only be observed at low frequencies. For each particular case, further predictions of jet physical conditions and variability generation mechanisms have been made in the context of the model. Although there might be other candidates able to explain the emission coming from these sources, microquasars cannot be excluded as counterparts. Observations performed by the next generation of gamma-ray instruments, like GLAST, are required to test the proposed model.