34 results for Single hard diffraction
in Helda - Digital Repository of University of Helsinki
Abstract:
By detecting the leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments the standard methods for new particle searches at the Large Hadron Collider (LHC) in a model-independent way, and it will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process enables precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and of the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology needed to cope with the extremely demanding experimental environment of the LHC. This is achieved with a 3D silicon detector design which, in addition to radiation hardness up to 5×10^15 neutrons/cm^2, offers a high signal-to-noise ratio, a fast signal response to radiation and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process while conserving the properties of the 3D detector design required at the LHC and in other imaging applications.
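For reference, the missing-mass method relies on standard central exclusive kinematics (not quoted in the abstract): if the two leading protons lose momentum fractions ξ1 and ξ2 at a centre-of-mass energy squared s, then

    M_X^2 \simeq \xi_1 \xi_2 \, s, \qquad \xi_i = \Delta p_i / p_{\text{beam}},

so measuring both proton momenta in the beam-line spectrometers determines the mass of the central system X directly, independently of how X decays.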
Abstract:
At the Tevatron, the total p_bar-p cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total p-p cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the Optical Theorem will be used. The Tevatron experiments have also performed a wide range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots and partly with rapidity-gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity gap survival and exclusive central production by Double Pomeron Exchange.
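In its standard form (not spelled out in the abstract), the luminosity-independent method combines the optical theorem with the measured event rates:

    \sigma_{\text{tot}} = \frac{16\pi}{1+\rho^2} \cdot \frac{(dN_{\text{el}}/dt)|_{t=0}}{N_{\text{el}} + N_{\text{inel}}},

where ρ is the ratio of the real to the imaginary part of the forward elastic amplitude. The luminosity cancels between the numerator and the denominator, which is why the simultaneous measurement of the elastic and inelastic rates suffices.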
Abstract:
The thesis concentrates on two questions: the translation of metaphors in literary texts, and the use of semiotic models and tools in translation studies. The aim of the thesis is to present a semiotic, text-based model designed to ease the translation of metaphors and the analysis of translated metaphors. In the translation of metaphors I will concentrate on the central problem of metaphor translation: in addition to its denotation and connotation, a single metaphor may carry numerous culture- or genre-specific meanings. How can a translator ensure the translation of all the meanings relevant to the text as a whole? I will approach the question from two directions. Umberto Eco's holistic text analysis model provides an opportunity to examine the problematic nature of metaphor translation at the level of the text as a specific entity, while George Lakoff's and Mark Johnson's metaphor research makes it possible to approach the question at the level of individual metaphors. On the semiotic side, the model utilizes Eero Tarasti's existential semiotics, supported by Algirdas Greimas' actant model and Yuri Lotman's theory of cultural semiotics. In the model introduced in the thesis, individual texts are deconstructed through Eco's model into elements. The textual roles and features of these elements are distilled further, through Tarasti's model, into their coexistent meaning levels. The prioritization and analysis of these meaning levels make it possible to weigh the contents and significance of specific metaphors against the needs of the text as a whole. As example texts, I will use Motörhead's hard rock classic Iron Horse/Born to Lose and its translation Rauta-airot by Viikate. I will use the introduced model to analyze the metaphors in the source and target texts, and to consider the transfer of culture-specific elements across languages and cultural borders. In addition, I will use the analysis process to examine the validity of the model introduced in the thesis.
Abstract:
This study analyses British military planning and actions during the Suez Crisis of 1956. It seeks to find military reasons for the changes of concept during the planning and compares these reasons with the tactical doctrines of the time. The thesis takes extensive advantage of military documents preserved in the National Archives, London. In order to deepen the understanding of the exchange of views during the planning process, the private papers of high-ranking military officials have also been consulted. French military documents preserved in the Service Historique de la Défense, Paris, have provided an important point of comparison. The Suez Crisis caught the British armed forces in the middle of a transition phase. The main objective of the armed forces was to establish a credible deterrent against the Soviet Union. However, due to overseas commitments, with the Middle East playing a paramount role because of its economic importance, the armed forces were also compelled to prepare for Limited War and the Cold War. The armed forces were not fully prepared to meet this demand. The Middle Eastern garrison was being re-organised after the withdrawal from the Canal Base, and the concept for a strategic reserve was unimplemented. The tactical doctrines of the time were based on experiences from the Second World War. As a result, the British view of amphibious operations and the subsequent campaigns emphasised careful planning, mastery of the sea and the air, sufficient superiority in numbers and firepower, centralised command and extensive administrative preparations. The British military had realised that Nasser could nationalise the Suez Canal and had prepared an outline plan to meet this contingency. Although the plan was nothing more than a concept, it was accepted as a basis for further planning when the Canal was nationalised at the end of July. This plan was short-lived. The nominated Task Force Commanders shifted the landing site from Port Said to Alexandria because it enabled a faster expansion of the bridgehead. In addition, further operations towards Cairo, the hub of Nasser's power, would be easier to conduct. The operational concept can be described as traditional and was in accordance with the amphibious warfare doctrine. This plan was completely changed at the beginning of September. Apparently, General Charles Keightley, the Commander-in-Chief, and the Chairman of the Chiefs of Staff Committee developed the idea of prolonged aerial operations. The essence of the concept was to break the Egyptian will to resist by attacking the oil facilities, the transportation system and the armed forces. This victory-through-air concept would be supported by carefully planned psychological operations. The concept was in accordance with the Royal Air Force doctrine, which promoted a bomber offensive against selected target categories. General Keightley's plan was accepted despite suspicions at every planning level. The Joint Planning Staff and the Task Force Commanders opposed the concept from beginning to end because of its unpredictability. There was no information suggesting that the bombing would persuade the Egyptians to submit. This problem was worsened by the fact that British intelligence was unable to provide reliable strategic information. The Task Force Commanders, who were responsible for the tactical plans, were not able to change Keightley's mind, but owing to their resistance the concept was expanded to include a traditional amphibious assault on Port Said.
The bombing campaign was never tested, as the Royal Air Force was denied authorisation to destroy the transportation and oil targets. The Chiefs of Staff and General Keightley were too slow to realise that the execution of the plan depended on the determination of the Prime Minister. However, poor health, a lack of American and domestic support and the indecisiveness of the military had ruined Eden's resolve. In the end, a very traditional amphibious assault, which was bound to succeed at the tactical level but fail at the strategic level, was launched against Port Said.
Abstract:
This thesis combines a computational analysis of a comprehensive corpus of Finnish lake names with a theoretical background in cognitive linguistics. The combination results, on the one hand, in a description of the toponymic system and of the processes involved in analogy-based naming and, on the other hand, in some adjustments to Construction Grammar. Finnish lake names are suitable for this kind of study, as they are to a large extent semantically transparent even when relatively old. There is also a large number of them, and they are comprehensively collected in a computer database. The current work starts with an exploratory computational analysis of co-location patterns between different lake names. Such an analysis makes it possible to assess the importance of analogy and patterns in naming. Prior research has suggested that analogy plays an important role, often also in cases where there are other motivations for the name, and the current study confirms this. However, it also appears that naming patterns are very fuzzy and that their nature is somewhat hard to define within an essentially structuralist tradition. For describing toponymic structure and the processes involved in naming, cognitive linguistics presents itself as a promising theoretical basis. The descriptive formalism of Construction Grammar seems especially well suited to the task. However, productivity then becomes a problem: it is not nearly as clear-cut as the latter theory often assumes, and this is even more apparent in names than in more traditional linguistic material. The varying degree of productivity is most naturally described by a prototype-based theory. Such an approach, however, requires some adjustments to Construction Grammar. Based on all this, the thesis proposes a descriptive model where a new name -- or, more generally, a new linguistic expression -- can be formed by conceptual integration from either a single prior example or a construction generalised from a number of different prior ones. The new model accounts nicely for various aspects of naming that are problematic for the traditional description based on analogy and patterns.
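To make the notion of co-location patterns concrete, here is a minimal sketch in Python (hypothetical data and field names; the thesis itself works on the comprehensive Finnish place-name database). It counts how often pairs of lake names occur in the same region, which is the raw material for assessing analogy-based naming:

    from collections import Counter
    from itertools import combinations

    # Hypothetical records: (lake name, region identifier).
    lakes = [
        ("Valkeajärvi", "R1"), ("Mustajärvi", "R1"),
        ("Valkeajärvi", "R2"), ("Pitkäjärvi", "R2"),
        ("Mustajärvi", "R3"), ("Valkeajärvi", "R3"),
    ]

    # Group the names occurring in each region.
    by_region = {}
    for name, region in lakes:
        by_region.setdefault(region, set()).add(name)

    # Count co-occurrences of each unordered pair of names.
    pair_counts = Counter()
    for names in by_region.values():
        for pair in combinations(sorted(names), 2):
            pair_counts[pair] += 1

    # Frequent pairs hint at naming patterns, e.g. opposites such as
    # 'white lake' / 'black lake' named in analogy to each other.
    for pair, n in pair_counts.most_common():
        print(pair, n)

A real analysis would also compare the observed counts against the frequencies expected if the names were distributed independently of one another.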
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
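For context, the effective dose used to compare the CBCT and MSCT scanners is, by the standard ICRP definition (not quoted in the abstract), the tissue-weighted sum of equivalent organ doses:

    E = \sum_T w_T \, H_T,

where H_T is the equivalent dose to tissue T and w_T the corresponding ICRP tissue weighting factor; the phantom measurements provide the H_T values for the irradiated tissues.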
Abstract:
This thesis consists of two parts. In the first part we performed single-molecule force-extension measurements with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the tethers had a persistence length between 30 and 60 nm, whereas the correct value should lie between 40 and 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced the drift of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
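The interpolation formula in question is presumably the standard Marko-Siggia worm-like chain expression (the abstract does not quote it), which relates the stretching force F to the extension x through the persistence length L_p and the contour length L_c:

    F(x) = \frac{k_B T}{L_p} \left[ \frac{1}{4\,(1 - x/L_c)^2} - \frac{1}{4} + \frac{x}{L_c} \right];

fitting measured force-extension curves with this expression yields the two parameters quoted above for each tether.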
Abstract:
The research reported in this thesis dealt with single crystals of thallium bromide grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. Therefore, this study concentrated on developing the manufacturing processes for TlBr detectors and the methods of characterisation that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for the synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts from TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material and crystals and on detector performance is reviewed. Ageing aspects, as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability, are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge-carrier properties of a detector due to the additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps. However, for a complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised through the characterisation of materials and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICPM), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (C-V) characterisation, and photoconductivity, as well as direct detector examination.
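A common way to quantify the charge-carrier properties mentioned above is the single-carrier Hecht relation (standard detector physics, not quoted in the abstract), which gives the charge collection efficiency of a planar detector of thickness d under an electric field E:

    \frac{Q}{Q_0} = \frac{\mu\tau E}{d} \left( 1 - e^{-d/(\mu\tau E)} \right),

where μτ is the carrier mobility-lifetime product; fitting measured pulse heights versus bias voltage yields μτ, which improves as material purity and crystal quality increase.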
Abstract:
For most RNA viruses, the RNA-dependent RNA polymerases (RdRPs) encoded by the virus are responsible for the entire RNA metabolism. Thus, RdRPs are critical components in the viral life cycle. However, it is not fully understood how these important enzymes function during viral replication. Double-stranded RNA (dsRNA) viruses perform the synthesis of their RNA genome within a proteinaceous viral particle containing an RdRP as a minor constituent. The phi6 bacteriophage is the best-studied dsRNA virus, providing an excellent background for studies of its RNA synthesis. The purified recombinant phi6 RdRP is highly active in vitro and possesses both RNA replication and transcription activities. The crystal structure of the phi6 polymerase, solved in complex with a number of ligands, provides a working model for detailed in vitro studies of RNA-dependent RNA polymerization. In this thesis, the primer-independent initiation of the phi6 RdRP was studied in vitro using biochemical and structural methods. A C-terminal, four-amino-acid-long loop protruding into the central cavity of the phi6 RdRP has been suggested to stabilize the incoming nucleotides during initiation complex formation through stacking interactions. A similar structural element has been found in several other viral RdRPs. In this thesis, this so-called initiation platform loop was subjected to site-directed mutagenesis to address its role in initiation. It was found that the initiation mode of the mutants is primer-dependent, requiring either an oligonucleotide primer or a back-priming initiation mechanism for RNA synthesis. The crystal structure of a mutant RdRP with an altered initiation platform revealed a set of contacts important for primer-independent initiation. Since the phi6 RdRP is structurally and functionally homologous to several viral RdRPs, among them the hepatitis C virus RdRP, these results provide further general insight into primer-independent initiation. This study also demonstrates that manganese phasing can be used as a practical tool for solving the structures of large proteins with a bound manganese ion. The phi6 RdRP was used as a case study to obtain phases for crystallographic analysis. Manganese ions are naturally bound to the phi6 RdRP at the palm domain of the enzyme. In a crystallographic experiment, X-ray diffraction data from a phi6 RdRP crystal were collected at a wavelength of 1.89 Å, which corresponds to the K edge of manganese. With these data, an automatically built model of the core region of the protein could be obtained. Finally, in this work the terminal nucleotidyl transferase (TNTase) activity of the phi6 RdRP was documented both in the isolated polymerase and in the viral particle. This is the first time that such an activity has been reported for a polymerase of a dsRNA virus. The phi6 RdRP used uridine triphosphate as the sole substrate in the TNTase reaction but could accept several heterologous templates. The RdRP was able to add one or a few non-templated nucleotides to the 3' end of a single- or double-stranded RNA substrate. Based on the results on particle-mediated TNTase activity and previous structural information on the polymerase, a model for the termination of RNA-dependent RNA synthesis is suggested in this thesis.
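As a consistency check on the phasing wavelength (my arithmetic, not from the abstract), the photon energy corresponding to 1.89 Å follows from the standard conversion E [keV] ≈ 12.398 / λ [Å]:

    E \approx \frac{12.398}{1.89} \approx 6.56\ \text{keV},

which indeed sits at the manganese K absorption edge (about 6.54 keV), where the anomalous signal of the naturally bound Mn ions is maximised.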
Abstract:
Evolutionary genetics incorporates traditional population genetics, studies of the origins of genetic variation by mutation and recombination, and the molecular evolution of genomes. Among the primary forces with the potential to affect genetic variation within and among populations, including those that may lead to adaptation and speciation, are genetic drift, gene flow, mutation and natural selection. The main challenges in uncovering the genetic basis of evolutionary change are to distinguish the adaptive selective forces that have shaped existing DNA sequence variants and to identify the nucleotide differences responsible for the observed phenotypic variation. To understand the effects of these various forces, the interpretation of gene sequence variation has been the principal basis of many evolutionary genetic studies. The main aim of this thesis was to assess different forms of teleost gene sequence polymorphism in evolutionary genetic studies of Atlantic salmon (Salmo salar) and other species. Firstly, the extent to which Darwinian adaptive evolution has affected the coding regions of the growth hormone (GH) gene during teleost evolution was investigated based on sequence data existing in public databases. Secondly, a target gene approach was used to identify within-population variation in the growth hormone 1 (GH1) gene in salmon. Then, a new strategy for single nucleotide polymorphism (SNP) discovery in salmonid fishes was introduced, and, finally, the usefulness of a limited number of SNP markers as molecular tools in several applications of population genetics in Atlantic salmon was assessed. This thesis showed that the gene sequences in databases can be utilized for comparative studies of molecular evolution, and some putative evidence of Darwinian selection during teleost GH evolution was presented. In addition, existing sequence data were exploited to investigate GH1 gene variation within Atlantic salmon populations throughout the species' range. Purifying selection is suggested to be the predominant evolutionary force controlling the genetic variation of this gene in salmon, and some support for gene flow between continents was also observed. The novel approach to SNP discovery in species with duplicated genome fragments introduced here proved to be an effective method, and it may have several applications in evolutionary genetics with different species - e.g. when developing gene-targeted markers to investigate quantitative genetic variation. The thesis also demonstrated that only a few SNPs produced signals highly similar to those of the microsatellite markers in some of the population genetic analyses. This may have useful applications when estimating genetic diversity in genes with a potential role in ecological and conservation issues, or when using hard biological samples in genetic studies, as SNPs can be applied to relatively highly degraded DNA.
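The conventional statistic for detecting such selection in coding sequences (the abstract does not name the method, so this is an inferred standard) is the ratio of nonsynonymous to synonymous substitution rates:

    \omega = \frac{d_N}{d_S},

with ω > 1 indicating Darwinian (positive) selection, ω ≈ 1 neutral evolution, and ω < 1 purifying selection, the regime the thesis finds to dominate GH1 variation in salmon.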
Abstract:
The aim of these studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and study V also with respect to MI location. In study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared insensitive to the Q-wave status, the time elapsed from MI, and the MI or ischemia location. In the face of evolving ischemic injury, the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank. In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, in a single lead or a small number of leads, offers potential for the automated screening of ischemic heart disease, acute ischemia monitoring, guidance of therapeutic decisions, and risk stratification.
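One plausible formalisation of the QT integral, consistent with its description as covering both depolarization and repolarization (the abstract does not give the exact definition), is the time integral of the lead voltage over the whole QRST complex:

    \text{QT integral} = \int_{t_{QRS\ \text{onset}}}^{t_{T\ \text{end}}} V(t)\, dt,

i.e. the net area under the ECG curve from the onset of the QRS complex to the end of the T wave in the chosen lead.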
Abstract:
The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in four-momentum transfer (-t ≈ p^2 θ^2, up to 10 GeV^2) and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires the simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (< 20 µm), fast response (O(10 ns)) to particles and radiation hardness up to 10^14 n/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 µm, compatible with the requirements of the experiment.
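The quoted |t| range and the microradian scattering angles are mutually consistent (my arithmetic, taking the nominal LHC beam momentum of 7 TeV). In the small-angle approximation,

    -t \approx (p\,\theta)^2 = (7000\ \text{GeV} \times 5\times 10^{-6})^2 \approx 1.2\times 10^{-3}\ \text{GeV}^2,

so reaching |t| of a few 10^-3 GeV^2 indeed means detecting protons scattered by only a few microradians, which is what drives the requirement of sensitivity to within tens of microns of the detector's physical edge.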