21 results for strip detectors

in Helda - Digital Repository of the University of Helsinki


Relevance: 70.00%

Abstract:

The structure and operation of CdTe, CdZnTe and Si pixel detectors based on crystalline semiconductors, bump bonding and CMOS technology, developed mainly at Oy Simage Ltd. and Oy Ajat Ltd., Finland, for X- and gamma-ray imaging are presented. This detector technology evolved from the development of Si strip detectors at the Finnish Research Institute for High Energy Physics (SEFT), which later merged with other physics research units to form the Helsinki Institute of Physics (HIP). General issues of X-ray imaging are discussed, such as the benefits of direct conversion of X-rays to signal charge compared with the indirect method, and the pros and cons of photon counting versus charge integration. A novel design of Si and CdTe pixel detectors and the analysis of their imaging performance in terms of SNR, MTF, DQE and dynamic range are presented in detail. The analysis shows that directly converting crystalline semiconductor pixel detectors operated in charge integration mode can be used in X-ray imaging very close to the theoretical performance limits in terms of efficiency and resolution. Examples of the application of the developed imaging technology to dental intraoral, panoramic and real-time X-ray imaging are given. A CdTe photon-counting gamma imager is introduced. A physical model for calculating the photopeak efficiency of photon-counting CdTe pixel detectors is developed and described in detail. Simulation results indicate that the charge sharing phenomenon due to diffusion of signal charge carriers limits the pixel size of photon-counting detectors to about 250 μm. Radiation hardness issues related to gamma- and X-ray imaging detectors are discussed.
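
A rough feel for the charge-sharing limit quoted above can be obtained from a back-of-the-envelope diffusion estimate. The sketch below is illustrative only and not taken from the thesis: it assumes a uniform drift field and a Gaussian charge cloud, and the detector thickness, bias voltage and pixel pitches are made-up example values.

```python
# Minimal sketch (not from the thesis): lateral diffusion of signal charge
# during drift in a planar pixel detector, and the fraction of charge that
# spills into a neighbouring pixel. All numerical inputs are illustrative.
import math

KT_OVER_Q = 0.0259  # thermal voltage at ~300 K [V]

def diffusion_sigma(thickness_um: float, bias_v: float) -> float:
    """RMS lateral spread of the charge cloud after drifting the full
    detector thickness, assuming a uniform field and the Einstein relation
    (sigma = d * sqrt(2*kT/(q*V)); the mobility cancels out)."""
    return thickness_um * math.sqrt(2.0 * KT_OVER_Q / bias_v)

def shared_fraction(distance_to_edge_um: float, sigma_um: float) -> float:
    """Fraction of a Gaussian charge cloud collected by the neighbouring
    pixel when the cloud centre is distance_to_edge_um from the boundary."""
    return 0.5 * math.erfc(distance_to_edge_um / (sigma_um * math.sqrt(2.0)))

if __name__ == "__main__":
    sigma = diffusion_sigma(thickness_um=1000.0, bias_v=500.0)   # ~10 um
    print(f"diffusion sigma: {sigma:.1f} um")
    print(f"charge shared by a hit 1 sigma from the boundary: "
          f"{100.0 * shared_fraction(sigma, sigma):.0f}%")
    for pitch in (100.0, 250.0, 500.0):
        # fraction of hit positions lying within 3*sigma of a pixel boundary,
        # i.e. where an appreciable part of the charge leaks to a neighbour
        border_fraction = min(1.0, 2.0 * 3.0 * sigma / pitch)
        print(f"pitch {pitch:5.0f} um: {100.0 * border_fraction:.0f}% of hits "
              f"fall within 3 sigma of a boundary")
```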

Relevance: 70.00%

Abstract:

Silicon strip detectors are fast, cost-effective and have excellent spatial resolution. They are widely used in many high-energy physics experiments. Modern high-energy physics experiments, such as those at the LHC, impose harsh operating conditions on the detectors. The high radiation doses cause the detectors to eventually fail as a result of excessive radiation damage. This has led to a need to study radiation tolerance using various techniques, and at the same time to a need to operate sensors approaching the end of their lifetimes. The goal of this work is to demonstrate that novel detectors can survive the environment foreseen for future high-energy physics experiments. To reach this goal, measurement apparatuses are built, the devices are used to measure the properties of irradiated detectors, the measurement data are analyzed, and conclusions are drawn. Three measurement apparatuses built as part of this work are described: two telescopes measuring the tracks of a particle accelerator beam and one telescope measuring the tracks of cosmic particles. The telescopes comprise layers of reference detectors providing the reference track, slots for the devices under test, the supporting mechanics, electronics, software, and the trigger system. All three devices work, and the differences between them are discussed. The reconstruction of the reference tracks and the analysis of the device under test are presented. Traditionally, silicon detectors have produced a very clear response to the particles being measured. For detectors nearing the end of their lifetimes, this is no longer true. A new method that exploits the reference tracks to form clusters is presented. The method provides less biased results than the traditional analysis, especially when studying the response of heavily irradiated detectors. Means to avoid false results when demonstrating the particle-finding capabilities of a detector are also discussed. The devices and analysis methods are primarily used to study strip detectors made of Magnetic Czochralski silicon. The detectors studied were irradiated to various fluences prior to measurement. The results show that Magnetic Czochralski silicon has good radiation tolerance and is suitable for future high-energy physics experiments.
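
The track-seeded clustering idea can be illustrated with a minimal sketch. The code below is not the thesis implementation: the strip pitch, window size and data layout are assumptions chosen only to show how a cluster is formed around the position predicted by the reference track rather than around the highest strip signal.

```python
# Minimal sketch (illustrative, not the thesis code): sum the signal in a
# fixed window of strips centred on the track prediction, which avoids
# biasing the response of a noisy, heavily irradiated detector.
from typing import Sequence, Tuple

def track_seeded_cluster(strip_signals: Sequence[float],
                         predicted_pos_um: float,
                         pitch_um: float = 80.0,
                         half_window_strips: int = 3) -> Tuple[float, float]:
    """Return (cluster_charge, centroid_um) for the window of strips
    centred on the strip closest to the reference-track prediction."""
    seed = int(round(predicted_pos_um / pitch_um))
    lo = max(0, seed - half_window_strips)
    hi = min(len(strip_signals), seed + half_window_strips + 1)
    window = strip_signals[lo:hi]
    charge = sum(window)
    if charge <= 0.0:
        # no net signal in the window; fall back to the predicted position
        return 0.0, predicted_pos_um
    centroid = sum((lo + i) * pitch_um * q for i, q in enumerate(window)) / charge
    return charge, centroid

if __name__ == "__main__":
    signals = [0.1, -0.2, 0.3, 1.8, 2.5, 1.1, 0.0, 0.2]  # toy ADC counts
    q, x = track_seeded_cluster(signals, predicted_pos_um=330.0)
    print(f"cluster charge {q:.1f}, centroid {x:.1f} um")
```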

Relevance: 60.00%

Abstract:

ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE Inner Tracking System (ITS) consists of six cylindrical layers of silicon detectors using three different technologies; in the outward direction: two layers of pixel detectors, two layers of drift detectors, and two layers of strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 μm in some cases (pixels). The sources of alignment information include survey measurements and reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach; an iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays collected during summer 2008, with the ALICE solenoidal magnet switched off.
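
The iterative local method can be caricatured with a translation-only toy. The sketch below is a strong simplification and not the ALICE code: tracks are not refit between iterations and only one alignment parameter per module is determined, but it shows the basic loop of correcting each module by the mean residual of its hits with respect to the reference tracks.

```python
# Minimal sketch (translation-only, greatly simplified): an iterative local
# alignment loop. Each module is shifted by the mean residual of its hits,
# and the loop is repeated. The data layout is a hypothetical example.
from collections import defaultdict

def iterate_local_alignment(residual_data, n_iterations=10):
    """residual_data: list of (module_id, measured_pos, predicted_pos).
    Returns a dict of alignment corrections per module, in the same
    length unit as the input positions."""
    corrections = defaultdict(float)
    for _ in range(n_iterations):
        sums, counts = defaultdict(float), defaultdict(int)
        for module, measured, predicted in residual_data:
            # residual after applying the current correction to this module
            residual = (measured - corrections[module]) - predicted
            sums[module] += residual
            counts[module] += 1
        for module in sums:
            corrections[module] += sums[module] / counts[module]
    return dict(corrections)

if __name__ == "__main__":
    # toy hits: module "A" is misaligned by +0.030, module "B" by about -0.012
    hits = [("A", 1.031, 1.000), ("A", 2.029, 2.000),
            ("B", 0.988, 1.000), ("B", 1.989, 2.000)]
    print(iterate_local_alignment(hits))
```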

Relevance: 20.00%

Abstract:

The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer -t = p^2 θ^2 up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires the simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 μm), fast response (O(10 ns)) to particles and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 μm, compatible with the requirements of the experiment.
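
The small-angle relation quoted above, |t| = p^2 θ^2, already fixes the orders of magnitude involved. The short sketch below simply evaluates it for the nominal 7 TeV LHC beam momentum and a few illustrative scattering angles; microradian angles indeed give |t| of a few 10^-3 GeV^2.

```python
# Minimal sketch: |t| ~ p^2 * theta^2 for elastic scattering at small angle,
# evaluated for the nominal 7 TeV beam momentum. Angles are illustrative.
BEAM_MOMENTUM_GEV = 7000.0  # nominal LHC proton momentum

def four_momentum_transfer(theta_rad: float,
                           p_gev: float = BEAM_MOMENTUM_GEV) -> float:
    """|t| in GeV^2 for a proton of momentum p scattered at angle theta."""
    return (p_gev * theta_rad) ** 2

if __name__ == "__main__":
    for theta_urad in (5.0, 10.0, 100.0):
        t = four_momentum_transfer(theta_urad * 1e-6)
        print(f"theta = {theta_urad:6.1f} urad  ->  |t| = {t:.2e} GeV^2")
```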

Relevance: 20.00%

Abstract:

By detecting the leading protons produced in the central exclusive diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology needed to cope with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which in addition to radiation hardness up to 5×10^15 neutrons/cm^2 offers properties such as a high signal-to-noise ratio, fast signal response to radiation and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but preserves the properties of the 3D detector design required at the LHC and in other imaging applications.
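
The missing-mass idea can be made concrete with the standard relation for central exclusive production, M_X ≈ sqrt(ξ1 ξ2 s), where ξ1 and ξ2 are the fractional momentum losses of the two leading protons and sqrt(s) is the pp centre-of-mass energy. The sketch below evaluates it for illustrative numbers (14 TeV centre-of-mass energy and made-up ξ values); it is not taken from the thesis analysis.

```python
# Minimal sketch: missing mass from the leading-proton momentum losses,
# M_X ~ sqrt(xi1 * xi2) * sqrt(s). The energy and xi values are illustrative.
import math

def missing_mass_gev(xi1: float, xi2: float, sqrt_s_gev: float = 14000.0) -> float:
    """Central-system mass reconstructed from the two fractional momentum losses."""
    return math.sqrt(xi1 * xi2) * sqrt_s_gev

if __name__ == "__main__":
    # two protons each losing ~0.9% of their momentum -> M_X ~ 126 GeV
    print(f"M_X = {missing_mass_gev(0.009, 0.009):.0f} GeV")
```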

Relevance: 20.00%

Abstract:

Silicon particle detectors are used in several applications and will clearly require better hardness against particle radiation in future large-scale experiments than can be provided today. To achieve this goal, more irradiation studies with defect-generating bombarding particles are needed. Protons can be considered important bombarding species, although neutrons and electrons are perhaps the most widely used particles in such irradiation studies. Protons provide unique possibilities, as their defect production rates are clearly higher than those of neutrons and electrons, and their damage creation in silicon is most similar to that of pions. This thesis explores the development and testing of an irradiation facility that provides cooling of the detector and on-line electrical characterisation, such as current-voltage (IV) and capacitance-voltage (CV) measurements. This irradiation facility, which employs a 5-MV tandem accelerator, appears to function well, but some limitations are related to MeV-proton irradiation of silicon particle detectors. Typically, detectors are in a non-operational mode during irradiation (i.e., without an applied bias voltage). However, in real experiments the detectors are biased; the ionising protons generate electron-hole pairs, and a rise in the proton flux may cause the detector to break down. This limits the proton flux for the irradiation of biased detectors. In this work, it is shown that if detectors are irradiated while kept operational, the electric field decreases the introduction rate of negative space charge and current-related damage. The effects of various particles with different energies are scaled to each other by the non-ionising energy loss (NIEL) hypothesis. The type of defects induced by irradiation depends on the energy used, and this thesis also discusses the minimum proton energy at which NIEL scaling is valid.
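
NIEL scaling itself is a one-line conversion: a measured fluence is multiplied by a hardness factor to give a 1 MeV neutron equivalent fluence. The sketch below shows only this bookkeeping; the hardness-factor value is an illustrative placeholder, not a result of this work.

```python
# Minimal sketch of NIEL scaling: a fluence of bombarding particles is
# converted to a 1 MeV neutron equivalent fluence with a hardness factor
# kappa (the ratio of the particle's NIEL to that of 1 MeV neutrons).
def neutron_equivalent_fluence(fluence_cm2: float, hardness_factor: float) -> float:
    """Phi_eq = kappa * Phi, both in particles/cm^2."""
    return hardness_factor * fluence_cm2

if __name__ == "__main__":
    proton_fluence = 5e14   # measured proton fluence [p/cm^2], example value
    kappa_assumed = 2.0     # placeholder hardness factor, not a thesis result
    phi_eq = neutron_equivalent_fluence(proton_fluence, kappa_assumed)
    print(f"Phi_eq = {phi_eq:.1e} n_eq/cm^2")
```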

Relevance: 10.00%

Abstract:

Buffer zones are vegetated strip edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips dominate and play a leading ecological role in many areas. This thesis focuses on the plant species diversity of the buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. The study was conducted in a watershed area of a farmland landscape of southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune, 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 buffer strips were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness; the relationship between richness and evenness is unpredictable in a small-scale, human-shaped ecosystem. Ordination and correlation analyses show that richness and evenness may result from different ecological processes, and thus should be considered separately. Species richness correlated negatively with the phosphorus content of the soil, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in the soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration (width, length, and area) was investigated to determine which measure is most effective for predicting species richness. More species per unit area increment could be obtained by widening the buffer strip than by lengthening it: the width of the strips is an effective determinant of plant species richness. The increase in species diversity with increasing buffer strip width may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers and has application value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on local and regional scales. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In the regions with low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all the species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity and showed high abundance in the highly connected region. We suggest that these species may play a role in switching the force shaping community structure from local resources to regional connectivity.
On the landscape-context level, the different responses of local species richness and evenness to landscape context were investigated. Seven landscape structural parameters served to indicate landscape context on five scales. On all scales but the smallest, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness, with the highest correlation coefficient for species richness on the second-largest scale. The edge density of arable fields was the only predictor that correlated with species evenness on all scales, showing the highest predictive power on the second-smallest scale. The different predictive power of the factors on different scales revealed a scale-dependent relationship between landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. The local richness of species depends on a regional process on large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the field-edge habitats near the buffer strips. My results suggest some guidelines for conserving species diversity in the agricultural ecosystem. To maintain a high level of species diversity in the strips, a high level of phosphorus in strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variations in plant species diversity. The studies also suggest that the effects of these determinants on species diversity have a complex relationship with scale.
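
For reference, the Shannon-Wiener landscape diversity used above is a simple function of the land-cover proportions. The sketch below computes H' for invented example proportions; the numbers are not study data.

```python
# Minimal sketch: the Shannon-Wiener diversity H' = -sum(p_i * ln p_i) of
# land-cover classes, the landscape-context metric referred to above.
import math

def shannon_wiener(proportions) -> float:
    """H' for a set of class proportions (renormalised to sum to 1)."""
    total = sum(proportions)
    ps = [p / total for p in proportions if p > 0]
    return -sum(p * math.log(p) for p in ps)

if __name__ == "__main__":
    # invented land-cover shares for one landscape window
    covers = {"arable": 0.55, "forest": 0.25, "buffer strip": 0.05,
              "settlement": 0.10, "water": 0.05}
    print(f"H' = {shannon_wiener(covers.values()):.2f}")
```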

Relevance: 10.00%

Abstract:

ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. Altogether, over a million detector strips have made this the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to identify possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing during the learning curve of the project were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of any component. One sensor type among the three sensor manufacturers proved to have lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.

Relevance: 10.00%

Abstract:

The research reported in this thesis dealt with single crystals of thallium bromide (TlBr) grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. Therefore, this study concentrated on developing the manufacturing processes for TlBr detectors and methods of characterisation that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts from TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material and crystals and on detector performance is reviewed. Aging aspects as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge carrier properties of a detector due to additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps. However, for complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised through characterisation of the material and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICPM), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (CV) characterisation, and photoconductivity, as well as direct detector examination.
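
The charge-carrier properties mentioned above are commonly summarised by the μτ product, which the single-carrier Hecht relation links to the charge collection efficiency of a planar detector. The sketch below evaluates that relation for an assumed μτ value, thickness and bias; the numbers are illustrative and not measured results from this work.

```python
# Minimal sketch: the single-carrier Hecht relation, a standard way to relate
# the mu*tau product of a detector material to its charge collection
# efficiency (CCE). The mu*tau and bias values are illustrative assumptions.
import math

def hecht_cce(mu_tau_cm2_per_v: float, thickness_cm: float, bias_v: float) -> float:
    """CCE = (lambda/d) * (1 - exp(-d/lambda)) with lambda = mu*tau*V/d."""
    drift_length = mu_tau_cm2_per_v * bias_v / thickness_cm
    return (drift_length / thickness_cm) * (1.0 - math.exp(-thickness_cm / drift_length))

if __name__ == "__main__":
    # 4 mm thick detector, 400 V bias, assumed electron mu*tau of 1e-3 cm^2/V
    print(f"CCE ~ {hecht_cce(1e-3, 0.4, 400.0):.2f}")
```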

Relevance: 10.00%

Abstract:

The dissertation deals with remote narrowband measurements of the electromagnetic radiation emitted by lightning flashes. A lightning flash consists of a number of sub-processes. The return stroke, which transfers electrical charge from the thundercloud to the ground, is electromagnetically an impulsive wideband process; that is, it emits radiation at most frequencies in the electromagnetic spectrum, but its duration is only some tens of microseconds. Before and after the return stroke, multiple sub-processes redistribute electrical charges within the thundercloud. These sub-processes can last for tens to hundreds of milliseconds, many orders of magnitude longer than the return stroke. Each sub-process causes radiation with specific time-domain characteristics, having maxima at different frequencies. Thus, if the radiation is measured in a single narrow frequency band, it is difficult to identify the sub-processes, and some sub-processes can be missed altogether. However, narrowband detectors are simple to design and miniaturize. In particular, near the high-frequency (HF) band (3 MHz to 30 MHz), ordinary shortwave radios can, in principle, be used as detectors. This dissertation utilizes a prototype detector which is essentially a handheld AM radio receiver. Measurements were made in Scandinavia, and several independent data sources were used to identify lightning sub-processes, as well as the distance to each individual flash. It is shown that multiple sub-processes radiate strongly near the HF band. The return stroke usually radiates intensely, but it cannot be reliably identified from the time-domain signal alone. This means that a narrowband measurement is best used to characterize the energy of the radiation integrated over the whole flash, without attempting to identify individual processes. The dissertation analyzes the conditions under which this integrated energy can be used to estimate the distance to the flash. It is shown that flash-by-flash variations are large, but the integrated energy is very sensitive to changes in the distance, dropping as approximately the inverse cube root of the distance. Flashes can, in principle, be detected at distances of more than 100 km, but since the ground conductivity can vary, ranging accuracy drops dramatically at distances larger than 20 km. These limitations mean that individual flashes cannot be ranged accurately using a single narrowband detector, and the useful range is limited to 30 kilometers at the most. Nevertheless, simple statistical corrections are developed, which enable an accurate estimate of the distance to the closest edge of an active storm cell, as well as its approach speed. The results of the dissertation could therefore have practical applications in real-time short-range lightning detection and warning systems.

Relevance: 10.00%

Abstract:

Boron neutron capture therapy (BNCT) is a form of chemically targeted radiotherapy that utilises the high neutron capture cross-section of the boron-10 isotope to achieve a preferential dose increase in the tumour. BNCT dosimetry poses a special challenge, as the radiation dose absorbed by the irradiated tissues consists of several different dose components. Dosimetry is important because the effect of the radiation on the tissue is correlated with the radiation dose. Consistent and reliable radiation dose delivery and dosimetry are thus basic requirements for radiotherapy. The international recommendations for radiotherapy dosimetry are not directly applicable to BNCT. The existing dosimetry guidance for BNCT provides recommendations but also calls for the investigation of complementary methods for comparison and improved accuracy. In this thesis, the quality assurance and stability measurements of the neutron beam monitors used in dose delivery are presented. The beam monitors were found not to be affected by the presence of a phantom in the beam, and the effect of the reactor core power distribution was less than 1%. The weekly stability test with activation detectors has been generally reproducible within the recommended tolerance value of 2%. An established toolkit for epithermal neutron beams for the determination of the dose components is presented and applied in an international dosimetric intercomparison. The quantities measured by the participating groups (neutron flux, fast-neutron dose and photon dose) were generally in agreement within the stated uncertainties. However, the uncertainties were large, ranging from 3% to 30% (1 standard deviation), emphasising the importance of dosimetric intercomparisons if clinical data are to be compared between different centers. Measurements with the Exradin type 2M ionisation chamber have been repeated in the epithermal neutron beam in the same measurement configuration over the course of 10 years. The presented results exclude the severe sensitivity changes to thermal neutrons that have been reported for this type of chamber. Microdosimetry and polymer gel dosimetry are studied as complementary methods for epithermal neutron beam dosimetry. For microdosimetry, the comparison of results with ionisation chambers and computer simulation showed that the photon dose measured with microdosimetry was lower than with the two other methods, although the disagreement was within the uncertainties. For the neutron dose, the simulation and microdosimetry results agreed within 10%, while the ionisation chamber technique gave 10-30% lower neutron dose rates than the two other methods. The response of the BANG-3 gel was found to be linear for both photon and epithermal neutron beam irradiation. The dose distribution normalised to the dose maximum measured with the MAGIC polymer gel was found to agree well with the simulated result near the dose maximum, while the spatial difference between the measured and simulated 30% isodose lines was more than 1 cm. In both the BANG-3 and MAGIC gel studies, the interpretation of the results was complicated by the presence of high-LET radiation.
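
As background to the dose components discussed above, the total BNCT dose is conventionally reported as a weighted sum of the physical dose components. The sketch below only illustrates that bookkeeping; the component doses and weighting factors are placeholder numbers, not values from this thesis.

```python
# Minimal sketch: in BNCT the total (weighted) dose is usually expressed as
# D_w = w_B*D_B + w_N*D_N + w_fast*D_fast + w_gamma*D_gamma.
# All numbers below are placeholders for illustration only.
def weighted_dose(components: dict, weights: dict) -> float:
    """Sum of physical dose components [Gy] times their weighting factors."""
    return sum(weights[name] * dose for name, dose in components.items())

if __name__ == "__main__":
    doses = {"boron": 5.0, "nitrogen": 0.8, "fast_neutron": 0.5, "photon": 1.2}
    weights = {"boron": 1.3, "nitrogen": 3.2, "fast_neutron": 3.2, "photon": 1.0}
    print(f"D_w = {weighted_dose(doses, weights):.1f} Gy (weighted)")
```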

Relevance: 10.00%

Abstract:

Solar flares were first observed visually in white light by Richard Carrington in England in 1859. Since then, these eruptions in the solar corona have intrigued scientists. It is known that flares influence the space weather experienced by the planets in a multitude of ways, for example by causing aurora borealis. Understanding flares is central to human survival in space, as astronauts cannot survive high doses of the highly energetic particles associated with large flares without contracting serious radiation disease symptoms, unless they shield themselves effectively during space missions. Flares may have been central to survival in the past as well: it has been suggested that giant flares might have played a role in exterminating many of the large species on Earth, including the dinosaurs. That said, prebiotic synthesis studies have shown lightning to be a decisive requirement for amino acid synthesis on the primordial Earth, and increased lightning activity could be attributed to space weather and flares. This thesis studies flares in two domains: the spectral and the spatial. We have extracted solar spectra for the same flares using three different instruments, namely GOES (Geostationary Operational Environmental Satellite), RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) and XSM (X-ray Solar Monitor). The GOES spectra are low resolution, obtained with a gas proportional counter; the RHESSI spectra are higher resolution, obtained with germanium detectors; and the XSM spectra are very high resolution, observed with a silicon detector. It turns out that the detector technology and response substantially influence the spectra we see, and are important for understanding what conclusions to draw from the data. With imaging data there was no such luxury of choice available, and we used RHESSI imaging data to observe the spatial size of solar flares. The present work focuses primarily on current solar flares. However, we did make use of our improved understanding of solar flares to observe young suns in NGC 2547. The same techniques used with solar monitors were applied with XMM-Newton, a stellar X-ray monitor, and coupled with ground-based H-alpha observations these techniques yielded estimates for flare parameters in young suns. The material in this thesis is therefore structured from technology to application, covering the full processing path from raw data and detector responses to concrete physical parameter results, such as the first measurement of the length of plasma flare loops in young suns.

Relevance: 10.00%

Abstract:

The first observations of solar X-rays date back to the late 1940s. To observe solar X-rays, the instruments have to be lifted above the Earth's atmosphere, since almost all high-energy radiation from space is attenuated by it. This is a good thing for all living creatures, but bad for X-ray astronomers. Detectors observing X-ray emission from space must be placed on board satellites, which makes this particular discipline of astronomy technologically and operationally demanding, as well as very expensive. In this thesis, I have focused on detectors dedicated to observing solar X-rays in the energy range 1-20 keV. The purpose of these detectors was to measure solar X-rays simultaneously with another X-ray spectrometer measuring fluorescence X-ray emission from the surface of the Moon. The X-ray fluorescence emission is induced by the primary solar X-rays; if the elemental abundances on the Moon are to be determined with fluorescence analysis methods, the shape and intensity of the simultaneous solar X-ray spectrum must be known. The aim of this thesis is to describe the characterization and operation of our X-ray instruments on board two Moon missions, SMART-1 and Chandrayaan-1, as well as the independent solar science performance of these two nearly identical X-ray spectrometers. The detectors have two features in common: firstly, the primary detection element is made of a single-crystal silicon diode; secondly, the field of view is circular and very large. The data obtained from these detectors are spectra with a 16-second time resolution. Before launching an instrument into space, its performance must be characterized by ground calibrations. The basic operation of these detectors and their ground calibrations are described in detail. Two C-class flares are analyzed as examples to introduce the spectral fitting process. The first analysis shows the fit of a single spectrum of a C1 flare obtained during the peak phase. The other example shows how to derive the time evolution of the fluxes, emission measures (EM) and temperatures through a whole C4 flare with a time resolution of 16 s. The preparatory data analysis procedures required for the spectral fittings are also introduced in detail. A new solar monitor design equipped with concentrator optics and a moderately sized field of view is also introduced.
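
The spectral fitting step can be sketched in a few lines. The example below is not the actual analysis chain: it replaces the full spectral model with a simplified isothermal bremsstrahlung continuum, F(E) = A*exp(-E/kT)/(E*sqrt(kT)), and fits it to a synthetic noisy spectrum to recover a temperature and a normalisation that stands in for the emission measure.

```python
# Minimal sketch: fit an isothermal model to a (synthetic) solar X-ray
# spectrum to extract a temperature and a normalisation. The simplified
# continuum and all numbers are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def isothermal_continuum(energy_kev, norm, kt_kev):
    """Simplified free-free continuum; norm absorbs EM and constants."""
    return norm * np.exp(-energy_kev / kt_kev) / (energy_kev * np.sqrt(kt_kev))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    energies = np.linspace(1.5, 15.0, 60)                         # keV
    truth = isothermal_continuum(energies, norm=200.0, kt_kev=1.0)
    observed = truth * rng.normal(1.0, 0.05, size=energies.size)  # 5% noise
    popt, _ = curve_fit(isothermal_continuum, energies, observed, p0=[100.0, 0.8])
    print(f"fitted norm = {popt[0]:.1f}, kT = {popt[1]:.2f} keV "
          f"(~{popt[1] * 11.6:.1f} MK)")
```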

Relevance: 10.00%

Abstract:

This article discusses the physics programme of the TOTEM experiment at the LHC. A new special beam optics with beta* = 90 m, enabling the measurements of the total cross-section, elastic pp scattering and diffractive phenomena already in early LHC runs, is explained. For this and the various other TOTEM running scenarios, the acceptances of the leading-proton detectors and of the forward tracking stations for some physics processes are described.

Relevance: 10.00%

Abstract:

This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis, which allows the most promising compounds to be rapidly identified and then analysed. Nowadays, the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods such as Caco-2 cells and immobilized artificial membrane chromatography for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of the alkyl chain length, from methyl to pentyl, on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation. Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was used for Caco-2 samples in which the absorption of ten chemically and physiologically different compounds was screened using both single-compound and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis, and a different mass spectrometric scanning mode of the triple quadrupole instrument was used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and proper use, is especially important.