31 results for effective linear solver
in Helda - Digital Repository of the University of Helsinki
Abstract:
We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. The chiral sector is described using either the linear sigma model or the Nambu-Jona-Lasinio model; we study the phase diagram and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
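As a sketch of how the explicit breaking term enters, in the standard linear sigma model the bare quark mass appears through a term linear in the sigma field (this is the textbook form, not a formula quoted from the thesis):

\[
V(\sigma,\vec{\pi}) = \frac{\lambda}{4}\left(\sigma^2 + \vec{\pi}^{\,2} - v^2\right)^2 - h\,\sigma, \qquad h \propto m_q ,
\]

so that $h = 0$ corresponds to exact chiral symmetry with a genuine phase transition, while $h > 0$ tilts the potential, turns the low-density transition into a crossover, and moves the critical endpoint as $m_q$ is varied.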
Abstract:
Effective processing of powdered particles can facilitate powder handling and result in better drug product performance, which is of great importance in the pharmaceutical industry where the majority of active pharmaceutical ingredients (APIs) are delivered as solid dosage forms. The purpose of this work was to develop a new ultrasound-assisted method for particle surface modification and thin-coating of pharmaceutical powders. The ultrasound was used to produce an aqueous mist with or without a coating agent. By using the proposed technique, it was possible to decrease the interparticular interactions and improve rheological properties of poorly-flowing water-soluble powders by aqueous smoothing of the rough surfaces of irregular particles. In turn, hydrophilic polymer thin-coating of a hydrophobic substance diminished the triboelectrostatic charge transfer and improved the flowability of highly cohesive powder. To determine the coating efficiency of the technique, the bioactive molecule β-galactosidase was layered onto the surface of powdered lactose particles. Enzyme-treated materials were analysed by assaying the quantity of the reaction product generated during enzymatic cleavage of the milk sugar. A near-linear increase in the thickness of the drug layer was obtained during progressive treatment. Using the enzyme coating procedure, it was confirmed that the ultrasound-assisted technique is suitable for processing labile protein materials. In addition, this pre-treatment of milk sugar could be used to improve utilization of lactose-containing formulations for populations suffering from severe lactose intolerance. Furthermore, the applicability of the thin-coating technique for improving homogeneity of low-dose solid dosage forms was shown. The carrier particles coated with API gave rise to uniform distribution of the drug within the powder. 
The mixture remained homogeneous during further tabletting, whereas the reference physical powder mixture was subject to segregation. In conclusion, ultrasound-assisted surface engineering of pharmaceutical powders can be an effective technology for improving the formulation and performance of solid dosage forms such as dry powder inhalers (DPIs) and direct-compression products.
Abstract:
Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need for investigating in vitro ADME (absorption, distribution, metabolism and excretion) properties and recognising unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, which is a widely used in vitro model for small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Secondly, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments.
The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), where both techniques were used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited some good and some bad features in searching for and identifying expected and non-expected metabolites. Although current profiling software is helpful, it is still insufficient. Thus, a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
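For context, the permeability measured in such Caco-2 experiments is conventionally reported as an apparent permeability coefficient, Papp = (dQ/dt)/(A·C0). A minimal sketch of that standard calculation (the function name and the example numbers are illustrative, not taken from this work):

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Apparent permeability Papp = (dQ/dt) / (A * C0).

    dq_dt    -- transport rate across the monolayer (e.g. nmol/s)
    area_cm2 -- filter area of the cell monolayer (cm^2)
    c0       -- initial donor concentration (nmol/cm^3)
    Returns Papp in cm/s.
    """
    return dq_dt / (area_cm2 * c0)

# Example: 0.02 nmol/s across a 1.12 cm^2 monolayer, donor at 100 nmol/cm^3
papp = apparent_permeability(0.02, 1.12, 100.0)  # ~1.8e-4 cm/s
```

Classification thresholds for "high" versus "low" permeability vary between laboratories, which is why Papp values are usually compared against reference compounds run in the same system, as with the reference cocktail above.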
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. 
In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
Abstract:
The effect of temperature on the height growth of Scots pine in the northern boreal zone in Lapland was studied on two different time scales. Intra-annual growth was monitored in four stands over up to four growing seasons using an approximately biweekly measurement interval. Inter-annual growth was studied using growth records representing seven stands and five geographical locations. All the stands were growing on dry to semi-dry heath, a typical site type for pine stands in Finland. The applied methodology is based on applied time-series analysis and multilevel modelling. Intra-annual elongation of the leader shoot correlated with temperature sum accumulation. Height growth ceased when, on average, 41% of the relative temperature sum of the site was achieved (the observed minimum and maximum were 38% and 43%). The relative temperature sum was calculated by dividing the actual temperature sum by the long-term mean of the total annual temperature sum for the site. Our results suggest that annual height growth ceases when a location-specific temperature sum threshold is attained. The positive effect of the mean July temperature of the previous year on annual height increment proved to be very strong at high latitudes. The mean November temperature of the year before the previous one had a statistically significant effect on height increment in the three northernmost stands. The effect of mean monthly precipitation on annual height growth was statistically insignificant. There was a non-linear dependence between the length and needle density of annual shoots. Exceptionally low height growth results in high needle density, but the effect is weaker in years of average or good height growth. Radial growth and the next year's height growth are both largely controlled by the current July temperature. Nevertheless, their growth variation in terms of minimum and maximum is not necessarily strongly correlated.
This is partly because height growth is more sensitive to changes in temperature. In addition, the actual effective temperature period is not exactly the same for these two growth components. Yet, there is a long-term balance that was also statistically distinguishable; radial growth correlated significantly with height growth with a lag of 2 years. Temperature periods shorter than a month are more effective explanatory variables than mean monthly values, but the improvement ranges from modest to good when Julian days or growing degree days are used as predictors.
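The relative temperature sum described above is simple arithmetic; a minimal sketch of the cessation criterion (the 41% threshold and its 38-43% range are the values reported here, while the degree-day inputs are purely illustrative):

```python
def relative_temperature_sum(actual_dd, long_term_mean_dd):
    """Relative temperature sum: the actual degree-day sum divided by
    the long-term mean of the total annual degree-day sum of the site."""
    return actual_dd / long_term_mean_dd


def height_growth_ceased(actual_dd, long_term_mean_dd, threshold=0.41):
    """Height growth is predicted to cease once the relative temperature
    sum reaches the site-specific threshold (mean 41%, range 38-43%)."""
    return relative_temperature_sum(actual_dd, long_term_mean_dd) >= threshold


# Example: 400 dd accumulated at a site with a long-term mean of 900 dd;
# 400/900 is about 0.44 >= 0.41, so growth is predicted to have ceased.
ceased = height_growth_ceased(400.0, 900.0)
```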
Abstract:
Buffer zones are vegetated strip-edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips dominate and play a leading ecological role in many areas. This thesis focuses on the plant species diversity of buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. The study was conducted in a watershed area of a farmland landscape of southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness. The relationship between richness and evenness is unpredictable in a small-scale human-shaped ecosystem. Ordination and correlation analyses show that richness and evenness may result from different ecological processes, and thus should be considered separately. Species richness correlated negatively with phosphorus content, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in the soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration (width, length, and area) was investigated to determine which dimension is most effective for predicting species richness. More species per unit-area increment could be obtained from widening the buffer strip than from lengthening it. The width of the strips is an effective determinant of plant species richness.
The increase in species diversity with an increase in the width of buffer strips may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers, and has application value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on a local and regional scale. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In regions of low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity, showing high abundance in the highly connected region. We suggest that these species may play a role in switching the force shaping community structure from local resources to regional connectivity. At the landscape-context level, the different responses of local species richness and evenness to landscape context were investigated. Seven landscape structural parameters served to indicate landscape context on five scales. On all scales but the smallest, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness. H' showed the highest correlation coefficients with species richness on the second-largest scale.
The edge density of arable fields was the only predictor that correlated with species evenness on all scales, showing the highest predictive power on the second-smallest scale. The different predictive power of the factors on different scales showed a scale-dependent relationship between the landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. The local richness of species depends on a regional process on large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the habitats of field edges near the buffer strips. My results suggest some guidelines for species diversity conservation in agricultural ecosystems. To maintain a high level of species diversity in the strips, a high level of phosphorus in strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variations in plant species diversity. The studies also suggest that the effects of determinants on species diversity have a complex relationship with scale.
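The Shannon-Wiener diversity of land covers (H') used as a landscape predictor above is the standard index computed from class proportions; a minimal sketch (the example proportions are illustrative, not data from this study):

```python
import math


def shannon_wiener(proportions):
    """Shannon-Wiener diversity H' = -sum(p_i * ln(p_i)) over
    land-cover (or species) proportions p_i that sum to 1.
    Classes with p_i == 0 contribute nothing, by convention."""
    return -sum(p * math.log(p) for p in proportions if p > 0)


# Example: four equally common land-cover classes give H' = ln(4) ~ 1.386,
# while a landscape dominated by a single class gives H' = 0.
h_even = shannon_wiener([0.25, 0.25, 0.25, 0.25])
h_mono = shannon_wiener([1.0])
```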
Abstract:
Environmentally benign and economical methods for the preparation of industrially important hydroxy acids and diacids were developed. The carboxylic acids, used in polyesters, alkyd resins, and polyamides, were obtained by the oxidation of the corresponding alcohols with hydrogen peroxide or air, catalyzed by sodium tungstate or supported noble metals. These oxidations were carried out using water as a solvent. The alcohols are also a useful alternative to the conventional reactants, hydroxyaldehydes and cycloalkanes. The oxidation of 2,2-disubstituted propane-1,3-diols with hydrogen peroxide catalyzed by sodium tungstate afforded 2,2-disubstituted 3-hydroxypropanoic acids and 1,1-disubstituted ethane-1,2-diols as products. A computational study of the Baeyer-Villiger rearrangement of the intermediate 2,2-disubstituted 3-hydroxypropanals gave in-depth data on the mechanism of the reaction. Linear primary diols having a chain length of at least six carbons were easily oxidized with hydrogen peroxide to linear dicarboxylic acids, catalyzed by sodium tungstate. The Pt/C-catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols and linear primary diols afforded the highest yield of the corresponding hydroxy acids, while the Pt,Bi/C-catalyzed oxidation of the diols afforded the highest yield of the corresponding diacids. The mechanism of the promoted oxidation was best described by the ensemble effect and by the formation of a complex of the hydroxy and carboxy groups of the hydroxy acids with bismuth atoms. The Pt,Bi/C-catalyzed air oxidation of 2-substituted 2-hydroxymethylpropane-1,3-diols gave 2-substituted malonic acids by the decarboxylation of the corresponding triacids. Activated carbon was the best support and bismuth the most efficient promoter in the air oxidation of 2,2-dialkylpropane-1,3-diols to diacids. In oxidations carried out in organic solvents, barium sulfate could be a valuable alternative to activated carbon as a non-flammable support.
In the Pt/C-catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols to 2,2-disubstituted 3-hydroxypropanoic acids, the small size of the 2-substituents enhanced the rate of the oxidation. When the potential of the platinum catalyst was not controlled, the highest yield of the diacids in the Pt,Bi/C-catalyzed air oxidation of 2,2-dialkylpropane-1,3-diols was obtained in the mass-transfer-limited regime. The most favorable pH of the reaction mixture for the promoted oxidation was 10. A reaction temperature of 40°C prevented the decarboxylation of the diacids.
Abstract:
We solve the Dynamic Ehrenfeucht-Fraïssé game on linear orders for both players, yielding a normal form for quantifier-rank equivalence classes of linear orders in first-order logic, infinitary logic, and generalized-infinitary logics with linearly ordered clocks. We show that Scott sentences can be manipulated quickly and classified into local information, and that consistency can be decided effectively in the length of the Scott sentence. We describe a finite set of linked automata moving continuously on a linear order. Running them on ordinals, we compute the ordinal truth predicate and compute truth in the constructible universe of set theory. Among the corollaries are a study of semi-models as efficient databases of both model-theoretic and formulaic information, and a new proof of the atomicity of the Boolean algebra of sentences consistent with the theory of linear order -- i.e., that the finitely axiomatized theories of linear order are dense.
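The quantifier-rank equivalence classes mentioned above have a classical finite special case: in the k-round Ehrenfeucht-Fraïssé game on two finite linear orders, Duplicator wins exactly when the orders have the same size or both have at least 2^k - 1 elements. A minimal sketch of that textbook criterion (not the thesis's general normal form, which also covers infinite orders and stronger logics):

```python
def ef_equivalent(m, n, k):
    """Return True iff finite linear orders of sizes m and n satisfy
    the same first-order sentences of quantifier rank k, i.e.
    Duplicator wins the k-round Ehrenfeucht-Fraissé game on them.
    Standard criterion: m == n, or both sizes are >= 2**k - 1."""
    threshold = 2 ** k - 1
    return m == n or (m >= threshold and n >= threshold)


# Rank 3 cannot separate chains of lengths 7 and 100 (both >= 2**3 - 1 = 7),
# but it does separate chains of lengths 6 and 7.
```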
Abstract:
The significance of carbohydrate-protein interactions in many biological phenomena is now widely acknowledged, and carbohydrate-based pharmaceuticals are under intensive development. The interactions between monomeric carbohydrate ligands and their receptors are usually of low affinity. To overcome this limitation, natural carbohydrate ligands are often organized as multivalent structures. Therefore, artificial carbohydrate pharmaceuticals should be constructed on the same principle, as multivalent carbohydrates or glycoclusters. Infections of specific host tissues by bacteria, viruses, and fungi are among the unfavorable disease processes for which suitably designed carbohydrate inhibitors represent worthy targets. The bacterium Helicobacter pylori colonizes more than half of all people worldwide, causing gastritis and gastric ulcer and conferring a greater risk of stomach cancer. The present medication therapy for H. pylori includes the use of antibiotics, which is associated with an increasing incidence of bacterial resistance to traditional antibiotics. Therefore, the need for an alternative treatment method is urgent. In this study, four novel synthesis procedures for multivalent glycoconjugates were created. Three different scaffolds representing linear (chondroitin oligomer), cyclic (γ-cyclodextrin), and globular (dendrimer) molecules were used. Multivalent conjugates were produced using the human milk-type oligosaccharides LNDFH I (Lewis-b hexasaccharide), LNnT (Galβ1-4GlcNAcβ1-3Galβ1-4Glc), and GlcNAcβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glc, all representing analogues of the tissue-binding epitopes of H. pylori. The first synthetic method included the reductive amination of scaffold molecules modified to express primary amine groups and, in the case of the dendrimer, direct amination to a scaffold molecule presenting 64 primary amine groups.
The second method described a direct procedure for the amidation of glycosylamine-modified oligosaccharides to scaffold molecules presenting carboxyl groups. The final two methods both employed an oxime linkage on linkers of different lengths. All the new synthetic procedures had the advantage of using unmodified reducing sugars as starting material, making it easy to synthesize glycoconjugates of different specificity. In addition, the binding activity of an array of neoglycolipids to H. pylori was studied. Consequently, two new neolacto-based structures, Glcβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer and GlcAβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer, with binding activity toward H. pylori were discovered. Interestingly, N-methyl and N-ethyl amide modification of the glucuronic acid residue of GlcAβ1-3Galβ1-4GlcNAcβ1-3Galβ1-4Glcβ1-Cer resulted in more effective H. pylori binding epitopes than the parent molecule.
Abstract:
Nitrogen (N) and phosphorus (P) are essential elements for all living organisms. However, in excess, they contribute to several environmental problems such as aquatic and terrestrial eutrophication. Globally, human action has multiplied the volume of N and P cycling since the onset of industrialization. The multiplication is a result of intensified agriculture, increased energy consumption and population growth. Industrial ecology (IE) is a discipline in which human interaction with the ecosystems is investigated using a systems analytical approach. The main idea behind IE is that industrial systems resemble ecosystems and, like them, can be described using material, energy and information flows and stocks. Industrial systems are dependent on the resources provided by the biosphere, and the two cannot be separated from each other. When studying substance flows, the aims of the research from the viewpoint of IE can be, for instance, to elucidate how the cycles of a certain substance could be made more closed and how the flows of a certain substance could be decreased per unit of production (= dematerialization). In Finland, N and P have been widely studied in different ecosystems and environmental emissions. A holistic picture comparing different societal systems is, however, lacking. In this thesis, flows of N and P were examined in Finland using substance flow analysis (SFA) in the following four subsystems: I) forest industry and use of wood fuels, II) food production and consumption, III) energy, and IV) municipal waste. A detailed analysis at the end of the 1990s was performed. Furthermore, the historical development of the N and P flows was investigated in the energy system (III) and the municipal waste system (IV). The main research sources were official statistics, literature, monitoring data, and expert knowledge. The aim was to identify and quantify the main flows of N and P in Finland in the four subsystems studied.
Furthermore, the aim was to elucidate whether the nutrient systems are cyclic or linear, and to identify how these systems could be more efficient in the use and cycling of N and P. A final aim was to discuss how this type of an analysis can be used to support decision-making on environmental problems and solutions. Of the four subsystems, the food production and consumption system and the energy system created the largest N flows in Finland. For the creation of P flows, the food production and consumption system (Paper II) was clearly the largest, followed by the forest industry and use of wood fuels and the energy system. The contribution of Finland to N and P flows on a global scale is low, but when compared on a per capita basis, we are one of the largest producers of these flows, with relatively high energy and meat consumption being the main reasons. Analysis revealed the openness of all four systems. The openness is due to the high degree of internationality of the Finnish markets, the large-scale use of synthetic fertilizers and energy resources and the low recycling rate of many waste fractions. Reduction in the use of fuels and synthetic fertilizers, reorganization of the structure of energy production, reduced human intake of nutrients and technological development are crucial in diminishing the N and P flows. To enhance nutrient recycling and replace inorganic fertilizers, recycling of such wastes as wood ash and sludge could be promoted. SFA is not usually sufficiently detailed to allow specific recommendations for decision-making to be made, but it does yield useful information about the relative magnitude of the flows and may reveal unexpected losses. Sustainable development is a widely accepted target for all human action. SFA is one method that can help to analyse how effective different efforts are in leading to a more sustainable society. SFA's strength is that it allows a holistic picture of different natural and societal systems to be drawn. 
Furthermore, when the environmental impact of a certain flow is known, the method can be used to prioritize environmental policy efforts.
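At its core, substance flow analysis rests on a mass balance for each subsystem: the change in stock equals total inflows minus total outflows, which is also how openness and unexpected losses become visible. A minimal sketch with purely illustrative numbers (not data from this study):

```python
def stock_change(inflows, outflows):
    """Mass balance of an SFA subsystem: the change in stock equals
    the sum of all inflows minus the sum of all outflows."""
    return sum(inflows) - sum(outflows)


# Illustrative N balance for one subsystem, in kt N / year:
# fertilizer and imports in; products, emissions and waste out.
delta = stock_change(inflows=[120.0, 30.0], outflows=[80.0, 40.0, 20.0])
# A positive delta means the substance accumulates in the subsystem;
# a large unexplained residual points to an unquantified flow.
```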
Abstract:
Thrombophilia (TF) predisposes to both venous and arterial thrombosis at a young age. TF may also affect thrombosis or stenosis of the hemodialysis (HD) vascular access in patients with end-stage renal disease (ESRD). When involved in severe thrombosis, TF may be associated with an inappropriate response to anticoagulation. Lepirudin, a potent direct thrombin inhibitor (DTI) indicated for heparin-induced thrombocytopenia-related thrombosis, could offer a treatment alternative in TF. Monitoring of lepirudin, with its narrow therapeutic range, also demands new insights in the laboratory. These issues constitute the targets of this thesis. We evaluated the prevalence of TF in patients with ESRD and its impact upon the thrombosis- or stenosis-free survival of the vascular access. Altogether 237 ESRD patients were prospectively screened for TF and thrombogenic risk factors prior to HD access surgery in 2002-2004 (mean follow-up 3.6 years). TF was evident in 43 (18%) of the ESRD patients, more often in males (23 vs. 9%, p=0.009). The known gene mutations FV Leiden and FII G20210A occurred in 4%. The vascular access matured sufficiently in 226 (95%). The 1-year thrombosis- and stenosis-free access survival was 72%. Female gender (hazard ratio, HR, 2.5; 95% CI 1.6-3.9) and TF (HR 1.9, 95% CI 1.1-3.3) were independent risk factors for shortened thrombosis- and stenosis-free survival. Additionally, TF or a thrombogenic background was found in relatively young patients having severe thrombosis either in the hepatic veins (Budd-Chiari syndrome, BCS, one patient) or inoperable critical limb ischemia (CLI, six patients). Lepirudin was evaluated in an off-label setting in severe thrombosis after inefficacious traditional anticoagulation, when no other treatment options remained except severe invasive procedures such as lower-extremity amputation. Lepirudin treatments were repeatedly monitored clinically and with laboratory assessments (e.g. activated partial thromboplastin time, APTT).
In our preliminary studies, lepirudin appeared safe in these thrombotic calamities, and no bleeds occurred. Lepirudin, an effective DTI, controlled the thrombosis, and all patients gradually recovered. Only one limb amputation was performed, 3 years later during follow-up (mean 4 years). Furthermore, we aimed to overcome the limitations of APTT and the confounding effects of warfarin (INR of 1.5-3.9) and lupus anticoagulant (LA). Lepirudin responses were assessed in vitro by five specific laboratory methods. The ecarin chromogenic assay (ECA) and anti-factor IIa (anti-FIIa) correlated precisely (r=0.99) with each other and with spiked lepirudin in all plasma pools: normal, warfarin-containing, and LA-containing plasma. In contrast, in the presence of warfarin and LA, both APTT and prothrombinase-induced clotting time (PiCT®) were limited by non-linear and imprecise dose responses. As a global coagulation test, APTT is useful in parallel with the precise chromogenic methods ECA or anti-FIIa in challenging clinical situations. Lepirudin treatment requires a multidisciplinary approach to ensure appropriate patient selection, interpretation of laboratory monitoring, and treatment safety. TF seemed to be associated with complicated thrombotic events in venous (BCS), arterial (CLI), and vascular access systems. TF screening should be aimed at patients with repeated access complications or prior unprovoked thromboembolic events. Lepirudin inhibits free and clot-bound thrombin, which heparin fails to inhibit. Lepirudin seems to offer a potent and safe option for the treatment of severe thrombosis. Multi-center randomized trials are necessary to assess the management of complicated thrombotic events with DTIs like lepirudin and to seek prevention options against access complications.
Abstract:
The study assessed whether plasma concentrations of complement factors C3, C4, or immunoglobulins, serum classical pathway hemolytic activity, or polymorphisms in the class I and II HLA genes, isotypes and gene numbers of C4, or allotypes of the IgG1 and IgG3 heavy chain genes were associated with severe, frequently recurring or chronic mucosal infections. According to strict clinical criteria, 188 consecutive voluntary patients without a known immunodeficiency and 198 control subjects were recruited. Frequencies of low levels of IgG1, IgG2, IgG3 and IgG4 were tested for the first time in the adult general population and in patients with acute rhinosinusitis. Frequently recurring intraoral herpes simplex type 1 infections, a rare form of the disease, were associated with homozygosity in the HLA -A*, -B*, -C*, and -DR* genes. Frequently recurrent genital HSV-2 infections were associated with low levels of IgG1 and IgG3, present in 54% of the recruited patients. This association was partly allotype-dependent. The G3mg,G1ma/ax haplotype, together with low IgG3, was more common in patients than in control subjects who lacked antibodies against herpes simplex viruses. This is the first immunogenetic deficiency found in otherwise healthy adults that predisposes to highly frequent mucosal herpes recurrences. According to previous studies, HSV effectively evades the allotype G1ma/ax of IgG1, whereas G3mg is associated with low IgG3. Certain HLA genes were more common in patients than in control subjects. Having more than one C4A or C4B gene was associated with neuralgias caused by the virus. Low levels of IgA, IgG1, IgG2, IgG3, and IgG4 were common in the general adult population, but even more frequent in patients with chronic sinusitis. Only low IgG1 was more common in chronic than in acute rhinosinusitis. Clinically, nasal polyposis and bronchial asthma were associated with complicated disease forms.
The best differentiating immunologic parameters were C4A deficiency and the combination of low plasma IgG4 together with low IgG1 or IgG2, which performed almost equally. The lack of C4A, IgA, and IgG4, all known to possess anti-inflammatory activity, together with concurrently impaired immunity caused by low subclass levels, may predispose to chronic disease forms. In severe chronic adult periodontitis, any C4A or C4B deficiency was associated with the disease. The new quantitative analysis of C4 genes and the conventional C4 allotyping method complemented each other. Lowered levels of plasma C3 or C4 or both, and of serum CH50, were found in herpes and periodontitis patients. In rhinosinusitis, there was a linear trend, with the highest levels found in the order: acute > chronic rhinosinusitis > general population > blood donors with no self-reported history of rhinosinusitis. Complement is involved in the defense against the tested mucosal infections. Seemingly immunocompetent patients with chronic or recurrent mucosal infections frequently have subtle weaknesses in different arms of immunity, and these may underlie their susceptibility to chronic disease forms. The host's subtly impaired immunity often coincides with effective immune evasion of the same arms of immunity by the disease-causing pathogens. The interpretation of low subclass levels, if no additional predisposing immunologic factors are tested, is difficult and of limited value in early diagnosis and treatment.
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis, the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling-constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both the flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.
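For reference, the quark number susceptibilities measured here are standardly defined as second derivatives of the pressure with respect to the quark chemical potentials (the textbook definition, not a formula quoted from the thesis):

\[
\chi_{ij}(T) = \left.\frac{\partial^2 p(T,\mu)}{\partial \mu_i \, \partial \mu_j}\right|_{\mu=0},
\]

where $i,j$ label quark flavors: $i=j$ gives the diagonal and $i\neq j$ the off-diagonal susceptibility, and the finite-chemical-potential results above are reached by analytically continuing these derivatives from simulations at imaginary chemical potential.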