17 results for Vakar formula

in Helda - Digital Repository of University of Helsinki


Relevance:

10.00%

Publisher:

Abstract:

Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially and sometimes not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed instantly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach. In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established, and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
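A minimal sketch of the accurate-mass matching step described above, assuming a hypothetical target database of (name, theoretical m/z) pairs and the 10 ppm mass tolerance quoted in the abstract; the SigmaFit isotope-pattern score is instrument-software specific and is not reproduced here:

    # Hypothetical illustration of formula-based identification by accurate mass.
    # The database entries and helper names are illustrative, not the thesis software.
    TARGET_DB = [
        ("amphetamine [M+H]+", 136.1121),   # illustrative monoisotopic m/z values
        ("tramadol [M+H]+", 264.1958),
    ]

    def ppm_error(measured_mz, theoretical_mz):
        """Mass error in parts per million."""
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    def match_candidates(measured_mz, tolerance_ppm=10.0):
        """Return database entries whose theoretical m/z lies within the ppm window."""
        return [(name, ppm_error(measured_mz, mz))
                for name, mz in TARGET_DB
                if abs(ppm_error(measured_mz, mz)) <= tolerance_ppm]

    print(match_candidates(136.1125))   # -> [('amphetamine [M+H]+', ~2.9 ppm)]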

Relevance:

10.00%

Publisher:

Abstract:

B. cereus is a gram-positive bacterium that possesses two different forms of life: the large, rod-shaped cells (ca. 0.002 mm by 0.004 mm) that are able to propagate, and the small (0.001 mm), oval-shaped spores. The spores can survive in almost any environment for up to centuries without nourishment or water. They are insensitive towards most agents that normally kill bacteria: heating for up to several hours at 90 °C, radiation, disinfectants, and extremely alkaline (≥ pH 13) and acidic (≤ pH 1) environments. The spores are highly hydrophobic and therefore tend to stick to all kinds of surfaces: steel, plastics and live cells. In favorable conditions the spores of B. cereus may germinate into vegetative cells capable of producing food poisoning toxins. The toxins can be heat-labile proteins formed after ingestion of the contaminated food, inside the gastrointestinal tract (diarrhoeal toxins), or heat-stable peptides formed in the food (the emesis-causing toxin, cereulide). Cereulide cannot be inactivated in foods by cooking or any other procedure applicable to food. Cereulide in consumed food causes serious illness in humans, even fatalities. In this thesis, B. cereus strains originating from different kinds of foods and environments and 8 different countries were inspected for their capability of forming cereulide. Of the 1041 isolates from soil, animal feed, water, air, used bedding, grass, dung and equipment, only 1.2% were capable of producing cereulide, whereas of the 144 isolates originating from foods 24% were cereulide producers. Cereulide was detected by two methods: by its toxicity towards mammalian cells (sperm assay) and by its peculiar chemical structure using liquid chromatography-mass spectrometry equipment. B. cereus is known as one of the most frequent bacteria occurring in food. Most foods contain more than one kind of B. cereus. When 100 randomly selected isolates of B. cereus from commercial infant foods (dry formulas) were tested, 11% of these produced cereulide. Considering a frequent content of 10³ to 10⁴ cfu (colony forming units) of B. cereus per gram of infant food formula (dry), it appears likely that most servings (200 ml, 30 g of the powder reconstituted with water) may contain cereulide producers. When a reconstituted infant formula was inoculated with >10⁵ cfu of cereulide-producing B. cereus per ml and left at room temperature, cereulide accumulated to food poisoning levels (>0.1 mg of cereulide per serving) within 24 hours. Paradoxically, the amount of cereulide (per g of food) increased 10- to 50-fold when the food was diluted 4- to 15-fold with water. The amount of the produced cereulide strongly depended on the composition of the formula: most toxin was formed in formulas with cereals mixed with milk, and least toxin in formulas based on milk only. In spite of the aggressive cleaning practices executed by the modern dairy industry, certain genotypes of B. cereus appear to colonise the silo tanks. In this thesis, four strategies explaining the survival of their spores in dairy silos were identified. First, high survival (log kill in 15 min ≤ 1.5) in the hot alkaline (pH >13) wash liquid used at the dairies for cleaning-in-place. Second, efficient adherence of the spores to stainless steel from cold water. Third, a cereulide-producing group with spores characterized by slow germination in rich medium and well-preserved viability when exposed to heating at 90 °C. Fourth, spores capable of germinating at 8 °C and possessing the psychrotolerance gene, cspA.
There were indications that spores highly resistant to hot 1% sodium hydroxide may be effectively inactivated by hot 0.9% nitric acid. Eight out of the 14 dairy silo tank isolates possessing hot-alkali-resistant spores were capable of germinating and forming biofilm in whole milk, not previously reported for B. cereus. In this thesis it was shown that cereulide-producing B. cereus was capable of inhibiting the growth of cereulide non-producing B. cereus occurring in the same food. This phenomenon, called antagonism, has long been known to exist between B. cereus and other microbial species, e.g. various species of Bacillus, gram-negative bacteria and plant pathogenic fungi. In this thesis, intra-species antagonism of B. cereus was shown for the first time. This brother-killing did not depend on the cereulide molecule, as some of the cereulide non-producers were also potent antagonists. Interestingly, the antagonistic clades were most frequently found among isolates from food implicated in human illness. The antagonistic property was therefore proposed in this thesis as a novel virulence factor that increases the human morbidity of the species B. cereus, in particular of the cereulide producers.

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of two parts. In the first part we performed a single-molecule force-extension measurement with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the found DNA had a persistence length between 30 and 60 nm; the correct value should lie within 40 to 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
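For reference, the worm-like chain interpolation formula mentioned above is commonly fitted in the Marko-Siggia form (a standard approximation; the exact variant used in the thesis is not specified here):

    \[ F(x) = \frac{k_\mathrm{B} T}{P}\left[\frac{1}{4\,(1 - x/L_\mathrm{c})^{2}} - \frac{1}{4} + \frac{x}{L_\mathrm{c}}\right], \]

where F is the stretching force, x the extension, P the persistence length (expected 40-60 nm) and L_c the contour length (expected 3.38 µm for the tethers described above).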

Relevance:

10.00%

Publisher:

Abstract:

Noble gases are mostly known as inert monatomic gases due to their limited reactivity with other elements. However, the first predictions of noble-gas compounds were suggested by Kossel in 1916, by von Antropoff in 1924, and by Pauling in 1930. It took many decades until the first noble-gas compound, XePtF6, was synthesized by Neil Bartlett in 1962. This was followed by gradual development of the field, and many noble-gas compounds have since been prepared. In 1995, a family of noble-gas hydride molecules was discovered at the University of Helsinki. These molecules have the general formula of HNgY, where H is a hydrogen atom, Ng is a noble-gas atom (Ar, Kr, or Xe), and Y is an electronegative fragment. The first molecular species made included HXeI, HXeBr, HXeCl, HKrCl and HXeH. Nowadays the total number of prepared HNgY molecules is 23, including both inorganic and organic compounds. The first and only neutral ground-state argon compound, HArF, was synthesized in 2000. Helium and neon are the only elements in the periodic table that do not form neutral, ground-state molecules. In this Thesis, the experimental preparation of eight novel xenon- and krypton-containing organo-noble-gas hydrides made from acetylene (HCCH), diacetylene (HCCCCH) and cyanoacetylene (HCCCN) is presented. These novel species include the first organic krypton compound, HKrCCH, as well as the first noble-gas hydride molecule containing two Xe atoms, HXeCCXeH. Other new compounds are HXeCCH, HXeCC, HXeC4H, HKrC4H, HXeC3N, and HKrC3N. These molecules are prepared in noble-gas matrices (krypton or xenon) using ultraviolet photolysis of the precursor molecule and thermal mobilization of the photogenerated H atoms. The molecules were identified using infrared spectroscopy and ab initio calculations. The formation mechanisms of the organo-noble-gas molecules are studied and discussed in this context. The focus is on experimentally evidencing the neutral formation mechanisms of HNgY molecules upon global mobility of H atoms. The formation of HXeCCXeH from another noble-gas compound (HXeCC) is demonstrated and discussed. Interactions with the surrounding matrix and molecular complexes of the HXeCCH molecule are studied. HXeCCH was prepared in argon and krypton solids in addition to a Xe matrix. The weak HXeCCH∙∙∙CO2 complex is prepared and identified. Preparation of the HXeCCH∙∙∙CO2 complex demonstrates an advanced approach to studies of HNgY complexes, where the precursor complex (HCCH∙∙∙CO2) is obtained using photolysis of a larger molecule (propiolic acid).

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies the intermolecular interactions in (i) boron-nitrogen based systems for hydrogen splitting and storage, (ii) endohedral complexes, A@C60, and (iii) aurophilic dimers. We first present an introduction to intermolecular interactions. The theoretical background is then described. The research results are summarized in the following sections. In the boron-nitrogen systems, the electrostatic interaction is found to be the leading contribution, as 'Coulomb Pays for Heitler and London' (CHL). For the endohedral complex, the intermolecular interaction is formulated by a one-center expansion of the Coulomb operator 1/r_ab. For the aurophilic attraction between two C2v monomers, a London-type formula was derived by fully accounting for the anisotropy and point-group symmetry of the monomers.
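As a point of orientation for the aurophilic part, the textbook isotropic London dispersion formula, of which the derived expression is an anisotropic, point-group-adapted generalization (the standard form is shown here, not the formula of the thesis), reads

    \[ E_\mathrm{disp} \approx -\frac{3}{2}\,\frac{I_A I_B}{I_A + I_B}\,\frac{\alpha_A \alpha_B}{R^{6}}, \]

where I_A and I_B are the ionization energies, α_A and α_B the dipole polarizabilities of the monomers, and R the intermonomer distance.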

Relevance:

10.00%

Publisher:

Abstract:

The importance of intermolecular interactions to chemistry, physics, and biology is difficult to overestimate. Without intermolecular forces, condensed phase matter could not form. The simplest way to categorize different types of intermolecular interactions is to describe them as van der Waals and hydrogen-bonded (H-bonded) interactions. In the H-bond, the intermolecular interaction appears between a positively charged hydrogen atom and electronegative fragments, and it originates from strong electrostatic interactions. H-bonding is important when considering the properties of condensed phase water and in many biological systems, including the structure of DNA and proteins. Vibrational spectroscopy is a useful tool for studying complexes and the solvation of molecules. The vibrational frequency shift has been used to characterize complex formation. In an H-bonded system A∙∙∙H-X (A and X are the acceptor and donor species, respectively), the vibrational frequency of the H-X stretching vibration usually decreases from its value in free H-X (red-shift). This frequency shift has been used as evidence for H-bond formation, and the magnitude of the shift has been used as an indicator of the H-bonding strength. In contrast to this normal behavior are the blue-shifting H-bonds, in which the H-X vibrational frequency increases upon complex formation. In the last decade, there has been active discussion regarding these blue-shifting H-bonds. Noble gases have been considered inert due to their limited reactivity with other elements. In the early 1930s, Pauling predicted the stable noble-gas compounds XeF6 and KrF6. It was not until three decades later, in 1962, that Neil Bartlett synthesized the first noble-gas compound, XePtF6. A renaissance of noble-gas chemistry began in 1995 with the discovery of noble-gas hydride molecules at the University of Helsinki. The first hydrides were HXeCl, HXeBr, HXeI, HKrCl, and HXeH. These molecules have the general formula of HNgY, where H is a hydrogen atom, Ng is a noble-gas atom (Ar, Kr, or Xe), and Y is an electronegative fragment. At present, this class of molecules comprises 23 members, including both inorganic and organic compounds. The first and only argon-containing neutral chemical compound, HArF, was synthesized in 2000 and its properties have since been investigated in a number of studies. A helium-containing chemical compound, HHeF, was predicted computationally, but its lifetime has been predicted to be severely limited by hydrogen tunneling. Helium and neon are the only elements in the periodic table that do not form neutral, ground-state molecules. A noble-gas matrix is a useful medium in which to study unstable and reactive species, including ions. A solvated proton forms a centrosymmetric NgHNg+ (Ng = Ar, Kr, and Xe) structure in a noble-gas matrix, and this is probably the simplest example of a solvated proton. Interestingly, the hypothetical NeHNe+ cation is isoelectronic with the water-solvated proton H5O2+ (Zundel ion). In addition to the NgHNg+ cations, the isoelectronic YHY- (Y = halogen atom or pseudohalogen fragment) anions have been studied with the matrix-isolation technique. These species have been known to exist in alkali metal salts (YHY)-M+ (M = alkali metal, e.g. K or Na) for more than 80 years. Hydrated HF forms the FHF- structure in aqueous solutions, and these ions participate in several important chemical processes.
In this thesis, studies of the intermolecular interactions of HNgY molecules and centrosymmetric ions with various species are presented. The HNgY complexes show unusual spectral features, e.g. large blue-shifts of the H-Ng stretching vibration upon complexation. It is suggested that the blue-shift is a normal effect for these molecules, and that it originates from the enhanced (HNg)+Y- ion-pair character upon complexation. It is also found that the HNgY molecules are energetically stabilized in the complexed form, and this effect is computationally demonstrated for the HHeF molecule. The NgHNg+ and YHY- ions also show blue-shifts in their asymmetric stretching vibration upon complexation with nitrogen. Additionally, the matrix-site structure and hindered rotation (libration) of the HNgY molecules were studied. The librational motion is a much-discussed solid-state phenomenon, and the HNgY molecules embedded in noble-gas matrices are good model systems for studying this effect. The formation mechanisms of the HNgY molecules and the decay mechanism of the NgHNg+ cations are discussed. A new electron tunneling model for the decay of NgHNg+ absorptions in noble-gas matrices is proposed. Studies of the NgHNg+∙∙∙N2 complexes support this electron tunneling mechanism.
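For the sign convention used above: within the harmonic approximation the H-X stretching wavenumber and the complexation shift are

    \[ \tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}, \qquad \Delta\tilde{\nu} = \tilde{\nu}_\mathrm{complex} - \tilde{\nu}_\mathrm{free}, \]

where k is the effective force constant and μ the reduced mass of the H-X oscillator, so that a negative shift is a red-shift (the usual H-bond signature) and a positive shift the blue-shift discussed in the thesis.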

Relevance:

10.00%

Publisher:

Abstract:

This thesis is a study of a rather new logic called dependence logic and its closure under classical negation, team logic. In this thesis, dependence logic is investigated from several aspects. Some rules are presented for quantifier swapping in dependence logic and team logic. Such rules are among the basic tools one must be familiar with in order to gain the required intuition for using the logic for practical purposes. The thesis compares Ehrenfeucht-Fraïssé (EF) games of first order logic and dependence logic and defines a third EF game that characterises a mixed case where first order formulas are measured in the formula rank of dependence logic. The thesis contains detailed proofs of several translations between dependence logic, team logic, second order logic and its existential fragment. Translations are useful for showing relationships between the expressive powers of logics. Also, by inspecting the form of the translated formulas, one can see how an aspect of one logic can be expressed in the other logic. The thesis makes preliminary investigations into the proof theory of dependence logic. Attempts focus on finding a complete proof system for a modest yet nontrivial fragment of dependence logic. A key problem is identified and addressed in adapting a known proof system of classical propositional logic to become a proof system for the fragment, namely that the rule of contraction is needed but is unsound in its unrestricted form. A proof system is suggested for the fragment and its completeness conjectured. Finally, the thesis investigates the very foundation of dependence logic. An alternative semantics called 1-semantics is suggested for the syntax of dependence logic. There are several key differences between 1-semantics and other semantics of dependence logic. 1-semantics is derived from first order semantics by a natural type shift. Therefore 1-semantics reflects an established semantics in a coherent manner. Negation in 1-semantics is a semantic operation and satisfies the law of excluded middle. A translation is provided from unrestricted formulas of existential second order logic into 1-semantics. Also, game-theoretic semantics are considered in the light of 1-semantics.
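For readers unfamiliar with dependence logic, the standard team semantics of the dependence atom (the baseline against which the 1-semantics of the thesis is an alternative) can be stated as follows: a team X, i.e. a set of assignments, satisfies =(x, y) iff

    \[ \forall s, s' \in X:\; s(x) = s'(x) \;\Rightarrow\; s(y) = s'(y), \]

that is, the value of y is functionally determined by the value of x within the team.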

Relevance:

10.00%

Publisher:

Abstract:

The multiplier ideals of an ideal in a regular local ring form a family of ideals parametrized by non-negative rational numbers. As the rational number increases the corresponding multiplier ideal remains unchanged until at some point it gets strictly smaller. A rational number where this kind of diminishing occurs is called a jumping number of the ideal. In this manuscript we shall give an explicit formula for the jumping numbers of a simple complete ideal in a two dimensional regular local ring. In particular, we obtain a formula for the jumping numbers of an analytically irreducible plane curve. We then show that the jumping numbers determine the equisingularity class of the curve.
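As a point of reference (a well-known special case, not the general formula derived in the manuscript): for the analytically irreducible plane curve defined by x^a + y^b with gcd(a, b) = 1, the smallest jumping number, the log canonical threshold, is

    \[ \operatorname{lct}\left(x^{a} + y^{b}\right) = \frac{1}{a} + \frac{1}{b}, \]

for example 5/6 for the ordinary cusp x^3 + y^2.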

Relevance:

10.00%

Publisher:

Abstract:

Aims: To gain insight into the immunological processes behind cow’s milk allergy (CMA) and the development of oral tolerance, and to further investigate the associations of HLA II and filaggrin genotypes with humoral responses to early oral antigens. Methods: The study population was from a cohort of 6209 healthy, full-term infants who in a double-blind randomized trial received supplementary feeding at maternity hospitals (mean duration 4 days): cow’s milk (CM) formula, extensively hydrolyzed whey formula or donor breast milk. Infants who developed CM-associated symptoms that subsided during an elimination diet (n=223) underwent an open oral CM challenge (at mean age 7 months). The challenge was negative in 112, and in 111 it confirmed CMA, which was IgE-mediated in 83. Patients with CMA were followed until recovery, and 94 of them participated in a follow-up study at age 8-9 years. We investigated serum samples at diagnosis (mean age 7 months, n=111), one year later (19 months, n=101) and at follow-up (8.6 years, n=85). At follow-up, 76 children randomly selected from the original cohort and without CM-associated symptoms were also included. We measured CM-specific IgE levels with UniCAP (Phadia, Uppsala, Sweden), and β-lactoglobulin, α-casein and ovalbumin specific IgA, IgG1, IgG4 and IgG levels with enzyme-linked immunosorbent assay in sera. We applied a microarray-based immunoassay to measure the binding of IgE, IgG4 and IgA serum antibodies to sequential epitopes derived from five major CM proteins at the three time points in 11 patients with active IgE-mediated CMA at age 8-9 years and in 12 patients who had recovered from IgE-mediated CMA by age 3 years. We used bioinformatic methods to analyze the microarray data. We studied the T cell expression profile in peripheral blood mononuclear cell (PBMC) samples from 57 children aged 5-12 years (median 8.3): 16 with active CMA, 20 who had recovered from CMA by age 3 years, and 21 non-atopic control subjects. Following in vitro β-lactoglobulin stimulation, we measured the mRNA expression in PBMCs of 12 T-cell markers (T-bet, GATA-3, IFN-γ, CTLA4, IL-10, IL-16, TGF-β, FOXP3, Nfat-C2, TIM3, TIM4, STIM-1) with quantitative real-time polymerase chain reaction, and the protein expression of CD4, CD25, CD127 and FoxP3 with flow cytometry. To optimally distinguish the three study groups, we applied artificial neural networks with an exhaustive search over all marker combinations. For genetic associations with specific humoral responses, we analyzed 14 HLA class II haplotypes, the PTPN22 1858 SNP (R620W allele) and 5 known filaggrin null mutations from blood samples of 87 patients with CMA and 76 control subjects (age 8.0-9.3 years). Results: High IgG and IgG4 levels to β-lactoglobulin and α-casein were associated with the HLA (DR15)-DQB1*0602 haplotype in patients with CMA, but not in control subjects. Conversely, (DR1/10)-DQB1*0501 was associated with lower IgG and IgG4 levels to these CM antigens, and to ovalbumin, most significantly among control subjects. Infants with IgE-mediated CMA had lower β-lactoglobulin and α-casein specific IgG1, IgG4 and IgG levels (p<0.05) at diagnosis than infants with non-IgE-mediated CMA or control subjects. When CMA persisted beyond age 8 years, CM-specific IgE levels were higher at all three time points investigated and the IgE epitope binding pattern remained stable (p<0.001), compared with recovery from CMA by age 3 years.
Patients with persisting CMA at 8-9 years had lower serum IgA levels to β-lactoglobulin at diagnosis (p=0.01), and lower IgG4 levels to β-lactoglobulin (p=0.04) and α-casein (p=0.05) at follow-up, compared with patients who recovered by age 3 years. In early recovery, the signal of IgG4 epitope binding increased while that of IgE decreased over time, and the binding patterns of IgE and IgG4 overlapped. In the T cell expression profile in response to β-lactoglobulin, the combination of the markers FoxP3, Nfat-C2, IL-16 and GATA-3 distinguished patients with persisting CMA most accurately from patients who had become tolerant and from non-atopic subjects. FoxP3 expression at both the RNA and protein level was higher in children with CMA compared with non-atopic children. Conclusions: Genetic factors (the HLA II genotype) are associated with humoral responses to early food allergens. High CM-specific IgE levels predict persistence of CMA. Development of tolerance is associated with higher specific IgA and IgG4 levels and lower specific IgE levels, with decreased CM epitope binding by IgE and a concurrent increase in corresponding epitope binding by IgG4. Both Th2 and Treg pathways are activated upon CM antigen stimulation in patients with CMA. In the clinical management of CMA, HLA II or filaggrin genotyping is not applicable, whereas the measurement of CM-specific antibodies may assist in estimating the prognosis.

Relevance:

10.00%

Publisher:

Abstract:

The Antarctic system comprises the continent itself, Antarctica, and the ocean surrounding it, the Southern Ocean. The system has an important part in the global climate due to its size, its high-latitude location and the negative radiation balance of its large ice sheets. Antarctica has also been in focus for several decades due to increased ultraviolet (UV) levels caused by stratospheric ozone depletion, and the disintegration of its ice shelves. In this study, measurements were made during three austral summers to study the optical properties of the Antarctic system and to produce radiation information for additional modeling studies. These are related to specific phenomena found in the system. During the summer of 1997-1998, measurements of beam absorption and beam attenuation coefficients, and downwelling and upwelling irradiance, were made in the Southern Ocean along a S-N transect at 6°E. The attenuation of photosynthetically active radiation (PAR) was calculated and used together with hydrographic measurements to judge whether the phytoplankton in the investigated areas of the Southern Ocean are light limited. By using the Kirk formula (sketched after this abstract), the diffuse attenuation coefficient was linked to the absorption and scattering coefficients. The diffuse attenuation coefficients (K_PAR) for PAR were found to vary between 0.03 and 0.09 1/m. Using the values for K_PAR and the definition of the Sverdrup critical depth, the studied Southern Ocean plankton systems were found not to be light limited. Variabilities in the spectral and total albedo of snow were studied in the Queen Maud Land region of Antarctica during the summers of 1999-2000 and 2000-2001. The measurement areas were the vicinity of the South African Antarctic research station SANAE 4, and a traverse near the Finnish Antarctic research station Aboa. The midday mean total albedos for snow were between 0.83, for clear skies, and 0.86, for overcast skies, at Aboa, and between 0.81 and 0.83 for SANAE 4. The mean spectral albedo levels at Aboa and SANAE 4 were very close to each other. The variations in the spectral albedos were due more to differences in ambient conditions than to variations in snow properties. A Monte Carlo model was developed to study the spectral albedo and to develop a novel nondestructive method to measure the diffuse attenuation coefficient of snow. The method was based on the decay of upwelling radiation moving horizontally away from a source of downwelling light, which was assumed to have a relation to the diffuse attenuation coefficient. In the model, the attenuation coefficient obtained from the upwelling irradiance was higher than that obtained using vertical profiles of downwelling irradiance. The model results were compared to field measurements made on dry snow in Finnish Lapland and they correlated reasonably well. Low-elevation (below 1000 m) blue-ice areas may experience substantial melt-freeze cycles due to absorbed solar radiation and the small heat conductivity of the ice. A two-dimensional (x-z) model was developed to simulate the formation of and water circulation in the subsurface ponds. The model results show that for a physically reasonable parameter set the formation of liquid water within the ice can be reproduced. The results, however, are sensitive to the chosen parameter values, and their exact values are not well known.
Vertical convection and a weak overturning circulation are generated, stratifying the fluid and transporting warmer water downward, thereby causing additional melting at the base of the pond. In a 50-year integration, a global warming scenario, mimicked by a decadal-scale increase in air temperature of 3 degrees per 100 years, leads to a general increase in subsurface water volume. The ice did not disintegrate due to the air temperature increase over the 50-year integration.
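For reference, the Kirk relation mentioned above links the vertically averaged diffuse attenuation coefficient for downwelling PAR to the absorption coefficient a and the scattering coefficient b; in its commonly quoted form (the empirical constants below are standard literature values, assumed here rather than taken from the study)

    \[ K_d \approx \frac{1}{\mu_0}\sqrt{a^{2} + G(\mu_0)\,a b}, \qquad G(\mu_0) = g_1 \mu_0 - g_2, \]

where μ₀ is the cosine of the refracted solar zenith angle and g₁ ≈ 0.425, g₂ ≈ 0.19 for the mid-point of the euphotic zone.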

Relevance:

10.00%

Publisher:

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes for the same real income for a given year are higher than for the base year, the real tax burden has increased. If they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will also increase the real tax burden if the tax schedules are kept nominally the same. In the calculations of the study it is assumed that the real income remains constant, so that we can get an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:
- Gross income (income subject to central and local government taxes).
- Deductions from gross income and taxes calculated according to tax schedules.
- The central government income tax schedule (progressive income taxation).
- The rates for the local taxes and for social security payments (proportional taxation).
In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if the income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased from one year to the next, on average. The main question remains how aggregation over all income levels should be performed. In order to determine the average real changes in the tax scales, the difference functions (differences between actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether the taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between taxes determined by the new tax scales and the old tax scales (see the sketch after this abstract). The formula answers the question of how much more or less will be paid in taxes according to the new tax scales than according to the old ones, when the real income situation corresponds to the old situation. In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s. The real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase due to the inflation corrections of tax schedules. In 1989 the tax schedule fell drastically, and from the mid-1990s tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined after the war for some years. From the beginning of the 1960s to the mid-1970s it nearly doubled. From the mid-1990s the real income tax ratio has fallen by about 35%.
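A minimal rendering of the Laspeyres-type ratio described in the abstract above (the notation is illustrative, not the study's own):

    \[ I_L = \frac{\sum_k T_1(y_k)\,w_k}{\sum_k T_0(y_k)\,w_k}, \]

where T_0 and T_1 are the tax functions of the old and new tax scales, y_k the real incomes held fixed at their base-year values, and w_k the taxable-income weights; I_L > 1 indicates that the new scales impose a heavier real tax burden.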

Relevance:

10.00%

Publisher:

Abstract:

This study examines the intraday and weekend volatility on the German DAX. The intraday volatility is partitioned into smaller intervals and compared to a whole day's volatility. The estimated intraday variance is U-shaped, and the weekend variance is estimated at 19% of that of a normal trading day. The patterns in the intraday and weekend volatility are used to develop an extension to the Black-Scholes formula that forms a new time basis. Calendar or trading days are commonly used for measuring time in option pricing. The Continuous Time using Discrete Approximations model (CTDA) developed in this study uses a measure of time with smaller intervals, approaching continuous time. The model presented accounts for the lapse of time during trading only. Arbitrage pricing suggests that the option price equals the expected cost of hedging volatility during the option's remaining life. In this model, time is allowed to lapse as volatility occurs on an intraday basis. The measure of time is modified in CTDA to correct for the non-constant volatility and to account for the patterns in volatility.
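To make the time-basis idea concrete, a minimal sketch under assumptions of my own (not the CTDA specification): time to maturity is accumulated as a share of expected variance rather than in calendar days, so that a weekend contributes only 19% of a normal trading day.

    # Hypothetical sketch: accumulate variance-weighted time instead of calendar time.
    # The weights are illustrative; the thesis estimates them from DAX data.
    INTERVAL_WEIGHTS = {
        "trading_day": 1.00,   # one full trading day of variance
        "weekend": 0.19,       # weekend variance ~19% of a normal trading day
    }

    def variance_time(periods, trading_days_per_year=252.0):
        """Sum variance-weighted periods, expressed in years of trading time."""
        return sum(INTERVAL_WEIGHTS[p] for p in periods) / trading_days_per_year

    # Friday close to Monday close = one weekend plus one trading day.
    tau = variance_time(["weekend", "trading_day"])   # ~0.0047 years
    # tau would then be used as the time input of the option pricing formula.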

Relevance:

10.00%

Publisher:

Abstract:

The analysis of lipid compositions from biological samples has become increasingly important. Lipids have a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. In addition, the mechanisms regulating cell membrane lipid compositions are poorly understood, partially because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, the differences in the instrument response of different lipids and the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of the thesis is the development of methods for the mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentration affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids. The fourth article applies the methods developed by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
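One standard way to compute a theoretical isotopic peak distribution from a molecular formula, not necessarily the algorithm of the second article, is to convolve the isotope distributions of the constituent elements; a minimal sketch with illustrative abundance tables:

    # Hypothetical sketch of an isotope-pattern calculation by repeated convolution.
    # The isotope tables are abbreviated and illustrative only.
    from collections import defaultdict

    ISOTOPES = {
        "C": [(12.0000, 0.9893), (13.0034, 0.0107)],
        "H": [(1.00783, 0.99988), (2.01410, 0.00012)],
    }

    def convolve(dist_a, dist_b):
        """Combine two (mass, abundance) distributions."""
        out = defaultdict(float)
        for m1, p1 in dist_a:
            for m2, p2 in dist_b:
                out[round(m1 + m2, 4)] += p1 * p2
        return sorted(out.items())

    def isotope_pattern(formula):
        """formula: element -> atom count, e.g. {'C': 2, 'H': 6} for ethane."""
        dist = [(0.0, 1.0)]
        for element, count in formula.items():
            for _ in range(count):
                dist = convolve(dist, ISOTOPES[element])
        return dist

    print(isotope_pattern({"C": 2, "H": 6})[:3])   # first few isotopologue peaks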

Relevance:

10.00%

Publisher:

Abstract:

Modern elementary particle physics is based on quantum field theories. Currently, our understanding is that, on the one hand, the smallest structures of matter and, on the other hand, the composition of the universe are based on quantum field theories, which present the observable phenomena by describing particles as vibrations of the fields. The Standard Model of particle physics is a quantum field theory describing the electromagnetic, weak, and strong interactions in terms of a gauge field theory. However, it is believed that the Standard Model describes physics properly only up to a certain energy scale. This scale cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge fields W± and Z⁰. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences, which plague the Higgs boson mass in the Standard Model, cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, the lepton number violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present some viable signals for detecting the sneutrino-antisneutrino oscillation at colliders. At the e-gamma collider (at the International Linear Collider), the numbers of electron-sneutrino-antisneutrino oscillation signal events are quite high, and the backgrounds are quite small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, the oscillation probability formula where the sneutrinos are produced at rest is used. Here, however, we study a general oscillation probability. The Lorentz factor and the distance at which the measurement is made inside the detector can have effects, especially when the sneutrino decay width is very small. These effects are demonstrated for a certain scenario at the LHC.
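For orientation, the standard mixing formulas being referred to (general formalism, not the parameter values of the dissertation): for a sneutrino produced at rest with mass splitting Δm and total decay width Γ, the probability of it having oscillated into the antisneutrino by proper time t, and the time-integrated oscillation fraction, are

    \[ P(t) = e^{-\Gamma t}\,\sin^{2}\!\left(\tfrac{1}{2}\,\Delta m\, t\right), \qquad \chi = \int_0^\infty \Gamma\, e^{-\Gamma t}\sin^{2}\!\left(\tfrac{1}{2}\,\Delta m\, t\right)\mathrm{d}t = \frac{x^{2}}{2\,(1 + x^{2})}, \quad x = \frac{\Delta m}{\Gamma}. \]

For a boosted sneutrino observed a distance L from the production point, the proper time is t = L/(γβ) in natural units, which is where the Lorentz-factor and detector-distance effects mentioned above enter.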

Relevance:

10.00%

Publisher:

Abstract:

This thesis treats Githa Hariharan's first three novels The Thousand Faces of Night (1992), The Ghosts of Vasu Master (1994) and When Dreams Travel (1999) as a trilogy, in which Hariharan studies the effects of storytelling from different perspectives. The thesis examines the relationship between embedded storytelling and the primary narrative level, and the impact of character-bound storytelling on personal development and interpersonal relationships. Thus, I assume that an analysis of the instabilities between characters, and of the tensions between sets of values introduced through storytelling, displays the development of the central characters in the novels. My focus is on the tensions between different sets of values and knowledge systems and their impact on gender negotiations. The tensions are articulated through a resistance and/or adherence to a cultural narrative, that is, a formula which underlies specific narratives. Conveyed or disputed by embedded storytelling, the cultural narrative circumscribes what it means to be gendered in Hariharan's novels. The analysis centres on how the narratee in The Thousand Faces of Night and the storyteller in The Ghosts of Vasu Master relate to and are affected by the storytelling, and how they perceive their gendered positions. The analysis of the third novel, When Dreams Travel, focuses on storytelling and its relationship to power and representation. I argue that Hariharan's use of embedded storytelling is a way to renegotiate and even reconceptualise gender. Hariharan's primary concern in all three novels is the tensions between tradition or repetition, and change, and how they affect gender. Although the novels feature ancient mythical heroes, mice and flies, or are set in a fantasy world of jinnis and ghouls, they are, I argue, deeply concerned with contemporary issues. Whereas the first novel questions the demands set by family and society on the individual, the second strives to articulate an ethical relationship between the self and the other. The third novel examines the relationship between representation and power. These issues could not be more contemporary, and they loom large over both the regional and global arenas.