23 results for Double point curve

in Helda - Digital Repository of University of Helsinki


Relevance:

30.00%

Publisher:

Abstract:

This study concentrates on the contested concept of pastiche in literary studies. It offers the first detailed examination of the history of the concept from its origins in the seventeenth century to the present, showing how pastiche emerged as a critical concept in interaction with the emerging conception of authorial originality and the copyright laws protecting it. One of the key results of this investigation is the contextualisation of the postmodern debate on pastiche. Even though postmodern critics often emphasise the radical novelty of pastiche, they in fact resuscitate older positions and arguments without necessarily reflecting on their historical conditions. This historical background is then used to analyse the distinction between the primarily French conception of pastiche as the imitation of style and the postmodern notion of it as the compilation of different elements. The latter's vagueness and inclusiveness detracts from its value as a critical concept. The study thus concentrates on the notion of stylistic pastiche, challenging the widespread prejudice that it is merely an indication of lack of talent. Because it is multiply based on repetition, pastiche is in fact a highly ambiguous or double-edged practice that calls into question the distinction between repetition and the original, thereby undermining the received notion of individual unique authorship as a fundamental aesthetic value. Pastiche does not, however, constitute a radical upheaval of the basic assumptions on which the present institution of literature relies, since, in order to mark its difference, pastiche always refers to a source outside itself against which its difference is measured. Finally, the theoretical analysis of pastiche is applied to literary works. The pastiches written by Marcel Proust demonstrate how it can become an integral part of a writer's poetics: imitation of style is shown to provide Proust with a way of exploring the role of style as a connecting point between inner vision and reality. The pastiches of the Sherlock Holmes stories by Michael Dibdin, Nicholas Meyer and the duo Adrian Conan Doyle and John Dickson Carr illustrate the functions of pastiche within a genre, detective fiction, that is itself fundamentally repetitive. A.S. Byatt's Possession and D.M. Thomas's Charlotte use Victorian pastiches to investigate the conditions of literary creation in the age of postmodern suspicion of creativity and individuality. The study thus argues that the concept of pastiche has valuable insights to offer to literary criticism and theory, and that literary pastiches, though often dismissed in reviews and criticism, are a particularly interesting object of study precisely because of their characteristic ambiguity.

Relevance:

20.00%

Publisher:

Abstract:

My PhD thesis, The uneasy borders of desire: Magnus Enckell's representations of masculinities and femininities and the question of how to create the self, concentrates on the works of the Finnish fin-de-siècle artist Magnus Enckell (1870-1925). My thesis deals with representations of masculinities, femininities, sexualities and different identity positions. My research addresses the ways melancholy, androgyny, narcissism, and the themes of the Golden Age and the Double are represented in Enckell's oeuvre. These themes are analyzed by contextualizing them with different, but intersecting, discourses of varied scientific, artistic and occult ideas of the fin-de-siècle. The main point is to analyze how the subject is constructed, in both the Foucauldian and the Freudian sense, and what one has to know about oneself. My approaches are based on ideas expressed in different discourses such as queer theory, Michel Foucault's genealogical epistemology and knowledge-power theory, psychoanalysis, art history and visual culture studies. My starting point lies in Foucault's idea, expressed in The History of Sexuality, that the constitution of the homosexual as well as the heterosexual subject inaugurates possibilities for transgressive activities, e.g. by giving the sexualized subject a voice of its own. My main thesis is that Enckell's works, in their multiple and ambiguous ways, construct a phantasmatic position for the viewer, who may identify with different desires, construct or deconstruct a sexual identity, or try to define the truth about oneself. Enckell's works should be considered contradictory processes which both seduce the person to construct an identity and lure the person to pursue the deconstruction of a specific and permanent identity by celebrating ambiguity and discontinuity in one's identity. I suggest that the gazing subject feels pleasure in finding one's identity but must also face the exposure of the melancholic structure which forms the basis of sexual desire. The subject may try to resolve this melancholy by creating a phantasy of an original, unisexual being in which desires, sexualities, phantasies and identities have not yet diverged. This can be fantasized in terms of art, which forms a double for the melancholic subject, who is in this limited and imaginary way able to forget for a while one's existential solitude.

Relevance:

20.00%

Publisher:

Abstract:

The objectives of this study were to determine secular trends in diabetes prevalence in China and to develop simple risk assessment algorithms for screening individuals at high risk of diabetes or with undiagnosed diabetes in Chinese and Indian adults. Two consecutive population-based surveys in Chinese adults and a prospective study in Mauritian Indians were included. The Chinese surveys were conducted in randomly selected populations aged 20-74 years in 2001-2002 (n=14 592) and 35-74 years in 2006 (n=4416). A two-step screening strategy using fasting capillary plasma glucose (FCG) as the first-line screening test followed by standard 2-hour 75 g oral glucose tolerance tests (OGTTs) was applied to 12 436 individuals in 2001, while OGTTs were administered to all participants together with FCG in 2006 and to 2156 subjects in 2002. In Mauritius, two consecutive population-based surveys were conducted in Mauritian Indians aged 20-65 years in 1987 and 1992; 3094 Indians (1141 men), who did not have diagnosed diabetes at baseline, were re-examined with OGTTs in 1992 and/or 1998. Diabetes and pre-diabetes were defined following the 2006 World Health Organization/International Diabetes Federation criteria. The age-standardized, as well as age- and sex-specific, prevalence of diabetes and pre-diabetes in adult Chinese increased significantly from 12.2% and 15.4% in 2001 to 16.0% and 21.2% in 2006, respectively. A simple Chinese diabetes risk score was developed based on the data of the 2001-2002 survey and validated in the population of the 2006 survey. The risk score, based on β coefficients derived from the final logistic regression model, ranged from 3 to 32. When the score was applied to the population of the 2006 survey, the area under the receiver operating characteristic curve (AUC) of the score for screening undiagnosed diabetes was 0.67 (95% CI, 0.65-0.70), which was lower than the AUC of FCG (0.76 [0.74-0.79]) but similar to that of HbA1c (0.68 [0.65-0.71]). At a cut-off point of 14, the sensitivity and specificity of the risk score in screening undiagnosed diabetes were 0.84 (0.81-0.88) and 0.40 (0.38-0.41), respectively. In Mauritian Indians, body mass index (BMI), waist girth, family history of diabetes (FH), and glucose were confirmed to be independent predictors of developing diabetes. Predicted probabilities of developing diabetes derived from a simple Cox regression model fitted with sex, FH, BMI and waist girth ranged from 0.05 to 0.64 in men and from 0.03 to 0.49 in women. For predicting the onset of diabetes, the AUC of the predicted probabilities was 0.62 (95% CI, 0.56-0.68) in men and 0.64 (0.59-0.69) in women. At a cut-off point of 0.12, the sensitivity and specificity were 0.72 (0.71-0.74) and 0.47 (0.45-0.49) in men, and 0.77 (0.75-0.78) and 0.50 (0.48-0.52) in women, respectively. In conclusion, there was a rapid increase in the prevalence of diabetes in Chinese adults from 2001 to 2006. The simple risk assessment algorithms based on age, obesity and family history of diabetes showed moderate discrimination of diabetes from non-diabetes and may be used as a first-line screening tool for diabetes and pre-diabetes, and for health promotion purposes, in Chinese and Indian adults.
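
To make the screening arithmetic concrete, the sketch below shows how a points-based risk score derived from rounded logistic regression β coefficients can be applied and checked against a cut-off. The point values, variable bands and cut-off here are hypothetical placeholders for illustration, not the Chinese diabetes risk score developed in the thesis.

```python
# Minimal sketch of a points-based risk score (hypothetical point values,
# NOT the Chinese diabetes risk score developed in the thesis).

def risk_points(age, bmi, waist_cm, family_history):
    """Assign integer points per risk factor, mimicking rounded beta coefficients."""
    points = 0
    points += 0 if age < 40 else 4 if age < 55 else 8        # hypothetical age bands
    points += 0 if bmi < 24 else 3 if bmi < 28 else 6        # hypothetical BMI bands
    points += 5 if waist_cm >= 85 else 0                     # hypothetical waist cut-off
    points += 4 if family_history else 0
    return points

def sensitivity_specificity(subjects, outcomes, cut_off=14):
    """Compare score-positive status against true diabetes status (list of 0/1)."""
    flags = [risk_points(**s) >= cut_off for s in subjects]
    tp = sum(f and o for f, o in zip(flags, outcomes))
    fn = sum((not f) and o for f, o in zip(flags, outcomes))
    tn = sum((not f) and (not o) for f, o in zip(flags, outcomes))
    fp = sum(f and (not o) for f, o in zip(flags, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

subjects = [
    {"age": 62, "bmi": 29.1, "waist_cm": 95, "family_history": True},
    {"age": 35, "bmi": 22.4, "waist_cm": 78, "family_history": False},
]
outcomes = [1, 0]  # invented "true" diabetes status for the two subjects
print([risk_points(**s) for s in subjects])          # [23, 0]
print(sensitivity_specificity(subjects, outcomes))   # (1.0, 1.0) on this toy sample
```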

Relevance:

20.00%

Publisher:

Abstract:

There exist various proposals for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems, which are described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique, but that the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability to quantum computation could be a topic of further research.
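
To illustrate what a non-trivial fusion algebra means in practice, the following minimal sketch represents fusion rules as multiplicity tables and fuses composite charges. It uses the well-known Fibonacci model (labels 1 and t with t x t = 1 + t) purely as a stand-in; it is not the S_3 quantum double spectrum derived in the thesis.

```python
from itertools import product

# Fusion multiplicities for the Fibonacci model, used as a stand-in for a
# non-trivial fusion algebra (NOT the D(S_3) spectrum treated in the thesis).
FUSION = {
    ("1", "1"): {"1": 1},
    ("1", "t"): {"t": 1},
    ("t", "1"): {"t": 1},
    ("t", "t"): {"1": 1, "t": 1},   # t x t = 1 + t: the fusion outcome is not unique
}

def fuse(state_a, state_b):
    """Fuse two collections of anyon charges given as {label: multiplicity} dicts."""
    result = {}
    for (a, na), (b, nb) in product(state_a.items(), state_b.items()):
        for c, n in FUSION[(a, b)].items():
            result[c] = result.get(c, 0) + na * nb * n
    return result

# Fusing three t-anyons: the multiplicities count the distinct fusion channels,
# which is where the topologically protected computational subspace lives.
state = {"t": 1}
for _ in range(2):
    state = fuse(state, {"t": 1})
print(state)   # {'1': 1, 't': 2}: two independent ways for three t's to fuse to t
```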

Relevance:

20.00%

Publisher:

Abstract:

ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the history of Finnish science. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to map possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of hybrids 96.1% and of modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. Once the problems appearing during the learning curve of the project had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. The problems were typically seen in tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of any component. One sensor type among the three sensor manufacturers proved to have lower quality than the others: its sensors are very noisy and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process was highly successful.

Relevance:

20.00%

Publisher:

Abstract:

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit which is taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and the second paper of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is, in fact, the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find martingales common to different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that those will be well described by Loewner evolutions with random driving forces.
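
Since SLE is defined through the Loewner evolution with a Brownian driving function W_t = sqrt(kappa) B_t, a small numerical sketch can make the object concrete. The discretization below, composing elementary constant-driving slit maps to recover the curve tip, is a standard simulation approach and not the methods of the papers summarized above; all parameters are illustrative.

```python
import cmath
import math
import random

def sle_trace(kappa, n_steps=200, dt=1e-3, seed=0):
    """Approximate a chordal SLE_kappa trace by composing elementary slit maps.

    The driving function W_t = sqrt(kappa) * B_t is sampled at discrete times,
    and the curve tip at each time is recovered by pulling the current driving
    value back through the inverses of the incremental Loewner maps.
    """
    rng = random.Random(seed)
    w = [0.0]                                   # discretized driving function
    for _ in range(n_steps):
        w.append(w[-1] + math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0))

    def inverse_slit_map(z, xi):
        """Inverse of the constant-driving Loewner map g(z) = xi + sqrt((z-xi)^2 + 4dt)."""
        s = cmath.sqrt((z - xi) ** 2 - 4 * dt)
        if s.imag < 0:                          # pick the branch mapping into the upper half-plane
            s = -s
        return xi + s

    trace = []
    for k in range(1, n_steps + 1):
        z = complex(w[k], 0.0)                  # start from the current driving value
        for j in range(k, 0, -1):               # pull back through all earlier maps
            z = inverse_slit_map(z, w[j - 1])
        trace.append(z)
    return trace

points = sle_trace(kappa=8.0 / 3.0)             # kappa = 8/3: conjectured SAW scaling limit
print(points[-1])                               # tip of the approximate curve at the final time
```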

Relevance:

20.00%

Publisher:

Abstract:

The multiplier ideals of an ideal in a regular local ring form a family of ideals parametrized by non-negative rational numbers. As the rational number increases, the corresponding multiplier ideal remains unchanged until at some point it becomes strictly smaller. A rational number where this kind of diminishing occurs is called a jumping number of the ideal. In this manuscript we shall give an explicit formula for the jumping numbers of a simple complete ideal in a two-dimensional regular local ring. In particular, we obtain a formula for the jumping numbers of an analytically irreducible plane curve. We then show that the jumping numbers determine the equisingularity class of the curve.
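
For a sense of what jumping numbers look like in the simplest situation, the sketch below lists the jumping numbers below 1 of a quasi-homogeneous curve x^a + y^b with coprime exponents, using the classical description {i/a + j/b : i, j >= 1}. This well-known special case is shown only for orientation; it is not the general formula derived in the manuscript.

```python
from fractions import Fraction

def jumping_numbers_below_one(a, b):
    """Jumping numbers in (0, 1) of the curve x^a + y^b with gcd(a, b) = 1.

    Uses the classical description {i/a + j/b : i, j >= 1} for this special
    quasi-homogeneous case; illustration only, not the thesis's general formula.
    """
    numbers = {Fraction(i, a) + Fraction(j, b)
               for i in range(1, a) for j in range(1, b)}
    return sorted(x for x in numbers if x < 1)

# The ordinary cusp x^2 + y^3 has a single jumping number below 1, namely 5/6,
# which is also its log canonical threshold.
print(jumping_numbers_below_one(2, 3))   # [Fraction(5, 6)]
print(jumping_numbers_below_one(2, 5))   # [Fraction(7, 10), Fraction(9, 10)]
```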

Relevance:

20.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. These problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or occur just by chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
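
For background, the best local alignment score whose significance is being assessed is typically computed with Smith-Waterman dynamic programming. The sketch below, with made-up match, mismatch and gap scores, shows only the score computation; it is not the p-value bound developed in the thesis.

```python
def local_alignment_score(s, t, match=2, mismatch=-1, gap=-2):
    """Best Smith-Waterman local alignment score between strings s and t."""
    cols = len(t) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(s) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            curr[j] = max(0,                    # a local alignment may restart anywhere
                          prev[j - 1] + sub,    # match or mismatch
                          prev[j] + gap,        # gap in t
                          curr[j - 1] + gap)    # gap in s
            best = max(best, curr[j])
        prev = curr
    return best

print(local_alignment_score("ACACACTA", "AGCACACA"))   # 10 with these example scores
```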

Relevance:

20.00%

Publisher:

Abstract:

Modern Christian theology has long struggled with the schism between the Bible and theology, and between biblical studies and systematic theology. Brevard Springs Childs is one of the biblical scholars who attempt to break down this “iron curtain” separating the two disciplines. The present thesis aims at analyzing Childs’ concept of theological exegesis in the canonical context. In the present study I employ the method of systematic analysis. The thesis consists of seven chapters. The first chapter is the introduction. The second chapter seeks to identify the most important elements influencing Childs’ methodology of biblical theology by sketching his academic development during his career. The third chapter deals with the crucial question of why and how the concept of the canon is so important for Childs’ methodology of biblical theology. In chapter four I analyze why and how Childs is dissatisfied with historical-critical scholarship, and I point out the differences and similarities between his canonical approach and historical criticism. The fifth chapter discusses Childs’ central concepts of theological exegesis by investigating whether a Christocentric approach is an appropriate way of creating a unified biblical theology. In the sixth chapter I present a critical evaluation and methodological reflection of Childs’ theological exegesis in the canonical context. The final chapter sums up the key points of Childs’ methodology of biblical theology. The basic results of this thesis are as follows: First, the fundamental elements of Childs’ theological thinking are rooted in the Reformed theological tradition and in modern theological neo-orthodoxy and its most prominent theologian, Karl Barth. The American Biblical Theological Movement and the controversy between Protestant liberalism and conservatism in the modern American context cultivated his theological sensitivity and position. Second, Childs attempts to dispel the negative influences of the historical-critical method by establishing canon-based theological exegesis leading to a confessional biblical theology. Childs employs terminology such as canonical intentionality, the wholeness of the canon, the canon as the most appropriate context for doing biblical theology, and the continuity of the two Testaments in order to put his canonical program into effect. Childs demonstrates forcefully the inadequacies of the historical-critical method for creating biblical theology in biblical hermeneutics, doctrinal theology, and pastoral practice. His canonical approach endeavors to establish and create a post-critical Christian biblical theology, and works within the traditional framework of faith seeking understanding. Third, Childs’ biblical theology has a double task, descriptive and constructive: the former connects biblical theology with exegesis, the latter with dogmatic theology. He attempts to use a comprehensive model which combines a thematic investigation of the essential theological contents of the Bible with a systematic analysis of the contents of the Christian faith. Childs also attempts to unite Old Testament theology and New Testament theology into one unified biblical theology. Fourth, some problematic points of Childs’ thinking need to be mentioned. For instance, his emphasis on the final form of the text of the biblical canon is highly controversial, yet Childs firmly believes in it and even regards it as the cornerstone of his biblical theology.
The relationship between the canon and the doctrine of biblical inspiration remains weak. He does not clearly define whether Scripture is God’s word or whether it only “witnesses” to it. Childs’ concepts of “the word of God” and “divine revelation” remain unclear, and their ontological status is ambiguous. Childs’ theological exegesis in the canonical context is a new attempt in the modern history of Christian theology. It expresses his sincere effort to create a path for doing biblical theology. Certainly, it was just a modest beginning of a long process.

Relevance:

20.00%

Publisher:

Abstract:

In my dissertation I have studied St Teresa (1515-1582) in the light of medieval mystical theories. My research has two main levels: historical and theological. On the historical level I study St Teresa's personal history in the context of her family and Spanish society. On the theological level I study both St Teresa's mysticism and her religious experience in the light of medieval mysticism. St Teresa wrote a book called Life, which is her narrative autobiography and the story of her mystical spiritual formation. She reflected on herself through biblical texts, interpreting them along the lines of biblical hermeneutics such as allegory, typology, tropology and anagogy. In addition, she read others' life stories from her own time, but reflected on herself only slightly from a sociological point of view. She used irony as a means to gain acceptance for her authority and her motive to write. Her position has been described as a double bind: she wrote at the request of educated men, yet for uneducated women, being uneducated herself. She used irony as a means to win appreciation for women, to counter the negative attributes attached to them, and to gain the authority to teach them mystical spirituality, the Bible and prayer. In this ironic tendency she was a feminist writer. In order to understand medieval mysticism, I have written in the first chapter a review of the main trends in medieval mysticism in connection with the classical theories of emotion. Two medieval mystical theories play an important role in St Teresa's mysticism: one is love mysticism and the other is the tripartite way of mysticism (purification, illumination and union). The classical philosophical theories of emotion play a role in both patterns. St Teresa interpreted the theory of love mysticism in the traditional way, stressing the spiritual meaning of love in connection with God and one's neighbors. Love is an emotion bound up with other emotions, but not all objects of love strengthen spiritual love. In the tripartite way of mysticism, purification means finding biblical values in life and practicing meditative self-knowledge, theologically interpreted. In illumination, human understanding has to be illuminated by God and united to mystical knowledge from God. St Teresa considered illumination a way to learn things. Illumination also has psychological aspects, such as the recognition of the many trials and pains that come from life on earth. Theologically interpreted, in illumination one should die to oneself and let oneself be transformed and renewed by God. I have also written a review of the modern philosophical discussion of personal identity, in which memory and mental experiences are important creators of personal identity. St Teresa bound medieval mystical teaching together with her personal religious experience. Her personal identity is by its character based on her narrative life story, in which mental experiences play an important role. Previous researchers have labelled St Teresa an ecstatic person whose experiences brought ecstatic phenomena into mysticism. These phenomena, combined with visions, have in one respect made her a person who brought physical and visionary tendencies into theology. In spite of that, she also represents a modern tendency of trying to give words to experiences which at first seem exceptional and extreme and which are easily interpreted one-sidedly as physical, sexual, or left unsaid.
In another respect I have stressed St Teresa's personality, which was represented as both strong and weak. For her, the strong personality was demonstrated in religious faith and its practice; the weak personality was her natural personal identity. St Teresa saw a unifying aspect in almost everything. Firstly, her mysticism was aimed towards union with God and, secondly, the unifying aspects and common rules of human relations in community life were central. Union with God is based on the fact that God lives in the centre of the soul, where God is present in the Trinitarian way. The picture of God in ourselves is a mirror, but to get to know God better is to recognize his/her presence in us. When the soul recognizes itself as a dwelling place of God, it knows itself as God knows him/herself. There is equality between God and the soul. To be a Christian means to participate in God in his Trinitarian being. Participation in God is a process of divinization that leads a person into transformation, change and renewal. The unitive aspect also includes knowledge of the opposites between the experience of community and solitude, as well as community and separateness. As a founder of monasteries, St Teresa practiced a theology of poverty. She renewed monastic life by founding a rule called discalced, which stressed ascetic tendencies. After initial difficulties, her work was supported by both societal and church leaders. She wrote about the monasteries, including in her descriptions at times seriousness, at times humor and irony. Her stories have been called picaresque histories that contain stories of ordinary laymen and many unexpected events. She exercised a kind of Bakhtinian dialogue in her letters. St Teresa stressed virtues such as sacrifice, determination and courage in monastic life. Most of what she taught about virtues is based on biblical spirituality, but there are also psychological tendencies in her writings. The theological and pedagogical advice is mixed with psychology, but she herself made no distinction between the different aspects of her teaching. To understand St Teresa and her mysticism is to recognize that she mixes her personal religious experience and mysticism, which widens mysticism into religious experience in a new way, although this also corresponds to the very definition of mysticism. St Teresa concentrated on mental-spiritual experiences, and the aim of her mystical teaching was to produce a human mind well cared for, like a garden that has God as its gardener.

Relevance:

20.00%

Publisher:

Abstract:

Lidocaine is a widely used local anaesthetic agent that also has anti-arrhythmic effects. It is classified as a type Ib anti-arrhythmic agent and is used to treat ventricular tachycardia or ventricular fibrillation. Lidocaine is eliminated mainly by metabolism, and less than 5% is excreted unchanged in urine. Lidocaine is a drug with a medium to high extraction ratio, and its bioavailability is about 30%. Based on in vitro studies, the earlier understanding was that CYP3A4 is the major cytochrome P450 (CYP) enzyme involved in the metabolism of lidocaine. When this work was initiated, there was little human data on the effect of inhibitors of CYP enzymes on the pharmacokinetics of lidocaine. Because lidocaine has a low therapeutic index, medications that significantly inhibit lidocaine clearance (CL) could increase the risk of toxicity. These studies investigated the effects of some clinically important CYP1A2 and CYP3A4 inhibitors on the pharmacokinetics of lidocaine administered by different routes. All of the studies were randomized, double-blind, placebo-controlled cross-over studies in two or three phases in healthy volunteers. Pretreatment with clinically relevant doses of the CYP3A4 inhibitors erythromycin and itraconazole or the CYP1A2 inhibitors fluvoxamine and ciprofloxacin was followed by a single dose of lidocaine. Blood samples were collected to determine the pharmacokinetic parameters of lidocaine and its main metabolites monoethylglycinexylidide (MEGX) and 3-hydroxylidocaine (3-OH-lidocaine). Itraconazole and erythromycin had virtually no effect on the pharmacokinetics of intravenous lidocaine, but erythromycin slightly prolonged the elimination half-life (t½) of lidocaine (Study I). When lidocaine was taken orally, both erythromycin and itraconazole increased the peak concentration (Cmax) and the area under the concentration-time curve (AUC) of lidocaine by 40-70% (Study II). Compared with placebo and itraconazole, erythromycin increased the Cmax and the AUC of MEGX by 40-70% when lidocaine was given intravenously or orally (Studies I and II). The pharmacokinetics of inhaled lidocaine was unaffected by concomitant administration of itraconazole (Study III). Fluvoxamine reduced the CL of intravenous lidocaine by 41% and prolonged the t½ of lidocaine by 35%. The mean AUC of lidocaine increased 1.7-fold (Study IV). After oral administration of lidocaine, fluvoxamine increased the mean AUC of lidocaine 3-fold and the Cmax 2.2-fold (Study V). During pretreatment with fluvoxamine combined with erythromycin, the CL of intravenous lidocaine was 53% smaller than during placebo and 21% smaller than during fluvoxamine alone. The t½ of lidocaine was significantly longer during the combination phase than during the placebo or fluvoxamine phase. The mean AUC of intravenous lidocaine increased 2.3-fold and the Cmax 1.4-fold (Study IV). After oral administration of lidocaine, concomitant fluvoxamine and erythromycin increased the mean AUC of lidocaine 3.6-fold and the Cmax 2.5-fold. The t½ of oral lidocaine was significantly longer during the combination phase than during the placebo phase (Study V). When lidocaine was given intravenously, the combination of fluvoxamine and erythromycin prolonged the t½ of MEGX by 59% (Study IV). Compared with placebo, ciprofloxacin increased the mean Cmax and AUC of intravenous lidocaine by 12% and 26%, respectively. The mean plasma CL of lidocaine was reduced by 22% and its t½ prolonged by 7% (Study VI).
These studies clarify the principal role of CYP1A2 and suggest only a modest role of CYP3A4 in the elimination of lidocaine in vivo. The inhibition of CYP1A2 by fluvoxamine considerably reduces the elimination of lidocaine. Concomitant use of fluvoxamine and the CYP3A4 inhibitor erythromycin further increases lidocaine concentrations. The clinical implication of this work is that clinicians should be aware of the potentially increased toxicity of lidocaine when used together with inhibitors of CYP1A2 and particularly with the combination of drugs inhibiting both CYP1A2 and CYP3A4 enzymes.
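
As background on the reported parameters, here is a minimal non-compartmental sketch: AUC by the linear trapezoidal rule, clearance as dose/AUC for an intravenous dose, and t½ from the terminal log-linear slope. The sampling times and concentrations are invented for illustration and are not data from these studies.

```python
import math

def pk_parameters(times_h, conc_mg_per_l, iv_dose_mg, n_terminal=3):
    """Non-compartmental estimates: AUC (trapezoidal), CL = dose/AUC, terminal t1/2."""
    # AUC from the first to the last sample by the linear trapezoidal rule.
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times_h, times_h[1:],
                                        conc_mg_per_l, conc_mg_per_l[1:]))
    clearance = iv_dose_mg / auc                        # L/h, valid for an IV dose
    # Terminal elimination rate constant from a log-linear fit to the last samples.
    ts = times_h[-n_terminal:]
    logs = [math.log(c) for c in conc_mg_per_l[-n_terminal:]]
    t_mean = sum(ts) / n_terminal
    log_mean = sum(logs) / n_terminal
    k_el = -(sum((t - t_mean) * (l - log_mean) for t, l in zip(ts, logs))
             / sum((t - t_mean) ** 2 for t in ts))
    return auc, clearance, math.log(2) / k_el

# Invented sampling schedule and concentrations after a 100 mg IV dose.
times = [0.25, 0.5, 1, 2, 4, 6, 8]                      # hours
conc = [3.2, 2.8, 2.2, 1.4, 0.6, 0.28, 0.13]            # mg/L
auc, cl, t_half = pk_parameters(times, conc, iv_dose_mg=100)
print(f"AUC ~ {auc:.1f} mg*h/L, CL ~ {cl:.1f} L/h, t1/2 ~ {t_half:.1f} h")
```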

Relevance:

20.00%

Publisher:

Abstract:

Lipid analysis is commonly performed by gas chromatography (GC) in laboratory conditions. Spectroscopic techniques, however, are non-destructive and can be implemented noninvasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, with diets high in saturated fat known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocyte insulin sensitivity correlates positively with their saturated fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is needed to provide further insights into tissue function in lipid metabolism. In this thesis two spectroscopic techniques were developed for in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited for multivariate analysis. Infrared spectra are characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo. 1H-MRS has been limited in its ability to analyze triglyceride composition due to poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies. Using multivariate curve resolution (MCR), we were able to resolve the overlapping double bond absorbances of monounsaturated fat and polyunsaturated fat. MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo time behaviour of triglycerides, it was concluded that the use of long echo times improved peak separation, with T2 weighting having a negligible impact. It was also discovered that the echo time behaviour of the methyl resonance of omega-3 fats differs from that of other fats due to characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of 1H-MRS of adipose tissue in vivo and GC of adipose tissue biopsies in humans showed that long-TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that calculation of fatty acid fractions from 1H-MRS data is unreliable and should not be used. Omega-3 fatty acid content derived from long-TE in vivo spectra (TE = 540 ms) correlated with the total omega-3 fatty acid concentration measured by GC. The long-TE protocol used for the adipose tissue studies was subsequently extended to the analysis of liver fat composition. Respiratory triggering and a long TE resulted in spectra with the olefinic and tissue water resonances resolved. Conversion of the derived unsaturation to double bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition.
In patients with the metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue. The higher saturation observed in liver fat may be a result of a higher rate of de novo lipogenesis in the liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo. The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
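
To illustrate the kind of arithmetic behind converting the measured unsaturation to double bond content per fatty acid, the sketch below uses the proton bookkeeping commonly assumed for triglyceride 1H spectra (three methyl protons per fatty acid chain, two olefinic protons per double bond). The peak areas are invented, and this is not necessarily the exact calculation used in the thesis.

```python
def double_bonds_per_fatty_acid(olefinic_area, methyl_area):
    """Estimate mean double bonds per fatty acid chain from 1H-MRS peak areas.

    Assumes two olefinic protons per C=C double bond and three methyl protons
    per fatty acid chain, the usual proton bookkeeping for triglyceride spectra.
    """
    n_chains = methyl_area / 3.0           # relative number of fatty acid chains
    n_double_bonds = olefinic_area / 2.0   # relative number of double bonds
    return n_double_bonds / n_chains

# Invented peak areas (arbitrary units), purely for illustration.
print(double_bonds_per_fatty_acid(olefinic_area=2.4, methyl_area=3.0))   # 1.2
```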

Relevance:

20.00%

Publisher:

Abstract:

Active mobilisation after repair of a finger flexor tendon injury has been shown to lead to a better functional outcome than the dynamic mobilisation currently in general use. The problem with active mobilisation is an increased risk of repair rupture, owing to the insufficient strength of current suture techniques. The strength of tendon repair has been improved by developing multi-strand suture techniques, in which several parallel core sutures are placed in the tendon; their clinical use, however, is limited by the complicated and time-consuming technical execution. Non-absorbable suture materials are generally used in flexor tendon repair of the hand, and the bioabsorbable sutures currently available lose their strength too quickly relative to tendon healing. The tensile strength half-life and tissue properties of bioabsorbable lactide stereocopolymer (PLDLA) 96/4 suture have previously been found suitable for flexor tendon repair. The aim of this study was to develop a flexor tendon repair method that withstands immediate active mobilisation and is simple to perform, using bioabsorbable PLDLA 96/4 material. The biomechanical properties of five commonly used flexor tendon sutures were analysed in static tensile testing to determine the effect of the structural properties of the core suture, 1) the number of strands (threads), 2) the thread calibre and 3) the suture configuration, on the failure and strength of the tendon repair. Visible gapping of the tendon repairs was found to begin when the peripheral suture failed, at the yield point of the force-elongation curve. Increasing the number of core suture threads improved the holding capacity of the suture in the tendon and increased the yield force of the repair. By contrast, using a thicker (stronger) thread or changing the suture configuration did not affect the yield force. Based on these results, the possibility of increasing the holding capacity of the suture in the tendon was studied with a simple multi-strand suture, in which the core suture was made with a three-strand polyester thread or with a three-strand polyester thread of tape-like structure. The tape-like structure significantly increased the holding capacity of the suture in the tendon, improving both the yield force and the maximum force; the strength of the repair exceeded the level of loading imposed on a tendon repair by active mobilisation. The suitability of PLDLA 96/4 thread for flexor tendon repair was assessed by studying its biomechanical properties and knot-holding properties in static tensile testing, compared with the braided polyester thread most commonly used in tendon repair (Ticron®). PLDLA thread was found to be well suited to flexor tendon repair, as it stretches less than polyester thread and its knots hold better. In the final phase, the strength of a tendon repair made with a three-strand, tape-like repair device manufactured from PLDLA 96/4 thread was studied in static tensile testing and in cyclic loading, which simulates the repetitive loading of mobilisation better than static testing does. The strength of the PLDLA repair exceeded the strength required for active mobilisation in both static and cyclic loading. Tape-like, flat suture material has not previously been studied or used in flexor tendon repair of the hand. In this study, the tape-like structure of the suture material significantly improved the strength of the tendon repair, which is thought to result from the increased contact area between the tendon and the suture material, preventing the suture from cutting through the tendon.
In this study, the strength of a tendon repair made with a three-strand thread of tape-like structure, manufactured from bioabsorbable PLDLA material, reached the level required by active mobilisation. In addition, the new method is easy to use and avoids the problems associated with the technical execution of conventional multi-strand sutures.

Relevance:

20.00%

Publisher:

Abstract:

The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer -t = p^2 theta^2 up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 um), fast response (O(10 ns)) to particles, and radiation hardness up to 10^14 n/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 um, compatible with the requirements of the experiment.
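
For context, the luminosity-independent method combines the optical theorem with the total rate: sigma_tot = 16*pi/(1+rho^2) * (dN_el/dt at t=0) / (N_el + N_inel). The sketch below evaluates this relation with invented event counts, purely to show how the simultaneously measured quantities enter; rho and the counts are placeholders.

```python
import math

def sigma_tot_luminosity_independent(dNel_dt_at_0, n_el, n_inel, rho=0.14):
    """Luminosity-independent total cross-section:

        sigma_tot = 16*pi / (1 + rho^2) * (dN_el/dt|_{t=0}) / (N_el + N_inel)

    dN_el/dt is given in events/GeV^2 and the rates in events; the factor
    (hbar*c)^2 = 0.389 GeV^2*mb converts the result from GeV^-2 to millibarn.
    rho is the ratio of the real to the imaginary part of the forward elastic
    amplitude (a typical value is used as the default here).
    """
    hbarc2_mb = 0.389
    sigma_gev2 = 16 * math.pi / (1 + rho ** 2) * dNel_dt_at_0 / (n_el + n_inel)
    return sigma_gev2 * hbarc2_mb

# Invented event counts, for illustration only.
print(sigma_tot_luminosity_independent(dNel_dt_at_0=5.2e8, n_el=2.0e7, n_inel=6.0e7))
```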