952 results for Analytic number theory
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as its charge, magnetic moment, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of two spin-independent (scalar) and four spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed to deal with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach, using Mathematica and FORM for the algebraic tasks and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants that emerge at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
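For the classical analogy invoked above, the induced dipole moments and the corresponding effective interaction of a polarizable system can be written as follows. This is a reference sketch in one common Gaussian-unit convention of the Compton scattering literature; factors of 4π vary between conventions, and this is not taken from the thesis itself:

```latex
\vec{p} \;=\; 4\pi\,\alpha_{E}\,\vec{E},
\qquad
\vec{m} \;=\; 4\pi\,\beta_{M}\,\vec{H},
\qquad
H_{\mathrm{eff}} \;=\; -\,\frac{4\pi}{2}\left( \alpha_{E}\,\vec{E}^{\,2} + \beta_{M}\,\vec{H}^{\,2} \right)
```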
Abstract:
The aim of the thesis is to propose Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
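As an illustration of the latent structure considered here, the following is a minimal sketch of simulating graded responses from a multiunidimensional (Samejima-type) graded response model with correlated traits. All dimensions, sample sizes, and parameter ranges are hypothetical choices, and the Bayesian MCMC estimation itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items_per_dim, n_dims, n_cats = 1000, 10, 2, 4

# Correlated latent traits (multiunidimensional: each item loads on one trait)
corr = 0.5
cov = np.full((n_dims, n_dims), corr) + (1 - corr) * np.eye(n_dims)
theta = rng.multivariate_normal(np.zeros(n_dims), cov, size=n_persons)

n_items = n_dims * n_items_per_dim
a = rng.uniform(0.8, 2.0, size=n_items)                       # discriminations
b = np.sort(rng.normal(0, 1, size=(n_items, n_cats - 1)), 1)  # ordered thresholds
item_dim = np.repeat(np.arange(n_dims), n_items_per_dim)      # item-to-trait map

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Cumulative category probabilities P(Y >= k), then differences give P(Y = k)
t = theta[:, item_dim]                                        # (persons, items)
p_ge = sigmoid(a[None, :, None] * (t[:, :, None] - b[None, :, :]))
p_cum = np.concatenate([np.ones(p_ge.shape[:2] + (1,)), p_ge,
                        np.zeros(p_ge.shape[:2] + (1,))], axis=2)
probs = p_cum[:, :, :-1] - p_cum[:, :, 1:]                    # (persons, items, cats)

# Draw graded responses by inverting the category CDF
u = rng.random(probs.shape[:2] + (1,))
responses = (u > np.cumsum(probs, axis=2)).sum(axis=2)
print(responses.shape, responses.min(), responses.max())
```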
Abstract:
This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
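To illustrate the schema-free flavor of this kind of analysis, the sketch below infers a crude content-model class for every element of an XML instance. The four-way split used here (empty, text-only, mixed, element-only) is a hypothetical simplification of the thesis's eight patterns, intended only to show how pattern classes can be derived from actual documents with no background schema:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def classify(elem):
    # Crude content-model class from the element instance alone
    has_children = len(elem) > 0
    has_text = bool((elem.text or "").strip()) or any(
        (c.tail or "").strip() for c in elem
    )
    if not has_children and not has_text:
        return "empty (marker-like)"
    if not has_children:
        return "text-only (atom-like)"
    if has_text:
        return "mixed (inline-like)"
    return "element-only (container-like)"

def survey(xml_string):
    # Collect the set of observed classes per element name across the document
    root = ET.fromstring(xml_string)
    observed = defaultdict(set)
    for elem in root.iter():
        observed[elem.tag].add(classify(elem))
    return observed

doc = "<article><p>Some <em>mixed</em> text</p><br/><sec><p>x</p></sec></article>"
for tag, classes in survey(doc).items():
    print(tag, sorted(classes))
```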
Abstract:
This thesis is concerned with differential equations of Feynman integrals. A Feynman integral depends on a dimension parameter D and can, for integer dimension, be represented as a projective integral; this is the so-called Feynman parameter representation. Depending on the dimension, such an integral can diverge. As a function of D, one obtains a meromorphic function on all of C. A divergent integral can therefore be replaced by a Laurent series, and its coefficients move into the center of interest. This procedure is known as dimensional regularization. All terms of such a Laurent series of a Feynman integral are periods in the sense of Kontsevich and Zagier. I describe a new method for the computation of differential equations of Feynman integrals. Usually one uses the so-called integration-by-parts (IBP) identities for this purpose. The new method uses the theory of Picard–Fuchs differential equations. In the case of projective or quasi-projective varieties, the computation of such a differential equation is based on the so-called Griffiths–Dwork reduction. I first describe the method for fixed integer dimension. After a suitable shift of the dimension, one directly obtains a period and hence a Picard–Fuchs differential equation. This equation is inhomogeneous, since the integration domain has a boundary and therefore only represents a relative cycle. With the help of dimensional recurrence relations, which go back to Tarasov, the solution in the original dimension can then be determined in a second step. I also describe a method based on the Griffiths–Dwork reduction for computing the differential equation directly for arbitrary dimension. This method is generally applicable and avoids changes of dimension. The success of the method depends on the ability to solve large systems of linear equations. I give examples of integrals of graphs with two and three loops. Tarasov gives a basis of integrals for graphs with two loops and two external edges; I determine differential equations for the integrals of this basis. As the most important example, I compute the differential equation of the so-called sunrise graph with two loops in the general case of arbitrary masses. For special values of D, this is an inhomogeneous Picard–Fuchs equation of a family of elliptic curves. The sunrise graph is particularly interesting because an analytic solution could be found only by means of this method, and because it is the simplest graph whose master integrals are not given by polylogarithms. I also give an example of a graph with three loops, in which the Picard–Fuchs equation of a family of K3 surfaces appears.
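For orientation, the Feynman parameter representation mentioned above takes the following standard form for a graph with l loops and n internal edges, with ν = ν₁ + … + νₙ and U and F the first and second Symanzik polynomials. The conventions follow the common literature and are not necessarily those of the thesis:

```latex
I(D) \;=\; \frac{\Gamma\!\left(\nu - l D/2\right)}{\prod_{j=1}^{n} \Gamma(\nu_j)}
\int_{x_j \ge 0} d^n x \;
\delta\!\Big(1 - \sum_{j=1}^{n} x_j\Big)
\Big( \prod_{j=1}^{n} x_j^{\nu_j - 1} \Big)\,
\frac{\mathcal{U}^{\,\nu - (l+1)D/2}}{\mathcal{F}^{\,\nu - l D/2}}
```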
Abstract:
Objective: Significant others are central to patients' experience and management of their cancer illness. Building on our validation of the Distress Thermometer (DT) for family members, this investigation examines individual and collective distress in a sample of cancer patients and their matched partners, accounting for the aspects of gender and role. Method: Questionnaires including the DT were completed by a heterogeneous sample of 224 couples taking part in a multisite study. Results: Our investigation showed that male patients (34.2%), female patients (31.9%), and male partners (29.1%) exhibited very similar levels of distress, while female partners (50.5%) exhibited much higher levels of distress according to the DT. At the dyad level, just over half of the total sample contained at least one individual reporting significant levels of distress. Among dyads with at least one distressed person, the proportion of dyads where both individuals reported distress was greatest (23.6%). Gender and role analyses revealed that males and females were not equally distributed among the four categories of dyads (i.e. dyads with no distress; dyads where solely the patient or dyads where solely the partner is distressed; dyads where both are distressed). Conclusion: A remarkable number of dyads reported distress in one or both partners. Diverse patterns of distress within dyads suggest varying risks of psychosocial strain. Screening patients' partners in addition to patients themselves may enable earlier identification of risk settings. The support offered to either member of such dyads should account for their role- and gender-specific needs. Copyright © 2010 John Wiley & Sons, Ltd.
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field seeks to combine node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. In this work, modeling techniques are developed to operate at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that allows the propagation of information; the strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategy of its neighbors. The techniques of spatial game theory already established by Nowak are repeated to explain two basic cases and validate the implementation of the code. Two novel modifications that build on this network and may reflect neural networks are introduced in Chapters 3 and 4. Results: The introduction of the two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics with an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics, more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
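A minimal sketch of the kind of baseline model described in Methods, a Nowak–May spatial game on a periodic lattice with the mutation modification added, is given below. The payoff value, mutation rate, and grid size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, mu, steps = 50, 1.7, 0.01, 200  # grid size, temptation payoff, mutation rate

grid = (rng.random((n, n)) < 0.5).astype(int)  # 1 = cooperator, 0 = defector

def neighbor_shifts():
    return [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy]

for _ in range(steps):
    # Payoff against the 8-cell Moore neighborhood (periodic boundaries)
    coop_neighbors = np.zeros((n, n))
    for dx, dy in neighbor_shifts():
        coop_neighbors += np.roll(np.roll(grid, dx, 0), dy, 1)
    # Cooperators earn 1 per cooperating neighbor; defectors earn b per one
    payoff = np.where(grid == 1, coop_neighbors, b * coop_neighbors)

    # Nowak-May update: imitate the best-scoring site among self and neighbors
    best_payoff, best_strategy = payoff.copy(), grid.copy()
    for dx, dy in neighbor_shifts():
        p = np.roll(np.roll(payoff, dx, 0), dy, 1)
        s = np.roll(np.roll(grid, dx, 0), dy, 1)
        better = p > best_payoff
        best_payoff = np.where(better, p, best_payoff)
        best_strategy = np.where(better, s, best_strategy)
    grid = best_strategy

    # Mutation: a small chance each site flips, keeping dynamics from locking up
    flips = rng.random((n, n)) < mu
    grid = np.where(flips, 1 - grid, grid)

print("fraction cooperating:", grid.mean())
```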
Abstract:
In this critical analysis of sociological studies of the political subsystem in Yugoslavia since the fall of communism, Mr. Ilic examined the work of the majority of leading researchers of politics in the country between 1990 and 1996. Where the question of continuity was important, he also looked at previous research by the writers in question. His aim was to demonstrate the overall extent of existing research and at the same time to identify its limits and the social conditions which defined it. Particular areas examined included the problems of defining basic concepts and selecting the theoretically most relevant indicators; the sources of data, including the types of authentic materials exploited; problems of research work (contacts, field control, etc.); problems of analysis; and finally the problems arising from different relations with the people who commission the research. In the first stage of the research, looking at methods of defining key terms, special attention was paid to the analysis of the most frequently used terms such as democracy, totalitarianism, the political left and right, and populism. Numerous weaknesses were noted in the analytic application of these terms. In studies of the possibilities of creating a democratic political system in Serbia and its possible forms (majoritarian or consensual democracy), the profound social division of Serbian society was neglected. The left-right distinction tends to be identified with the government-opposition relation, in the manner of practical politics. The idea of populism was used to pass responsibility for the policy of war from the manipulator to the manipulated, while the concept of totalitarianism is used in a rather old-fashioned way, with echoes of the Cold War. In general, the terminology used in the majority of recent research on the political subsystem in Yugoslavia is characterised by a particular ideological style and by practical political material, rather than by developed theoretical effort. The second section of the analysis considered the wider theoretical background of the research and focused on studies of the processes of transformation and transition in Yugoslav society, particularly the work of Mladen Lazic and Silvano Bolcic, whom he sees as the most important and influential contemporary Yugoslav sociologists. Here Mr. Ilic showed that the meaning of empirical data is closely connected with the stratification schemes towards which they are oriented, so that the same data can have different meanings when shown through different schemes. He went on to place the observed theoretical frames in the context of the wider ideological understanding of the authors' ideas and research. Here the emphasis was on the formalistic character of such notions as command economy and command work, which were used in analysing the functioning and the collapse of communist society, although Mr. Ilic passed favourable judgement on Lazic's critique of political over-determination in its various attempts to explain the disintegration of the communist political (sub)system. The next stage of the analysis was devoted to the problem of empirical identification of the observed phenomena. Here again the notions of the political left and right were of key importance. He sees two specific problems in using these notions in talking about Yugoslavia, the first being that the process of transition in the FR Yugoslavia has hardly begun.
The communist government has in effect remained in power continuously since 1945, despite the introduction of a multi-party system in 1990. The process of privatisation of public property was interrupted at a very early stage, and the results of this are evident on the structural level in the continuous weakening of the social status of the middle class, and on the political level in the fact that the social structure and dominant form of property direct the majority of votes towards the communists in power. This has been combined with strong chauvinist confusion associated with the wars in Croatia and Bosnia, and these ideas were incorporated by all the relevant Yugoslav political parties, making it more difficult to differentiate between them empirically. In this context he cites the situation of the stream of political scientists who emerged from the Faculty of Political Science in Belgrade. During the time of the one-party regime, this faculty functioned as ideological support for official communist policy, and its teachers were unable to develop views which differed from the official line, treating all contrasting ideas in the same way and neglecting their differences. Following the introduction of a multi-party system, these authors changed their idea of a public enemy, but still retained an undifferentiated and theoretically undeveloped approach to the issue of the identification of political ideas. The fourth section of the work looked at problems of explanation in studying the political subsystem, and the attempt at an adequate causal explanation of the triumph of Slobodan Milosevic's communists at four successive elections was identified as the key methodological problem. The main problem Mr. Ilic isolated here was the neglect of structural factors in explaining the voters' choice. He then went on to look at the way empirical evidence is collected and studied, pointing out many mistakes in planning and determining the samples used in surveys, as well as the scientifically incorrect use of results. He found these weaknesses particularly noticeable in the works of representatives of the so-called nationalistic orientation in the Yugoslav sociology of politics, and he pointed out the practical political abuses which these methodological weaknesses made possible. He also identified similar types of mistakes in research by Serbian political parties made on the basis of party documentation and using methods of content analysis. He found various one-sided applications of survey data and looked at attempts to apply other sources of data (statistics, official party documents, various research results). Mr. Ilic concluded that there are two main sets of characteristics in modern Yugoslav sociological studies of political subsystems: there are a considerable number of surveys with ambitious aspirations to explain political phenomena, but at the same time there is a clear lack of a developed sociological theory of political (sub)systems. He feels that, in the absence of such theory, most researchers are over-ready to accept the theoretical solutions found for the interpretation of political phenomena in other countries. He sees a need for a stronger methodological basis for future research, either 1) in the complementary use of different sources and ways of collecting data, or 2) in including more of a historical dimension in different attempts to explain the political subsystem in Yugoslavia.
Abstract:
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper, a new moment-type estimator for the number of motor units in a muscle is defined, which is derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximative confidence intervals for a data set from a 31-year-old, healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte-Carlo-based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
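The following is a minimal sketch of the "random sums with independently thinned terms" idea under a deliberately simplified model: each unit fires independently with a known probability p, and the single-unit amplitude distribution is known. The paper's actual estimator and assumptions differ, and all numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simplified setup: N motor units, each firing independently
# with probability p per scan; only the summed response S is observed.
N_true, p, n_scans = 600, 0.3, 200
mu_amp, sd_amp = 50.0, 15.0  # assumed single-unit amplitude distribution

amps = rng.normal(mu_amp, sd_amp, size=N_true)
fires = rng.random((n_scans, N_true)) < p        # independent thinning
S = (fires * amps).sum(axis=1)                   # thinned random sums

# Moment estimator: E[S] = N * p * mu_amp  =>  N_hat = mean(S) / (p * mu_amp)
N_hat = S.mean() / (p * mu_amp)

# Bootstrap confidence interval for N_hat, as in the paper's practical evaluation
boot = np.array([
    rng.choice(S, size=n_scans, replace=True).mean() / (p * mu_amp)
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"N_hat = {N_hat:.0f}, 95% bootstrap CI = ({lo:.0f}, {hi:.0f})")
```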
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, the discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of the prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity at the corresponding mass-per-charge value, x, in that specimen. Given high coefficients of variation and other characteristics of the protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in the SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for the cancer, benign hyperplasia, and normal states of the prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate the sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct them. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
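A toy version of such a pipeline, binary peak indicators followed by boosting with decision stumps, might look as follows on synthetic spectra. The windowing and prominence threshold are hypothetical stand-ins for the paper's neighborhood-based peak definition, and the x-axis shift correction is omitted:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in for SELDI spectra: intensity traces on a common m/z grid
n_spec, n_points, n_windows = 200, 2000, 100
labels = rng.integers(0, 2, size=n_spec)        # 0 = normal, 1 = cancer (synthetic)
spectra = rng.gamma(2.0, 1.0, size=(n_spec, n_points))
# Add a discriminating peak region for the positive class
spectra[labels == 1, 950:1050] += 3.0 * np.exp(-np.linspace(-3, 3, 100) ** 2)

def binary_peak_features(trace, n_windows):
    # 1 if a local peak falls inside the window, 0 otherwise
    peaks, _ = find_peaks(trace, prominence=1.0)
    feats = np.zeros(n_windows, dtype=int)
    feats[np.unique(peaks * n_windows // len(trace))] = 1
    return feats

X = np.array([binary_peak_features(s, n_windows) for s in spectra])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# Boosting over binary predictors: the default decision stumps act as peak selectors
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```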
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
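The following sketch illustrates the shrinkage idea on synthetic intensities, pooling locus-level variance estimates toward a batch-level typical value. It is in the spirit of, but not identical to, the hierarchical model implemented in CRLMM, and all sizes, effects, and the prior weight are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: log-intensities for L loci across B batches
L, B, n_per_batch = 500, 3, 40
batch_shift = rng.normal(0, 0.3, size=B)        # synthetic batch effects
locus_mean = rng.normal(1.0, 0.1, size=L)
y = (locus_mean[:, None, None] + batch_shift[None, :, None]
     + rng.normal(0, 0.15, size=(L, B, n_per_batch)))

# Batch- and locus-specific moments
m = y.mean(axis=2)                              # (L, B) locus/batch means
v = y.var(axis=2, ddof=1)                       # (L, B) locus/batch variances

# Precision-weighted shrinkage of noisy locus variances toward the batch median
prior_df = 20.0                                 # hypothetical prior weight
v_batch = np.median(v, axis=0)                  # typical variance per batch
df = n_per_batch - 1
v_shrunk = (df * v + prior_df * v_batch[None, :]) / (df + prior_df)

print("raw variance spread:   ", v.std())
print("shrunk variance spread:", v_shrunk.std())
```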
Abstract:
Simulation-based assessment is a popular and frequently necessary approach to the evaluation of statistical procedures. Sometimes overlooked is the ability to take advantage of underlying mathematical relations, and we focus on this aspect. We show how to take advantage of large-sample theory when conducting a simulation, using the analysis of genomic data as a motivating example. The approach uses convergence results to provide an approximation to smaller-sample results that are otherwise available only by simulation. We consider evaluating and comparing a variety of ranking-based methods for identifying the most highly associated SNPs in a genome-wide association study, derive integral-equation representations of the pre-posterior distribution of percentiles produced by three ranking methods, and provide examples comparing performance. These results are of interest in their own right and set the framework for a more extensive set of comparisons.
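A toy instance of the general strategy, replacing a brute-force simulation of ranking percentiles with quadrature against the large-sample distribution, might look like this. One causal SNP with effect size theta is ranked among M markers by the two-sided z-statistic; all values are hypothetical:

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(4)

theta, M, n_sims = 3.0, 10_000, 2_000

# Large-sample route: the causal SNP's percentile among null statistics is
# E[ 2 * Phi(-|Z|) ] with Z ~ N(theta, 1), computed by numerical quadrature.
def integrand(z):
    return 2 * stats.norm.sf(abs(z)) * stats.norm.pdf(z, loc=theta)

analytic, _ = integrate.quad(integrand, -np.inf, np.inf)

# Brute-force route: simulate the full ranking and average the percentile
percentiles = np.empty(n_sims)
for i in range(n_sims):
    z_causal = rng.normal(theta, 1.0)
    z_null = rng.normal(0.0, 1.0, size=M - 1)
    percentiles[i] = (np.abs(z_null) > abs(z_causal)).mean()

print("quadrature approximation: ", analytic)
print("simulated mean percentile:", percentiles.mean())
```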
Abstract:
It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we present an investigation of charged inertial particles embedded in turbulence. We have developed a theoretical description of the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity obtained from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence. The radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term. Both terms are modified as a result of the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.
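The radial distribution functions referred to here are estimated from measured particle positions. A minimal sketch of such an estimator on synthetic, uncorrelated points (for which g(r) ≈ 1) is given below; the box size and binning are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(5)

def radial_distribution(pos, box, r_max, n_bins=40):
    """Estimate g(r) from 3D particle positions in a periodic cubic box."""
    n = len(pos)
    # Minimum-image pair separations
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))
    # Normalize pair counts by the ideal-gas (uncorrelated) expectation
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    g = counts / (0.5 * n * density * shell_vol)
    return 0.5 * (edges[1:] + edges[:-1]), g

pos = rng.random((1000, 3))              # uniform points: no clustering
r, g = radial_distribution(pos, box=1.0, r_max=0.3)
print(np.round(g, 2))                    # ~1 everywhere for uncorrelated points
```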
Abstract:
The theory of the intensities of 4f-4f transitions introduced by B.R. Judd and G.S. Ofelt in 1962 has become a centerpiece of rare-earth optical spectroscopy over the past five decades. Many fundamental studies have since explored the physical origins of the Judd–Ofelt theory and have proposed numerous extensions to the original model. A great number of studies have applied the Judd–Ofelt theory to a wide range of rare-earth-doped materials, many of them with important applications in solid-state lasers, optical amplifiers, phosphors for displays and solid-state lighting, upconversion and quantum-cutting materials, and fluorescent markers. This paper takes the view of the experimentalist who is interested in appreciating the basic concepts, implications, assumptions, and limitations of the Judd–Ofelt theory in order to apply it properly to practical problems. We first present the formalism for calculating the wavefunctions of 4f electronic states in a concise form and then show their application to the calculation and fitting of 4f-4f transition intensities. The potential, limitations, and pitfalls of the theory are discussed, and a detailed case study of LaCl3:Er3+ is presented.
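For reference, the central quantity fitted in such intensity analyses is the electric-dipole oscillator strength of a transition J → J', which in a common convention reads as follows, with n the refractive index, ν̄ the transition wavenumber, Ω_λ the three Judd–Ofelt parameters, and U^(λ) the unit tensor operators. Normalizations vary between authors, so this is one representative form rather than the paper's exact equation:

```latex
f_{\mathrm{ED}}(J \to J') \;=\;
\frac{8\pi^2 m_e c\,\bar{\nu}}{3 h (2J+1)}\,
\frac{(n^2+2)^2}{9n}
\sum_{\lambda=2,4,6} \Omega_\lambda
\left| \langle 4f^N \psi J \,\|\, U^{(\lambda)} \,\|\, 4f^N \psi' J' \rangle \right|^2
```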
Abstract:
We investigate the SU(3)-invariant sector of the recently discovered one-parameter family of SO(8) gauged maximal supergravities. To this end, we construct the N=2 truncation of this theory and analyse its full vacuum structure. The number of critical points is doubled and includes new N=0 and N=1 branches. We numerically exhibit the parameter dependence of the location and cosmological constant of all extrema. Moreover, we provide their analytic expressions for cases of special interest. Finally, while the mass spectra are found to be parameter-independent in most cases, we show that the novel non-supersymmetric branch with SU(3) invariance provides the first counterexample to this.