953 results for Segmented polynomials
Abstract:
The thesis consists of three independent parts.

Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1.

Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform.

Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
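For reference, a minimal statement of the standard definitions used above (common notation, not necessarily the thesis's own): for a Laurent polynomial f in n complex variables, the amoeba is

\[ \mathcal{A}_f = \operatorname{Log}\bigl(f^{-1}(0)\bigr), \qquad \operatorname{Log}(z_1,\dots,z_n) = (\log|z_1|,\dots,\log|z_n|), \]

and the Ronkin function is

\[ N_f(x) = \frac{1}{(2\pi i)^n} \int_{\operatorname{Log}^{-1}(x)} \log\bigl|f(z_1,\dots,z_n)\bigr| \, \frac{dz_1 \cdots dz_n}{z_1 \cdots z_n}. \]

N_f is convex on R^n and affine linear on each connected component of the complement of the amoeba, which is what makes it suitable for constructing the spine.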
Abstract:
This thesis is a collection of five independent but closely related studies. The overall purpose is to approach the analysis of learning outcomes from a perspective that combines three major elements, namely lifelong-lifewide learning, human capital, and the benefits of learning. The approach is based on an interdisciplinary perspective of the human capital paradigm. It considers the multiple learning contexts that are responsible for the development of embodied potential – including formal, non-formal and informal learning – and the multiple outcomes – including knowledge, skills, economic, social and others – that result from learning. The studies also seek to examine the extent and relative influence of learning in different contexts on the formation of embodied potential, and how that in turn affects economic and social well-being. The first study combines the three major elements – lifelong-lifewide learning, human capital, and the benefits of learning – into one common conceptual framework. This study forms a common basis for the four empirical studies that follow. All four empirical studies use data from the International Adult Literacy Survey (IALS) to investigate the relationships among the major elements of the conceptual framework presented in the first study.

Study I. A conceptual framework for the analysis of learning outcomes. This study brings together some key concepts and theories that are relevant for the analysis of learning outcomes. Many of the concepts and theories have emerged from varied disciplines including economics, educational psychology, cognitive science and sociology, to name only a few. Accordingly, some of the research questions inherent in the framework relate to different disciplinary perspectives. The primary purpose is to create a common basis for formulating and testing hypotheses as well as for interpreting the findings in the empirical studies that follow. In particular, the framework facilitates the process of theorizing and hypothesizing on the relationships and processes concerning lifelong learning, as well as their antecedents and consequences.

Study II. Determinants of literacy proficiency: a lifelong-lifewide learning perspective. This study investigates lifelong and lifewide processes of skill formation. In particular, it seeks to estimate the substitutability and complementarity effects of learning in multiple settings over the lifespan on literacy skill formation. This is done by investigating the predictive capacity of major determinants of literacy proficiency that are associated with a variety of learning contexts including school, home, work, community and leisure. An identical structural model based on previous research is fitted to the IALS data for 18 countries. The results show that even after accounting for all factors, education remains the most important predictor of literacy proficiency. In all countries, however, the total effect of education is significantly mediated through further learning occurring at work, at home and in the community. Therefore, job-related and other literacy-related factors complement education in predicting literacy proficiency. This result points to a virtuous cycle of lifelong learning, particularly to how educational attainment influences other learning behaviours throughout life. In addition, the results show that home background, as measured by parents' education, is also a strong predictor of literacy proficiency, but in many countries this holds only if a favourable home background is complemented with some post-secondary education.

Study III. The effect of literacy proficiency on earnings: an aggregated occupational approach using the Canadian IALS data. This study uses data from the Canadian Adult Literacy Survey to estimate the earnings return to literacy skills. The approach adopts a segmented view of the labour market by aggregating occupations into seven types, enabling the estimation of the variable impact of literacy proficiency on earnings, both within and between different types of occupations. This is done using Hierarchical Linear Modeling (HLM); a schematic two-level specification is sketched after this abstract. The method used to construct the aggregated occupational classification is based on analysis that considers the role of cognitive and other skills in relation to the nature of occupational tasks. Substantial premiums are found to be associated with some occupational types even after adjusting for within-occupation differences in individual characteristics such as schooling, literacy proficiency, labour force experience and gender. Average years of schooling and average levels of literacy proficiency at the between level account for over two-thirds of the premiums. Within occupations, there are significant returns to schooling, but they vary depending on the type of occupation. In contrast, the within-occupation return to literacy proficiency is not necessarily significant; it depends on the type of occupation.

Study IV. Determinants of economic and social outcomes from a lifewide learning perspective in Canada. In this study the relationships between learning in different contexts, which span the lifewide learning dimension, and individual earnings on the one hand and community participation on the other are examined in separate but comparable models. Data from the Canadian Adult Literacy Survey are used to estimate structural models which correspond closely to the common conceptual framework outlined in Study I. The findings suggest that the relationship between formal education and economic and social outcomes is complex, with confounding effects. The results indicate that learning occurring in different contexts and for different reasons leads to different kinds of benefits. The latter finding suggests a potential trade-off between realizing economic and social benefits through learning undertaken for either job-related or personal-interest reasons.

Study V. The effects of learning on economic and social well-being: a comparative analysis. Using the same structural model as in Study IV, the hypotheses are examined comparatively using the International Adult Literacy Survey data for Canada, Denmark, the Netherlands, Norway, the United Kingdom, and the United States. The main finding from Study IV is confirmed for an additional five countries, namely that the effect of initial schooling on well-being is not simply direct; it is significantly mediated by subsequent learning. Additionally, the findings suggest that people who devote more time to learning for job-related reasons than to learning for personal-interest reasons experience higher levels of economic well-being. Moreover, devoting too much time to learning for personal-interest reasons has a negative effect on earnings, except in Denmark. However, the more time people devote to learning for personal-interest reasons, the more it tends to contribute to higher levels of social well-being. These results again suggest a trade-off in learning for different reasons and in different contexts.
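For readers unfamiliar with HLM, a generic two-level random-intercept specification of the kind described in Study III can be sketched as follows; the variable names are illustrative, not the thesis's actual model.

Level 1 (individual i within occupation type j):
\[ \ln(\mathrm{earnings}_{ij}) = \beta_{0j} + \beta_{1j}\,\mathrm{schooling}_{ij} + \beta_{2j}\,\mathrm{literacy}_{ij} + \beta_{3j}\,\mathrm{experience}_{ij} + \beta_{4j}\,\mathrm{gender}_{ij} + r_{ij} \]

Level 2 (occupation type j):
\[ \beta_{0j} = \gamma_{00} + \gamma_{01}\,\overline{\mathrm{schooling}}_{j} + \gamma_{02}\,\overline{\mathrm{literacy}}_{j} + u_{0j} \]

In such a specification, the between-level predictors (occupation-type averages of schooling and literacy) absorb part of the occupational premiums, while the level-1 coefficients capture within-occupation returns.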
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax and is able to capture, a posteriori, only the content of a digital document. It is compared with other languages and proposals, in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user benefits, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
Abstract:
[EN] We present a new strategy for constructing tensor product spline spaces over quadtree and octree T-meshes. The proposed technique includes some simple rules for inferring local knot vectors to define spline blending functions. These rules make it possible to obtain, for a given T-mesh, a set of cubic spline functions that span a space with nice properties: it can reproduce cubic polynomials, the functions are C²-continuous and linearly independent, and spaces spanned by nested T-meshes are also nested. In order to span spaces with these properties by applying the proposed rules, the T-mesh need only fulfill the requirement of being a 0-balanced quadtree or octree...
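The rules for inferring local knot vectors from a T-mesh are the contribution of the work above and are not reproduced here; as generic background, the following sketch shows how a single cubic blending function follows from a given local knot vector via the standard Cox-de Boor recursion (the knot vector in the example is hypothetical).

def local_bspline(knots, u):
    """Evaluate the single B-spline blending function defined by the local
    knot vector `knots` (len(knots) == degree + 2; five knots for a cubic)
    at the parameter value u, using the Cox-de Boor recursion."""
    p = len(knots) - 2                      # polynomial degree
    # degree-0 functions: indicators of the individual knot spans
    N = [1.0 if knots[j] <= u < knots[j + 1] else 0.0 for j in range(p + 1)]
    # raise the degree one step at a time
    for d in range(1, p + 1):
        for j in range(p + 1 - d):
            left = right = 0.0
            if knots[j + d] != knots[j]:
                left = (u - knots[j]) / (knots[j + d] - knots[j]) * N[j]
            if knots[j + d + 1] != knots[j + 1]:
                right = (knots[j + d + 1] - u) / (knots[j + d + 1] - knots[j + 1]) * N[j + 1]
            N[j] = left + right
    return N[0]

# Example: a uniform cubic blending function built from the hypothetical
# local knot vector [0, 1, 2, 3, 4] reaches 2/3 at the centre of its support.
print(local_bspline([0.0, 1.0, 2.0, 3.0, 4.0], 2.0))   # ~0.6667

A single cubic blending function is determined by five knots in each parametric direction, so a tensor product of such functions is fixed once a local knot vector has been inferred in every direction.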
Abstract:
[EN] We present a new strategy for constructing spline spaces over hierarchical T-meshes with quad- and octree subdivision schemes. The proposed technique includes some simple rules for inferring local knot vectors to define C²-continuous cubic tensor product spline blending functions. Our conjecture is that these rules make it possible to obtain, for a given T-mesh, a set of linearly independent spline functions with the property that spaces spanned by nested T-meshes are also nested and that, therefore, the functions can reproduce cubic polynomials. In order to span spaces with these properties by applying the proposed rules, the T-mesh need only fulfill the requirement of being a 0-balanced mesh...
Abstract:
The OPERA experiment aims at the direct observation of ν_μ → ν_τ oscillations in the CNGS (CERN Neutrinos to Gran Sasso) neutrino beam produced at CERN; since the ν_e contamination in the CNGS beam is low, OPERA will also be able to study the sub-dominant oscillation channel ν_μ → ν_e. OPERA is a large-scale hybrid apparatus divided into two supermodules, each equipped with electronic detectors, an iron spectrometer and a highly segmented ~0.7 kton target section made of Emulsion Cloud Chamber (ECC) units. During my research work in the Bologna laboratory I took part in the set-up of the automatic scanning microscopes, studying and tuning the scanning system's performance and efficiency with emulsions exposed to a test beam at CERN in 2007. Once the triggered bricks were distributed to the collaboration laboratories, my work centred on the procedure used for the localization and reconstruction of neutrino events.
Abstract:
The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is its particle identification. The large Time-Of-Flight (TOF) detector is the main particle identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for π/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for proton-proton and Pb-Pb beams are also detailed, with estimates of the efficiencies and sample purities.
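As background on why the quoted time resolution translates into the stated momentum reach for π/K and K/p separation, the standard time-of-flight mass relation (general kinematics, not a result of this thesis) is

\[ m^2 c^2 = p^2\left(\frac{c^2 t^2}{L^2} - 1\right), \]

where t is the measured flight time over a track of length L and p the momentum from the tracking detectors; since the flight-time difference between particle species shrinks with increasing momentum, the detector's time resolution sets the upper momentum limits quoted above.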
Abstract:
The cell genealogy of the polychaete Platynereis dumerilii was investigated by injecting dye into the blastomeres of the 2-, 4- and 8-cell stages, as well as into the cells 2d, 2d112, 4d and 4d1. Injections were made possible by softening the vitelline envelope with dithioerythritol and trypsin. The injected embryos were raised to the trochophore stage or to the three-segmented juvenile worm, fixed, and imaged three-dimensionally with a confocal scanning microscope. The animal-vegetal axis of the early embryo corresponds to the antero-posterior axis of the juvenile worm. The micromeres of the first quartet are arranged radially around the antero-posterior axis and form the head. The micromere 2d proliferates bilaterally symmetrically from the dorsal midline and gives rise to the entire trunk ectoderm. It could be inferred indirectly that the micromeres 2a1 to 2c1 form narrow ectodermal stripes between head and trunk, and that the ectodermal stomodeum arises from 2a2 and 2c2. The micromeres of the third quartet, and possibly 2b2, form 'ectomesoderm'. 4d likewise proliferates bilaterally symmetrically from the dorsal midline into the trunk mesoderm and may also make small contributions to the formation of the gut. The midgut derives from the yolk-rich macromeres 4A to 4D.
Abstract:
In this study, new tomographic models of Colombia were calculated. I used the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period, the improvement of the seismic network yields more stable hypocentral results than older data sets and makes it possible to compute new 3D Vp and Vp/Vs models. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela due to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of E-SE subducting Nazca lithosphere beneath central and southern Colombia. North-south changes in the Wadati-Benioff zone, in the Vp and Vp/Vs patterns and in volcanism show that the downgoing plate is segmented by E-W directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector represent most of the Colombian seismicity and are concentrated in the 100-170 km depth interval, beneath the Eastern Cordillera. Here massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, show, conversely, a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here a normal (non-thickened) oceanic crust is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.
Abstract:
Non-Equilibrium Statistical Mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, or else with systems which are in a steady non-equilibrium state, or with more general situations. They are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks in all respects analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains; we give a physical interpretation of observables in terms of locally detailed balanced rates; we prove many variants of the fluctuation theorem; and we show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time-reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
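For orientation, the Schnakenberg entropy production mentioned above takes, for a continuous-time Markov chain with transition rates w_{i→j} and occupation probabilities p_i, the well-known form (a textbook expression, not a new result of this thesis)

\[ \dot{S} = \frac{1}{2} \sum_{i,j} \bigl( p_i\, w_{i\to j} - p_j\, w_{j\to i} \bigr) \ln \frac{p_i\, w_{i\to j}}{p_j\, w_{j\to i}} \;\ge\; 0, \]

which is non-negative and vanishes exactly when detailed balance p_i w_{i→j} = p_j w_{j→i} holds.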
Abstract:
By using a symbolic method, known in the literature as the classical umbral calculus, a symbolic representation of Lévy processes is given, and a new family of time-space harmonic polynomials with respect to such processes, which includes and generalizes the exponential complete Bell polynomials, is introduced. The usefulness of time-space harmonic polynomials with respect to Lévy processes is that the stochastic process obtained by replacing the indeterminate x of the polynomials with a Lévy process is a martingale, whereas the Lévy process itself does not necessarily have this property. Finding such polynomials can therefore be particularly meaningful for applications. This new family includes Hermite polynomials, which are time-space harmonic with respect to Brownian motion; Poisson-Charlier polynomials, with respect to Poisson processes; Laguerre and actuarial polynomials, with respect to Gamma processes; Meixner polynomials of the first kind, with respect to Pascal processes; and Euler, Bernoulli, Krawtchouk, and pseudo-Narumi polynomials, with respect to suitable random walks. The role played by cumulants is stressed and brought to light, both in the symbolic representation of Lévy processes and their infinite divisibility property, and in the generalization, via the umbral Kailath-Segall formula, of the well-known formulae giving elementary symmetric polynomials in terms of power sum symmetric polynomials. The expression of the family of time-space harmonic polynomials introduced here has some connections with the so-called moment representation of various families of multivariate polynomials. Such a moment representation is studied here for the first time in connection with the time-space harmonic property with respect to suitable symbolic multivariate Lévy processes. In particular, multivariate Hermite polynomials and their properties are studied in connection with a symbolic version of the multivariate Brownian motion, while multivariate Bernoulli and Euler polynomials are represented as powers of multivariate polynomials which are time-space harmonic with respect to suitable multivariate Lévy processes.
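As a reminder of the defining property (standard terminology, stated here in a common form): a family of polynomials P(x,t) is time-space harmonic with respect to a process (X_t) if

\[ \mathbb{E}\bigl[P(X_t, t) \mid \mathcal{F}_s\bigr] = P(X_s, s), \qquad 0 \le s \le t, \]

i.e. the process P(X_t, t) is a martingale. For standard Brownian motion the simplest nontrivial examples are P(x,t) = x and P(x,t) = x² − t; the Hermite-type polynomials extend these to every degree.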
Abstract:
Chronic obstructive pulmonary disease (COPD) is an umbrella term for diseases that lead to coughing, sputum production and dyspnoea (shortness of breath) at rest or under exertion; chronic bronchitis and pulmonary emphysema are counted among them. The progression of COPD is closely linked to an increase in the wall volume of the small airways (bronchi). High-resolution computed tomography (CT) is regarded as the gold standard (the best and most reliable diagnostic method) for examining the morphology of the lung. When measuring bronchi, which are approximately tubular structures, in CT images, the small size of the bronchi compared with the resolving power of a clinical CT scanner poses a major problem. This work shows how CT images are computed from conventional X-ray projections, where the mathematical and physical sources of error in the image formation process lie, and how a CT system can be made mathematically tractable by interpreting it as a linear shift-invariant (LSI) system. Based on linear systems theory, ways of describing the resolving power of imaging methods are derived. It is shown how the tracheobronchial tree can be robustly segmented from a CT data set and, by means of a topology-preserving 3-dimensional skeletonization algorithm, converted into a skeleton representation and subsequently into an acyclic graph. Based on linear systems theory, a new, promising, integral-based method (IBM) for measuring small structures in CT images is presented. To validate the IBM results, various measurements were performed on a phantom consisting of 10 different silicone tubes. With the help of the skeleton and graph representations, the complete segmented tracheobronchial tree can be measured in 3-dimensional space. For 8 pigs, each scanned twice, good reproducibility of the IBM results was demonstrated. In a further study carried out with IBM, it was shown that the average percentage bronchial wall thickness in CT data sets from 16 smokers is significantly higher than in data sets from 15 non-smokers. IBM may also be usable for wall thickness measurements in problems from other fields of work, or can at least serve as a source of ideas. An article describing the developed methodology and the study results obtained with it has been accepted for publication in the journal IEEE Transactions on Medical Imaging.
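For orientation, the linear shift-invariant description referred to above reduces, in its simplest textbook form, to a convolution model of image formation (generic notation, not the thesis's):

\[ g = h * f, \qquad \mathrm{MTF}(\nu) = \frac{|\hat{h}(\nu)|}{|\hat{h}(0)|}, \]

where f is the object, h the point spread function of the CT system and g the reconstructed image; the modulation transfer function MTF, the normalized modulus of the Fourier transform of h, quantifies the resolving power that thin bronchial walls push to its limit.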
Abstract:
The Zero Degree Calorimeter (ZDC) of the ATLAS experiment at CERN is placed in the TAN of the LHC collider, covering the pseudorapidity region above 8.3. It is composed of two calorimeters, each longitudinally segmented into 4 modules, located 140 m from the interaction point exactly on the beam axis. The ZDC can detect neutral particles during pp collisions and is a tool for diffractive physics. Here we present results on the forward photon energy distribution obtained using p-p collision data at √s = 7 TeV. First, π0 reconstruction is used for the detector calibration with photons; then we show results on the forward photon energy distribution in p-p collisions, together with the same distribution obtained using MC generators. Finally, a comparison between data and MC is shown.
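For context, the π0-based calibration mentioned above typically relies on the two-photon invariant mass (a standard relation, not specific to this analysis),

\[ m_{\gamma\gamma} = \sqrt{2\, E_1 E_2\, (1 - \cos\theta_{12})}, \]

where E_1 and E_2 are the photon energies and θ_12 their opening angle; adjusting the energy scale so that the reconstructed m_{γγ} peak sits at the nominal π0 mass fixes the calibration.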
Abstract:
This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points for each connected intersection curve component are determined. The required computations are carried out efficiently, by solving polynomials of degree at most four, and exactly, by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of the curve are prevented. Furthermore, neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of high-degree algebraic equations is avoided.
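As background for the algebraic part, a torus with major radius R and minor radius r, centred at the origin with the z-axis as its axis, has the standard implicit equation

\[ \bigl(x^2 + y^2 + z^2 + R^2 - r^2\bigr)^2 = 4R^2\bigl(x^2 + y^2\bigr), \]

a quartic surface; substituting a line parameterization into it yields a polynomial of degree four in the line parameter, which is consistent with the claim that at most quartic polynomials need to be solved.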