31 results for Geometric morphometry
Abstract:
This PhD thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of m particles and n anti-particles. Geometrically, the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, a typical example of an infinite-dimensional Grassmannian manifold encountered in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H its subgroup. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation of loop groups (of compact groups), which in turn are vitally important in string theory and conformal field theory. The thesis consists of three chapters: the first chapter is an introduction to the background material, and the other two chapters are individually written research articles. The first article deals, in a new way, with a well-known question in Yang-Mills theory: when can one lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second quantized Dirac operator? In general there is an obstruction to this, called the Faddeev-Mickelsson anomaly, and various geometric interpretations of this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence outside the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sums make sense as well-defined operators acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
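For orientation, the finite-dimensional analogue and the restricted Grassmannian mentioned above can be written compactly. The formulas below are the standard Plücker/determinant-bundle statement and the usual Pressley-Segal definition, quoted here for context rather than taken from the thesis itself.

```latex
% Finite-dimensional analogue: holomorphic sections of the dual determinant
% bundle over Gr(k,n) recover an exterior power of the dual space.
\[
  H^0\!\bigl(\mathrm{Gr}(k,n),\,\mathrm{Det}^{*}\bigr) \;\cong\; \Lambda^{k}(\mathbb{C}^{n})^{*},
  \qquad \mathrm{Det}\big|_{W} = \Lambda^{k} W .
\]
% Restricted Grassmannian of a polarized Hilbert space H = H_+ \oplus H_- :
\[
  \mathrm{Gr}_{\mathrm{res}}(H)
  = \bigl\{\, W \subset H \;:\; \mathrm{pr}_{H_+}\!\big|_{W} \ \text{is Fredholm},\ \
      \mathrm{pr}_{H_-}\!\big|_{W} \ \text{is Hilbert--Schmidt} \,\bigr\}.
\]
```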
Abstract:
The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AECs), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of comparable generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, that is, geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
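As a concrete illustration of the max-min linear programs mentioned above: the standard trick is to introduce an auxiliary variable for the minimum and maximise it subject to the original constraints. The sketch below uses scipy and a tiny invented instance; the matrices and the exact normal form are illustrative assumptions, not the formulation used in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def solve_max_min_lp(A, C, b):
    """Maximise min_k (C x)_k subject to A x <= b, x >= 0.

    Rewritten as an ordinary LP with an extra scalar t:
        maximise t  s.t.  A x <= b,  C x >= t * 1,  x >= 0.
    """
    m, n = A.shape
    k = C.shape[0]
    cost = np.zeros(n + 1)
    cost[-1] = -1.0                      # linprog minimises, so minimise -t
    A_ub1 = np.hstack([A, np.zeros((m, 1))])    # A x <= b
    A_ub2 = np.hstack([-C, np.ones((k, 1))])    # -C x + t <= 0
    A_ub = np.vstack([A_ub1, A_ub2])
    b_ub = np.concatenate([b, np.zeros(k)])
    bounds = [(0, None)] * n + [(None, None)]   # x >= 0, t free
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n], -res.fun           # optimal x and the max-min value

# Toy instance: two flow variables sharing one capacity, two objectives to balance.
A = np.array([[1.0, 1.0]])
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x, value = solve_max_min_lp(A, C, np.array([1.0]))
print(x, value)   # expect x = [1/3, 2/3], value = 2/3
```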
Abstract:
The tackling of coastal eutrophication requires water protection measures based on status assessments of water quality. The main purpose of this thesis was to evaluate whether it is possible, both scientifically and within the terms of the European Union Water Framework Directive (WFD), to assess the status of coastal marine waters reliably by using phytoplankton biomass (ww) and chlorophyll a (Chl) as indicators of eutrophication in Finnish coastal waters. Empirical approaches were used to study whether the criteria established for determining an indicator are fulfilled. The first criterion (i) was that an indicator should respond to anthropogenic stresses in a predictable manner and show low variability in its response. Summertime Chl could be predicted accurately from nutrient concentrations, but not from the external annual loads alone, because of the rapid effect of primary production and sedimentation close to the loading sources in summer. The most accurate predictions were achieved in the Archipelago Sea, where total phosphorus (TP) and total nitrogen (TN) alone accounted for 87% and 78% of the variation in Chl, respectively. In river estuaries, the TP mass-balance regression model predicted Chl most accurately when nutrients originated from point sources, whereas land-use regression models were most accurate when nutrients originated mainly from diffuse sources. The inclusion of morphometry (e.g. mean depth) in the nutrient models improved the accuracy of the predictions. The second criterion (ii) was associated with the WFD. It requires that an indicator have type-specific reference conditions, which are defined as "conditions where the values of the biological quality elements are at high ecological status". In establishing reference conditions, the empirical approach could only be used in the outer coastal water types, where historical observations of Secchi depth from the early 1900s are available. The most accurate prediction was achieved in the Quark. In the inner coastal water types, reference Chl values estimated from present monitoring data are imprecise, not only because of the less accurate estimation method but also because the intrinsic characteristics, described for instance by morphometry, vary considerably within these extensive inner coastal types. As for phytoplankton biomass, the reference values were less accurate than in the case of Chl, because reference conditions for biomass could only be estimated by using the reconstructed Chl values, not the historical Secchi observations. A palaeoecological approach was also applied to estimate annual average reference conditions for Chl. In Laajalahti, an urban embayment off Helsinki that was heavily loaded by municipal waste waters in the 1960s and 1970s, reference conditions prevailed in the mid- and late 1800s. The recovery of the bay from pollution has been delayed as a consequence of benthic release of nutrients, and Laajalahti will probably not achieve the good quality objectives of the WFD on time. The third criterion (iii) was associated with coastal management, including the resources it has available. Analyses of Chl are cheap and fast to carry out compared with analyses of phytoplankton biomass and species composition, a fact that affects the number of samples that can be taken and thereby the reliability of the assessments. However, analyses of phytoplankton biomass and species composition provide more metrics for ecological classification, metrics which reveal aspects of eutrophication that Chl alone does not.
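The nutrient-chlorophyll regressions described above are, at their simplest, linear models fitted on log-transformed concentrations, and the reported 87% corresponds to the coefficient of determination of such a fit. The sketch below shows the approach on a small synthetic data set; the numbers are invented for illustration and are not the thesis data.

```python
import numpy as np

# Synthetic illustration only: invented TP and chlorophyll-a values (ug/l).
tp  = np.array([15, 20, 25, 30, 40, 55, 70, 90], dtype=float)
chl = np.array([2.1, 2.8, 3.5, 4.6, 6.0, 8.5, 11.0, 15.0])

# Fit log10(Chl) = a + b * log10(TP), a common form of nutrient-Chl models.
x, y = np.log10(tp), np.log10(chl)
b, a = np.polyfit(x, y, 1)          # slope, intercept
pred = a + b * x
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"log10(Chl) = {a:.2f} + {b:.2f} * log10(TP),  R^2 = {r2:.2f}")
```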
Abstract:
The need for special education (SE) is increasing. The majority of those whose problems are due to neurodevelopmental disorders have no specific aetiology. The aim of this study was to evaluate the contribution of prenatal and perinatal factors, and of factors associated with growth and development, to the later need for full-time SE, and to assess joint structural and volumetric brain alterations among subjects with an unexplained, familial need for SE. A random sample of 900 subjects in full-time SE, allocated into three levels of neurodevelopmental problems, and 301 controls in mainstream education (ME) provided data on socioeconomic factors, pregnancy, delivery, growth, and development. Of those, 119 subjects belonging to a sibling pair in full-time SE with unexplained aetiology and 43 controls in ME underwent brain magnetic resonance imaging (MRI). Analyses of structural brain alterations and midsagittal area and diameter measurements were made. Voxel-based morphometry (VBM) analysis provided detailed information on regional grey matter, white matter, and cerebrospinal fluid (CSF) volume differences. Father's age ≥ 40 years, low birth weight, male sex, and lower socio-economic status all increased the probability of SE placement. At age 1 year, a one standard deviation score decrease in height raised the probability of SE placement by 40%, and in head circumference by 28%. In infancy, the gross motor milestones differentiated the children; from age 18 months, the fine motor milestones and those related to speech and social skills became more important. Brain MRI revealed no specific aetiology for the subjects in SE. However, they more often had ≥ 3 abnormal findings on MRI (thin corpus callosum and enlarged cerebral and cerebellar CSF spaces). In VBM, subjects in full-time SE had smaller global white matter, CSF, and total brain volumes than controls. Compared with controls, subjects with intellectual disabilities had regional volume alterations (greater grey matter volumes in the anterior cingulate cortex bilaterally, smaller grey matter volumes in the left thalamus and left cerebellar hemisphere, greater white matter volume in the left fronto-parietal region, and smaller white matter volumes bilaterally in the posterior limbs of the internal capsules). In conclusion, the epidemiological studies highlighted several factors that increased the probability of SE placement, useful as a framework for interventional studies. The global and regional brain MRI findings provide an interesting basis for future investigations of learning-related brain structures in young subjects with cognitive impairments or intellectual disabilities of unexplained, familial aetiology.
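The growth results above are expressed as standard deviation scores (z-scores); for readers unfamiliar with the convention, a one-unit decrease means falling one reference standard deviation for age and sex. The formula below is the standard definition, given here for context only.

```latex
% Standard deviation score (z-score) of a growth measurement x,
% relative to an age- and sex-specific reference mean and SD.
\[
  \mathrm{SDS}(x) \;=\; \frac{x - \mu_{\text{age,sex}}}{\sigma_{\text{age,sex}}}
\]
```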
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, and it is therefore crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure involve uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important element of technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and the action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data, and the uncertainty of such data sets the limit for the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements, and a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling lower action levels and prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
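One way to read the "model fitting of the QC measurement results" mentioned above is to separate a slow output drift from random measurement scatter in a QC time series. The sketch below fits a linear drift and uses the residual standard deviation as a reproducibility estimate; it is a simplified illustration with invented numbers, not the model proposed in the thesis.

```python
import numpy as np

# Invented monthly output QC results, as percent deviation from baseline.
months = np.arange(12, dtype=float)
output_dev = np.array([0.1, -0.2, 0.3, 0.2, 0.5, 0.4,
                       0.6, 0.5, 0.9, 0.7, 1.0, 0.9])  # percent

# Fit a linear drift: deviation ~ drift_rate * month + offset.
drift_rate, offset = np.polyfit(months, output_dev, 1)
residuals = output_dev - (drift_rate * months + offset)
reproducibility_sd = residuals.std(ddof=2)   # scatter around the fitted drift

print(f"output drift: {drift_rate:.2f} %/month, "
      f"reproducibility (1 SD): {reproducibility_sd:.2f} %")

# Simple check against a hypothetical 2 % action level.
months_to_action = (2.0 - offset) / drift_rate if drift_rate > 0 else np.inf
print(f"projected time to reach a 2 % action level: {months_to_action:.1f} months")
```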
Abstract:
According to Meno's paradox we cannot inquire into what we do not know, because we do not know what we are inquiring into. There are many ways to interpret the paradox, but the central issue about our ability to reach truth is a profound one. In the dialogue Meno, Plato presents the paradox and an outline of a solution which enables us to reach knowledge (epistēmē) through philosophical discussion. During the last century the Meno has often been considered transitional between Socratic thinking and Plato's own philosophy, and thus the dialogue has not been adequately interpreted as an integrated whole. Therefore the distinctive epistemology of the dialogue has not gained due notice. In this thesis the dialogue is analysed as an integrated whole, and the philosophical interpretation also takes into account its dramatic features. The thesis emphasises the role of language and definitions in acquiring knowledge. Among the results concerning these subjects is a new interpretation of Socrates's definition of shape (schēma). The theory of anamnēsis (all learning is recollection) in the Meno is argued to answer the paradox philosophically, although Plato's presentation also contains playful and ironic elements. The background of the way Plato presents his case is that he appreciated the fact that no argument can plausibly demonstrate that argumentation is able to reach truth. In the Meno, Plato makes the earliest explicit distinction between knowledge and true belief in the history of Western philosophy. He also gives a definition of knowledge which is the basis of the so-called classical definition of knowledge as justified true belief. In the Meno, true beliefs become knowledge when someone ties them down by reasoning about the explanation. The analysis of the epistemology of the dialogue from this perspective gives an interpretation which integrates the central concepts of its epistemology (elenchos, anamnēsis and hypothetical inquiry) into a unified whole containing a plausible argument according to which the ignorant can reach knowledge through discussion. The conception that emerges from such an analysis is interesting both from the point of view of current interests and from that of the history of philosophy. The method of knowledge acquisition in the Meno can, for example, be seen as a predecessor of modern scientific methods. The Meno is the earliest Greek mathematical text that has survived in its original form. The analysis presented in the thesis of the geometric passages in the dialogue provides new results both concerning Socrates's geometry lesson with the slave and concerning the example presenting the hypothetical method. For the latter, a new interpretation is presented. Keywords: anamnēsis, epistēmē, knowledge, Meno's paradox, Plato
Abstract:
Based on the Aristotelian criterion referred to as 'abductio', Peirce suggests a method of hypothetical inference which operates in a different way from the deductive and inductive methods. "Abduction is nothing but guessing" (Peirce, 7.219). This principle is of extreme value for the study of our understanding of mathematical self-similarity in both of its typical presentations: relative and absolute. In the first case, abduction embodies the quantitative/qualitative relationships of a self-similar object or process; in the second case, abduction makes the statistical treatment of self-similarity understandable, 'guessing' the continuation of geometric features to infinity through the use of a systematic stereotype (for instance, the assumption that the general shape of the Sierpiński triangle continues identically into its particular shapes). The metaphor coined by Peirce, of an exact map containing within itself the same exact map (a map of itself), is not only the most important precedent of Mandelbrot's problem of measuring the boundary of a continuous irregular surface with a logarithmic ruler, but also remains a useful abstraction for conceptualising relative and absolute self-similarity and its mechanisms of implementation. It is also useful for explaining some of the most basic geometric ontologies as mental constructions: the notion of the infinite convergence of points in the corners of a triangle, or the intuition behind defining two parallel straight lines as two lines in a plane that 'never' intersect.
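The "guessing of the continuation of geometric features to infinity" is easy to make concrete with the Sierpiński triangle itself: each step replaces a triangle by three half-scale copies of the same shape, and the similarity dimension follows directly from that rule. The short sketch below is an illustration of this standard construction, not material from the thesis.

```python
import math

def sierpinski(level, tri=((0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2))):
    """Return the list of triangles at a given subdivision level."""
    if level == 0:
        return [tri]
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    # Each triangle is replaced by three half-scale copies of itself.
    children = [(a, mid(a, b), mid(a, c)),
                (mid(a, b), b, mid(b, c)),
                (mid(a, c), mid(b, c), c)]
    return [t for child in children for t in sierpinski(level - 1, child)]

print(len(sierpinski(5)))           # 3**5 = 243 self-similar copies
print(math.log(3) / math.log(2))    # similarity dimension, about 1.585
```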
Abstract:
This study measures the development of productivity in piglet production on ProAgria pig bookkeeping farms in 2003–2008. Productivity is measured with a Fisher productivity index, which is decomposed into technical, allocative and scale efficiency as well as technological change and a price effect. Measured with the productivity index aggregated over the whole data set, productivity grew by a total of 14.3% over five years, corresponding to an annual growth of 2.7%. The producers' average productivity index gives nearly the same result: according to it, productivity grows by a total of 14.7%, or 2.8% per year. The improvement in scale efficiency is found to be the most significant source of productivity growth: scale efficiency improves by 1.6% per year measured at the aggregate level and by 2.1% per year on average across farms. The improvement in technical efficiency is the second factor contributing to productivity growth over the study period; with both measurement approaches the increase is on average 1.4% per year. Allocative efficiency declines slightly: by 0.1% per year measured at the aggregate level and by 0.4% per year on average. Technological change over the study period is slightly negative, on average -0.1% per year, although the annual variation is strong. Changes in prices have had little effect on the level of productivity, since the annual changes in the price effect remain below half a percent in every year and the average annual change is -0.1%. The key factor promoting productivity growth appears to have been the growth in farm size, which has improved structural efficiency. However, the fact that technological change remained negative means that the best observed level of productivity has not risen at all.
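For reference, the Fisher quantity index used above is the geometric mean of the Laspeyres and Paasche indices, and a Fisher productivity (TFP) index compares an output quantity index with an input quantity index. The formulas below are the textbook definitions; the further decomposition into efficiency and technology components follows the thesis's own scheme.

```latex
% Fisher quantity index between periods 0 and 1 (p = prices, q = quantities):
\[
  Q_L = \frac{p_0 \cdot q_1}{p_0 \cdot q_0}, \qquad
  Q_P = \frac{p_1 \cdot q_1}{p_1 \cdot q_0}, \qquad
  Q_F = \sqrt{Q_L \, Q_P}.
\]
% Fisher productivity (TFP) index: output quantity index over input quantity index.
\[
  \mathrm{TFP}_F \;=\; \frac{Q_F^{\text{output}}}{Q_F^{\text{input}}}
\]
```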
Abstract:
Paramagnetic, or open-shell, systems are often encountered in the context of metalloproteins, and they are also an essential part of molecular magnets. Nuclear magnetic resonance (NMR) spectroscopy is a powerful tool for chemical structure elucidation, but for paramagnetic molecules it is substantially more complicated than in the diamagnetic case. Before the present work, the theory of NMR of paramagnetic molecules was limited to spin-1/2 systems, did not include relativistic corrections to the hyperfine effects, and was not systematically expandable. The theory was first expanded by including hyperfine contributions up to the fourth power in the fine structure constant α. It was then reformulated and its scope widened to allow any spin state in any spatial symmetry, which involved including zero-field splitting effects. At both stages the theory was implemented in a separate analysis program. The different levels of theory were tested by demonstrative density functional calculations on molecules selected to showcase the relative strength of the new NMR shielding terms. The theory was also tested in a joint experimental and computational effort to confirm the assignment of 11B signals. The new terms were found to be significant and comparable with the terms in the earlier levels of theory. The leading-order magnetic-field dependence of shielding in paramagnetic systems was formulated. The theory is now systematically expandable, allowing for higher-order field dependence and relativistic contributions. The prevailing experimental view of the pseudocontact shift was found to be significantly incomplete, as it only includes a specific geometric dependence, which is not present in most of the new terms introduced here. The computational uncertainty in density functional calculations of the Fermi contact hyperfine constant and the zero-field splitting tensor currently sets a limit on the quantitative prediction of paramagnetic shielding.
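The "prevailing experimental view of the pseudocontact shift" referred to above is usually summarised by the classical point-dipole formula, in which the shift depends on the nuclear position only through r, θ and φ in the susceptibility-tensor frame. It is quoted below for context; this is the standard literature form, not a result of the thesis.

```latex
% Classical pseudocontact shift in the principal axis frame of the
% anisotropic magnetic susceptibility tensor (point-dipole approximation):
\[
  \delta^{\mathrm{pcs}}
  = \frac{1}{12\pi r^{3}}
    \Bigl[ \Delta\chi_{\mathrm{ax}} \bigl(3\cos^{2}\theta - 1\bigr)
         + \tfrac{3}{2}\,\Delta\chi_{\mathrm{rh}} \sin^{2}\theta \cos 2\varphi \Bigr]
\]
```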
Abstract:
Maintenance of breeding efficiency and high semen quality is essential for reproductive success in farm animals. Early recognition of possible inheritable factors causing infertility requires constant attention. This thesis focuses on describing different manifestations of impaired spermatogenesis, their impact on fertility and, in part, their incidence in populations. The reasons for spermatogenic failure are various. An interruption of germ cell differentiation, spermatogenic arrest, can lead to infertility. The incidence of azoospermia was investigated in the 1996–2005 survey of Finnish AI and farm breeding boars, focusing on the diagnosis, testicular morphometry and the possible reasons for the condition. The incidence of azoospermia was significantly higher in Yorkshire boars than in the Landrace breed. The most common diagnosis in Yorkshire boars was germ cell arrest at the primary spermatocyte level; the second most frequent diagnosis was segmental aplasia of the Wolffian ducts with idiopathic epididymal obstruction. Other reasons for azoospermia were infrequent. In the second study we investigated the incidence of two relatively well-defined specific sperm defects in Finnish Yorkshire and Landrace boars during the same survey: the immotile short-tail sperm (ISTS) defect and the knobbed acrosome (KA) defect. In Finnish Yorkshire boars the inherited ISTS defect, and the probably inherited KA defect, were important causes of infertility during 1996–2005. The ISTS defect was found in 7.6% and the KA defect in 0.8% of the Yorkshire boars; no Landrace boars were diagnosed with either of these two defects. In the third study we described a new sterilizing sperm defect in an oligoasthenoteratozoospermic bull. Because of its morphological characteristics this defect was termed the multinuclear-multiflagellar sperm (MNMFS) defect. The number of Sertoli cells in the seminiferous tubules was greatly increased in the MNMFS bull compared with the number in normal bulls. In the following two studies we used a combined approach of fluorescence in situ hybridization (FISH), flow cytometry and morphometric studies to provide information on the cytogenetic background of macrocephalic bull spermatozoa. We described cellular features of diploid spermatozoa and compared the failures in the first and second meiotic divisions. In the last study we describe how transplantation of testicular cells was used to determine whether spermatogonia derived from donor animals are able to colonize and produce motile spermatozoa in immune-competent, unrelated boars suffering from the ISTS defect. Transplantation resulted in complete focal spermatogenesis, indicated by the appearance of motile spermatozoa and confirmed by genotyping.
Abstract:
We study the following problem: given a geometric graph G and an integer k, determine whether G has a planar spanning subgraph (with the original embedding and straight-line edges) such that all nodes have degree at least k. If G is a unit disk graph, the problem is trivial to solve for k = 1. We show that even the slightest deviation from the trivial case (e.g., quasi unit disk graphs or k = 2) leads to NP-hard problems.
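One way to see why the k = 1 case is easy in unit disk graphs: every non-isolated node can simply keep the edge to its nearest neighbour, which is necessarily within unit distance, and nearest-neighbour edges of a planar point set do not cross. The sketch below is an illustrative construction along these lines, not an algorithm taken from the paper.

```python
import math

def degree_one_planar_subgraph(points):
    """Unit disk graph on `points`: keep, for every node, the straight-line
    edge to its nearest neighbour within unit distance.  Nearest-neighbour
    edges do not cross, so the result is a plane spanning subgraph in which
    every non-isolated node has degree at least 1."""
    edges = set()
    for i, p in enumerate(points):
        candidates = [(math.dist(p, q), j) for j, q in enumerate(points)
                      if j != i and math.dist(p, q) <= 1.0]
        if candidates:                      # isolated nodes have no feasible edge
            _, j = min(candidates)
            edges.add((min(i, j), max(i, j)))
    return edges

pts = [(0.0, 0.0), (0.8, 0.1), (0.4, 0.9), (2.0, 2.0), (2.6, 2.1)]
print(degree_one_planar_subgraph(pts))     # e.g. {(0, 1), (1, 2), (3, 4)}
```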
Abstract:
This thesis is concerned with the area of vector-valued harmonic analysis, where the central theme is to determine how results from classical harmonic analysis generalize to functions with values in an infinite-dimensional Banach space. The work consists of three articles and an introduction. The first article studies the Rademacher maximal function, originally defined by T. Hytönen, A. McIntosh and P. Portal in 2008 in order to prove a vector-valued version of Carleson's embedding theorem. The boundedness of the corresponding maximal operator on Lebesgue-Bochner spaces defines the RMF-property of the range space. It is shown that the RMF-property is equivalent to a weak type inequality, which does not depend, for instance, on the integrability exponent, hence providing more flexibility for the RMF-property. The second article, written in collaboration with T. Hytönen, studies a vector-valued Carleson's embedding theorem with respect to filtrations. An earlier proof of the dyadic version assumed that the range space satisfies a certain geometric type condition, which this article shows to be also necessary. The third article deals with vector-valued generalizations of tent spaces, originally defined by R. R. Coifman, Y. Meyer and E. M. Stein in the 1980s, and concerns especially those related to square functions. A natural assumption on the range space is then the UMD-property. The main result is an atomic decomposition for tent spaces with integrability exponent one. In order to suit the stochastic integrals appearing in the vector-valued formulation, the proof is based on a geometric lemma for cones and differs essentially from the classical proof. Vector-valued tent spaces have also found applications in functional calculi for bisectorial operators. In the introduction these three themes come together in the study of paraproduct operators for vector-valued functions. The Rademacher maximal function and Carleson's embedding theorem were already applied by Hytönen, McIntosh and Portal to prove boundedness of the dyadic paraproduct operator on Lebesgue-Bochner spaces, assuming that the range space satisfies both the UMD- and RMF-properties. Whether UMD implies RMF is thus an interesting question. Tent spaces, on the other hand, provide a method for studying continuous-time paraproduct operators, although the RMF-property is not yet understood in the framework of tent spaces.
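For context, the dyadic paraproduct mentioned in the last sentences has, in the scalar-valued model case, the familiar form below (standard notation, not taken from the thesis), where Δ_Q b denotes the Haar/martingale difference of the symbol b on a dyadic cube Q and ⟨f⟩_Q the average of f over Q; the vector-valued question is when this operator is bounded on Lebesgue-Bochner spaces.

```latex
% Dyadic paraproduct with symbol b, acting on f (scalar-valued model case):
\[
  \pi_b f \;=\; \sum_{Q \in \mathcal{D}} (\Delta_Q b)\,\langle f \rangle_Q ,
  \qquad
  \langle f \rangle_Q = \frac{1}{|Q|}\int_Q f .
\]
```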
Abstract:
An inverse problem for the wave equation is a mathematical formulation of the problem of converting measurements of sound waves into information about the wave speed governing the propagation of the waves. This doctoral thesis extends the theory of inverse problems for the wave equation to cases with partial measurement data and also considers the detection of discontinuous interfaces in the wave speed. A possible application of the theory is obstetric sonography, in which ultrasound measurements are transformed into an image of the fetus in its mother's uterus. The wave speed inside the body cannot be directly observed, but sound waves can be produced outside the body and their echoes from the body can be recorded. The present work contains five research articles. In the first and fifth articles we show that it is possible to determine the wave speed uniquely by using sound sources and receivers that are far apart. This extends a previously known result which requires the sound waves to be produced and recorded in the same place. Our result is motivated by a possible application to reflection seismology, which seeks to create an image of the Earth's crust from recordings of echoes stimulated, for example, by explosions. For this purpose, the receivers typically cannot lie near the powerful sound sources. In the second article we present a sound source that allows us to recover many essential features of the wave speed from the echo produced by the source. Moreover, these features are known to determine the wave speed under certain geometric assumptions. Previously known results permitted the same features to be recovered only by sequential measurement of echoes produced by multiple different sources. The reduced number of measurements could increase the number of possible applications of acoustic probing. In the third and fourth articles we develop an acoustic probing method to locate discontinuous interfaces in the wave speed. These interfaces typically correspond to interfaces between different materials, and their locations are of interest in many applications. There are many previous approaches to this problem, but none of them exploits sound sources varying freely in time. Our use of more variable sources could allow a more robust implementation of the probing.
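In symbols, the inverse problem described above can be phrased as follows. This is a standard formulation given for orientation; the thesis's precise measurement setups with separated sources and receivers are more specific than this sketch.

```latex
% Acoustic wave equation with an unknown, spatially varying wave speed c(x):
\[
  \partial_t^2 u(t,x) - c(x)^2 \Delta u(t,x) = f(t,x), \qquad u\big|_{t<0} = 0 .
\]
% Inverse problem: recover c from the map  f \mapsto u|_{\text{receivers}},
% where the sources f are supported on an accessible source set and the
% waves u are recorded on a (possibly different) accessible receiver set.
```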