985 results for multiplier of convolution
Abstract:
In this paper we study boundedness of the convolution operator in different Lorentz spaces. In particular, we obtain the limit case of the Young-O'Neil inequality in the classical Lorentz spaces. We also investigate the convolution operator in the weighted Lorentz spaces. Finally, norm inequalities for the potential operator are presented.
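For orientation (standard results, not formulas taken from this paper): the classical Young inequality and O'Neil's Lorentz-space refinement, which results of this type sharpen, can be stated as

    \|f * g\|_{L^{r}} \le \|f\|_{L^{p}} \, \|g\|_{L^{q}}, \qquad \tfrac{1}{r} + 1 = \tfrac{1}{p} + \tfrac{1}{q},
    \|f * g\|_{L^{r,s}} \le C \, \|f\|_{L^{p,s_1}} \, \|g\|_{L^{q,s_2}}, \qquad \tfrac{1}{r} + 1 = \tfrac{1}{p} + \tfrac{1}{q}, \quad \tfrac{1}{s} \le \tfrac{1}{s_1} + \tfrac{1}{s_2},

for suitable exponent ranges (1 < p, q, r < \infty in the Lorentz-space case).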
Abstract:
In previous work we have applied the environmental multi-region input-output (MRIO) method proposed by Turner et al (2007) to examine the ‘CO2 trade balance’ between Scotland and the Rest of the UK. In McGregor et al (2008) we construct an interregional economy-environment input-output (IO) and social accounting matrix (SAM) framework that allows us to investigate methods of attributing responsibility for pollution generation in the UK at the regional level. This facilitates analysis of the nature and significance of environmental spillovers and of the existence of an environmental ‘trade balance’ between regions. While the existence of significant data problems means that the quantitative results of this study should be regarded as provisional, we argue that the use of such a framework allows us to begin to consider questions such as the extent to which a devolved authority like the Scottish Parliament can and should be responsible for contributing to national targets for reductions in emissions levels (e.g. the UK commitment to the Kyoto Protocol) when it is limited in the way it can control emissions, particularly with respect to changes in demand elsewhere in the UK. However, while such analysis is useful in terms of accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. In this paper we argue that where analysis of marginal changes in activity is required, a more flexible interregional computable general equilibrium (CGE) approach, which models behavioural relationships in a more realistic and theory-consistent manner, is more appropriate and informative. To illustrate our analysis, we compare the results of introducing a positive demand stimulus in the UK economy using both IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 ‘trade balance’.
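For reference (the standard structure being criticised here, not a formula from the paper): the conventional demand-driven IO model solves

    x = A x + d \quad\Longrightarrow\quad x = (I - A)^{-1} d, \qquad m_j = \sum_i \big[(I - A)^{-1}\big]_{ij},

where A is the matrix of fixed (Leontief) technical coefficients, d is final demand, x is gross output and m_j is the type I output multiplier of sector j; the fixed-proportions and passive-supply assumptions enter precisely through treating A as constant and letting output adjust fully to demand.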
Abstract:
We examine how openness interacts with the coordination of consumption-leisure decisions in determining the equilibrium working hours and wage rate when there are leisure externalities (e.g., due to social interactions). The latter are modelled by allowing a worker’s marginal utility of leisure to be increasing in the leisure time taken by other workers. Coordination takes the form of internalising the leisure externality and other relevant constraints (e.g., labour demand). The extent of openness is measured by the degree of capital mobility. We find that: coordination lowers equilibrium work hours and raises the wage rate; there is a U-shaped (inverse-U-shaped) relationship between work hours (wages) and the degree of coordination; coordination is welfare improving; and, the gap between the coordinated and uncoordinated work hours (and the corresponding wage rates) is affected by the extent and nature of openness.
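One purely illustrative way (not necessarily the authors' specification) to formalise such a leisure externality is

    U_i = u(c_i) + v(\ell_i, \bar{\ell}), \qquad \frac{\partial^2 v}{\partial \ell_i \, \partial \bar{\ell}} > 0,

so that worker i's marginal utility of own leisure \ell_i rises with the average leisure \bar{\ell} taken by other workers; coordination then means choosing hours while internalising this dependence (and constraints such as labour demand) rather than taking \bar{\ell} as given.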
Abstract:
In an input-output context the impact of any particular industrial sector is commonly measured in terms of the output multiplier for that industry. Although such measures are routinely calculated and often used to guide regional industrial policy, the behaviour of such measures over time is an area that has attracted little academic study. The output multipliers derived from any one table will have a distribution; for some industries the multiplier will be relatively high, for some it will be relatively low. The recent publication of consistent input-output tables for the Scottish economy makes it possible to examine trends in this distribution over the ten-year period 1998-2007. This is done by comparing the means and other summary measures of the distributions, the histograms and the cumulative densities. The results indicate a tendency for the multipliers to increase over the period. A Markov chain modelling approach suggests that this drift is a slow but long-term phenomenon which appears not to tend to an equilibrium state. The prime reason for the increase in the output multipliers is traced to a decline in the relative importance of imported (both from the rest of the UK and the rest of the world) intermediate inputs used by Scottish industries. This suggests that models calibrated on this set of tables might have to be interpreted with caution.
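As a minimal sketch (with a hypothetical four-sector coefficient matrix, not the Scottish tables), the output multipliers and the summary measures of their distribution for one table can be computed as:

    import numpy as np

    # Hypothetical domestic technical-coefficients matrix A
    # (column j = intermediate inputs per unit of sector j's output).
    A = np.array([
        [0.10, 0.05, 0.02, 0.08],
        [0.07, 0.12, 0.04, 0.03],
        [0.03, 0.06, 0.15, 0.05],
        [0.02, 0.04, 0.06, 0.10],
    ])

    # Leontief inverse and type I output multipliers (column sums).
    L = np.linalg.inv(np.eye(A.shape[0]) - A)
    multipliers = L.sum(axis=0)

    # Summary measures of the multiplier distribution for this table;
    # repeating the calculation for each year's table gives the trend.
    print("multipliers:", np.round(multipliers, 3))
    print("mean:", multipliers.mean(), "median:", np.median(multipliers))

Repeating this for each table in the 1998-2007 series and comparing the resulting summary measures, histograms and cumulative densities is the kind of exercise described above.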
Abstract:
We obtain upper and lower estimates of the (p, q) norm of the convolution operator. The upper estimate sharpens the Young-type inequalities due to O'Neil and Stepanov.
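For context (standard definitions rather than the paper's specific bounds): for a fixed kernel k, the (p, q) norm of the convolution operator K f = k * f is the operator norm

    \|K\|_{p \to q} = \sup_{\|f\|_{L^p} \le 1} \|k * f\|_{L^q},

and Young's inequality provides the upper bound \|K\|_{p \to q} \le \|k\|_{L^{r}} when \tfrac{1}{q} + 1 = \tfrac{1}{p} + \tfrac{1}{r}; estimates of the type summarised above sharpen such upper bounds and complement them with lower bounds.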
Abstract:
We study the effects of government spending using a structural, large-dimensional, dynamic factor model. We find that the government spending shock is non-fundamental for the variables commonly used in the structural VAR literature, so that its impulse response functions cannot be consistently estimated by means of a VAR. Government spending raises both consumption and investment, with no evidence of crowding out. The impact multiplier is 1.7 and the long-run multiplier is 0.6.
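One common convention (variants exist; this is not necessarily the paper's exact definition) for the two numbers quoted above, in terms of the impulse responses of output Y and spending G to the spending shock, is

    m_{\mathrm{impact}} = \frac{\partial Y_t}{\partial G_t}, \qquad
    m_{\mathrm{long\ run}} = \frac{\sum_{h=0}^{H} \partial Y_{t+h} / \partial G_t}{\sum_{h=0}^{H} \partial G_{t+h} / \partial G_t},

i.e. the impact multiplier is the output response per unit of spending on impact, while the cumulative (long-run) multiplier rescales the cumulated output response by the cumulated path of spending itself.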
Abstract:
The Chlamydiales is an important bacterial order that comprises some of the most successful human pathogens, such as Chlamydia trachomatis, the leading infectious cause of blindness worldwide. In recent years, several new bacteria related to Chlamydia have been discovered in clinical or environmental samples and might represent emerging pathogens. The genome sequencing of classical Chlamydia has brought invaluable information on these obligate intracellular bacteria, which are otherwise difficult to study owing to the lack of tools for basic genetic manipulation. The recent emergence of high-throughput sequencing technologies yielding millions of reads in a short time has lowered the cost of genome sequencing and thus represents a unique opportunity to study Chlamydia-related bacteria. Based on the sequencing and analysis of Chlamydiales genomes, this thesis provides significant insights into the genetic determinants of the intracellular lifestyle, pathogenicity, metabolism and evolution of Chlamydia-related bacteria. A first approach showed the efficacy of rapid sequencing coupled to proteomics to identify immunogenic proteins. This method, particularly useful for an emerging pathogen such as Parachlamydia acanthamoebae, enabled us to discover good candidates for the development of diagnostic tools that would make it possible to evaluate on a larger scale the role of this bacterium in disease. Second, the complete genome of Waddlia chondrophila, a potential agent of miscarriage, encodes numerous virulence factors to manipulate its host cell and resist environmental stresses. The reconstruction of metabolic pathways showed that the bacterium possesses extensive capabilities compared to related organisms; however, it is still incapable of synthesizing some essential components and thus has to import them from its host. Third, the genome comparison of Protochlamydia naegleriophila to its closest known relative, Protochlamydia amoebophila, revealed a particular evolutionary dynamic, with the occurrence of an unexpected genome rearrangement. Fourth, a phylogenetic analysis of P. acanthamoebae and Legionella drancourtii identified several genes probably exchanged by horizontal gene transfer with other intracellular bacteria, transfers that may have occurred within their amoebal host. These genes often encode mechanisms for resistance to metals or toxic compounds. As a whole, the analysis of the different genomes enabled us to highlight a large diversity in size, GC content, repeat content and plasmid organization. The abundant genomic data obtained during this thesis have a wide impact, since they provide the necessary basis for detailed investigations into countless aspects of the biology and evolution of Chlamydia-related bacteria, whether in the wet lab or through bioinformatic analyses.
Résumé: The Chlamydiales is an important bacterial order that includes numerous species pathogenic to humans and animals, including Chlamydia trachomatis, the agent of trachoma, the leading infectious cause of blindness worldwide. Over recent decades, numerous Chlamydia-related bacteria have been discovered in environmental or clinical samples, but their possible pathogenic role in disease remains poorly understood. These bacteria are obligate intracellular organisms, requiring a host cell in order to multiply, which makes them particularly difficult to study. The development of new technologies that make it possible to sequence an organism's genome quickly and at lower cost, together with the growth of the associated analysis methods, represents an exceptional opportunity to study these organisms. In this context, this thesis demonstrates the usefulness of genomics for developing new diagnostic tools and for studying the metabolism, virulence factors and evolution of these bacteria. A first approach illustrated the usefulness of rapid sequencing for obtaining the information needed to identify proteins recognized by human or animal antibodies. This method, particularly useful for an emerging pathogen such as Parachlamydia acanthamoebae, led to the discovery of good candidates for the development of a diagnostic tool that would make it possible to evaluate on a larger scale the role of this bacterium, notably in pneumonia. Analysis of the gene content of Waddlia chondrophila, another organism that may be involved in abortion and miscarriage, furthermore revealed numerous known factors that allow it to manipulate its host. This bacterium has greater metabolic capabilities than the other Chlamydia, but it is unable to synthesize certain components and must therefore import them from its host. Comparison of the genome of Protochlamydia naegleriophila with its closest relative, Protochlamydia amoebophila, revealed a distinctive evolutionary dynamic, with an unexpected major rearrangement occurring after the separation of these two species. In addition, these studies showed the occurrence of several gene transfers with other, more distantly related organisms, notably other amoeba-associated intracellular bacteria, often involving the acquisition of resistance mechanisms against toxic compounds. The genomic data acquired during this work lay the foundations for numerous analyses that will progressively lead to a better understanding of many aspects of these fascinating bacteria.
Abstract:
The two main alternative methods used to identify key sectors within the input-output approach, the Classical Multiplier method (CMM) and the Hypothetical Extraction method (HEM), are formally and empirically compared in this paper. Our findings indicate that the main distinction between the two approaches stems from the role of the internal effects. These internal effects are quantified under the CMM, while under the HEM only external impacts are considered. In our comparison we find, however, that CMM backward measures are more influenced by within-block effects than the forward indices proposed under this approach. The conclusions of this comparison allow us to develop a hybrid proposal that combines the two existing approaches. This hybrid model has the advantage of making it possible to distinguish and disaggregate external effects from those that are purely internal. The proposal is also of interest in terms of policy implications: the hybrid approach may provide useful information for the design of 'second best' stimulus policies that aim at a more balanced perspective between overall economy-wide impacts and their sectoral distribution.
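In standard notation (a textbook formulation, not necessarily the paper's own), the two measures being compared are

    B_j = \sum_i \big[(I - A)^{-1}\big]_{ij} \quad \text{(CMM backward multiplier of sector } j\text{)},
    T_j = \sum_{i \ne j} \big( x_i - \bar{x}^{(-j)}_i \big), \qquad \bar{x}^{(-j)} = \big(I - \bar{A}^{(-j)}\big)^{-1} d \quad \text{(one common HEM version)},

where \bar{A}^{(-j)} is the coefficient matrix with sector j's row and column set to zero. Because T_j measures only the output forgone in the rest of the system once sector j is hypothetically extracted, within-sector (internal) effects drop out of the HEM but remain inside the CMM measure, which is the distinction exploited by the hybrid proposal.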
Abstract:
In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a very complicated open problem. Two distinct options are available: integration and segregation. In an integration approach all the traffic from different connections is multiplexed onto one VP. This implies that the most restrictive QOS requirements must be applied to all services; link utilization will therefore be decreased because unnecessarily stringent QOS is provided to all connections. With the segregation approach the problem can be much simplified if the different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links); resources may then not be efficiently utilized because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
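As a minimal sketch of the convolution approach for a single traffic class, under simplifying assumptions (independent on/off connections with a common peak rate, bandwidth discretised in units; all parameter values are illustrative):

    import numpy as np

    def aggregate_pmf(n_sources: int, p_on: float, peak_units: int) -> np.ndarray:
        """PMF of total instantaneous bandwidth demand, built by convolving
        the two-point PMF of each on/off source (0 or peak_units)."""
        single = np.zeros(peak_units + 1)
        single[0] = 1.0 - p_on          # source idle
        single[peak_units] = p_on       # source at peak rate
        pmf = np.array([1.0])
        for _ in range(n_sources):
            pmf = np.convolve(pmf, single)
        return pmf

    def prob_congestion(pmf: np.ndarray, capacity_units: int) -> float:
        """Probability that the required bandwidth exceeds the VP/link capacity."""
        return float(pmf[capacity_units + 1:].sum())

    pmf = aggregate_pmf(n_sources=50, p_on=0.2, peak_units=2)   # illustrative values
    print("PC:", prob_congestion(pmf, capacity_units=30))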
Abstract:
In networks with small buffers, such as networks based on optical packet switching (OPS), the convolution approach is one of the most accurate methods used for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM; this paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic in bufferless networks the enhanced convolution approach is a good solution. However, both methods (CA and ECA) present a high computational cost for a large number of connections. Two new mechanisms (UMCA and ISCA), based on the Monte Carlo method, are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
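A sketch of the Monte Carlo idea behind such proposals (a plain estimator of the congestion probability, not the specific UMCA or ISCA mechanisms; the traffic mix is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative heterogeneous on/off connections: (on-probability, peak rate in units).
    connections = [(0.2, 2)] * 40 + [(0.05, 10)] * 10
    capacity_units = 60
    n_samples = 100_000

    p_on = np.array([p for p, _ in connections])
    peaks = np.array([r for _, r in connections])

    # Sample every connection's instantaneous state and estimate
    # PC = P(total demand > capacity) by the empirical frequency.
    active = rng.random((n_samples, len(connections))) < p_on
    demand = (active * peaks).sum(axis=1)
    print("Monte Carlo PC estimate:", float(np.mean(demand > capacity_units)))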
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate study of the system load in statistical terms through accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculations difficult, so many authors do not consider this approach. With the aim of reducing the cost, we propose to use the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
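A hedged sketch of the underlying idea of replacing per-connection convolutions by counting distributions, here one binomial per homogeneous class combined across classes (not necessarily the exact ECA formulation; parameter values are illustrative):

    import numpy as np
    from scipy.stats import binom

    def class_pmf(n_conns: int, p_on: float, peak_units: int) -> np.ndarray:
        """Bandwidth PMF of one homogeneous class: the number of active
        connections is Binomial(n, p), each contributing peak_units."""
        counts_pmf = binom.pmf(np.arange(n_conns + 1), n_conns, p_on)
        pmf = np.zeros(n_conns * peak_units + 1)
        pmf[::peak_units] = counts_pmf          # demand = k * peak_units
        return pmf

    # Two illustrative classes; one convolution per extra class instead of
    # one convolution per connection.
    pmf = np.convolve(class_pmf(40, 0.2, 2), class_pmf(10, 0.05, 10))
    capacity_units = 60
    print("PC:", float(pmf[capacity_units + 1:].sum()))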
Abstract:
We study the effects of government spending on the distribution of consumption. We find a substantial degree of heterogeneity: consumption increases at the bottom and falls at the top of the distribution, implying a significant temporary reduction in consumption inequality. The effects of the shock display correlations of around -0.7 to -0.9 with the percentage of stockholders within each decile. We interpret the results as being in line with, and lending support to, models of limited participation in which Ricardian equivalence holds for rich households while, for poor households with no access to capital markets, the Keynesian multiplier is at work.
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations like the ICRU and AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries such as France. Various in vivo dosimetry methods have been developed during the past years. These methods are point-, line-, plane- or 3D dose controls. A 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet routinely applied to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We also discuss the limitations of the 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy built-in onboard MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2 the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'.
A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1 and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2. - In vivo dosimetry is a technique used to verify the dose delivered to the patient by performing a measurement, generally during the first treatment session. It is the only delivered-dose control technique based on a measurement made during the patient's irradiation. The dose to the patient is calculated by means of 3D algorithms using volumetric images of the patient. Owing to the high accuracy required in radiotherapy treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry, which has become mandatory in some countries, including France. Various in vivo dosimetry methods exist; they can be classified as point, planar or three-dimensional dosimetry. 3D dosimetry is the one that provides the most information about the delivered dose; however, to our knowledge, it is generally not applied in routine clinical practice. The aim of this research was to determine whether it is possible to reconstruct the delivered 3D dose from measurements of the transmitted dose, in the context of narrow beams. An iterative dose reconstruction method was described and implemented. The iterative algorithm contains a simple algorithm based on the convolution/superposition principle for the dose calculation. The transmitted dose was measured using a row of aligned ionisation chambers so as to simulate the linear nature of the tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the delivered dose with good accuracy (at least 3% locally / 3 mm). Moreover, we demonstrated that this method can detect certain patient set-up errors, as well as geometric changes that may occur between treatment sessions. We discussed the limitations of this method for the detection of certain irradiation errors. Subsequently, stability tests of the MVCT detector integrated into the tomotherapy unit were performed, in order to determine whether it can be used for in vivo dosimetry. This detector showed short- and long-term stability comparable to that of other detectors, such as the EPIDs, which are also used for imaging and in vivo dosimetry. Finally, an adaptation of the dose reconstruction method was proposed so that it can be implemented on a tomotherapy unit. This manuscript is composed of two articles and a script containing additional information on this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in 3D reconstruction of the delivered dose. In chapter 2, the 3D dose calculation algorithm implemented for this work is described, together with the main physical parameters needed for the dose calculation. The characteristics of the tomotherapy MVCT detector used for the transit measurements are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the reconstruction method and presents tests of the methodology on phantoms irradiated with narrow beams. Chapter 5 contains a second article entitled 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and the perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy concept is described in appendix 1. Finally, 3D conformal radiotherapy and intensity-modulated radiotherapy are presented in appendix 2.
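Purely as a schematic illustration of an iterative reconstruction loop of this general kind, under strong simplifications (a 1D fluence profile, a Gaussian scatter kernel, a multiplicative update from the measured transmission; this is a hypothetical toy model, not the algorithm of the thesis):

    import numpy as np

    def forward_transmitted(fluence, kernel, attenuation):
        """Toy forward model: transmitted signal = attenuated fluence blurred by a
        scatter kernel (a stand-in for a convolution/superposition dose engine)."""
        return attenuation * np.convolve(fluence, kernel, mode="same")

    def reconstruct(measured, kernel, attenuation, n_iter=200):
        """Iteratively correct a fluence estimate until the modelled transmitted
        signal matches the measurement (multiplicative, Richardson-Lucy-style update)."""
        fluence = np.ones_like(measured)
        for _ in range(n_iter):
            modelled = forward_transmitted(fluence, kernel, attenuation)
            ratio = measured / np.maximum(modelled, 1e-12)
            fluence *= np.convolve(ratio, kernel, mode="same")  # kernel is symmetric
        return fluence

    # Synthetic narrow-beam example: recover a fluence profile from its
    # simulated transmitted measurement.
    kernel = np.exp(-0.5 * np.linspace(-3.0, 3.0, 21) ** 2)
    kernel /= kernel.sum()
    true_fluence = np.zeros(101)
    true_fluence[45:56] = 1.0
    measured = forward_transmitted(true_fluence, kernel, attenuation=0.3)
    estimate = reconstruct(measured, kernel, attenuation=0.3)
    refit = forward_transmitted(estimate, kernel, attenuation=0.3)
    print("relative data residual:", np.linalg.norm(refit - measured) / np.linalg.norm(measured))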
Abstract:
A long-standing controversy is whether autophagy is a bona fide cause of mammalian cell death. We used a cell-penetrating autophagy-inducing peptide, Tat-Beclin 1, derived from the autophagy protein Beclin 1, to investigate whether high levels of autophagy result in cell death by autophagy. Here we show that Tat-Beclin 1 induces dose-dependent death that is blocked by pharmacological or genetic inhibition of autophagy, but not of apoptosis or necroptosis. This death, termed "autosis," has unique morphological features, including increased autophagosomes/autolysosomes and nuclear convolution at early stages, and focal swelling of the perinuclear space at late stages. We also observed autotic death in cells during stress conditions, including in a subpopulation of nutrient-starved cells in vitro and in hippocampal neurons of neonatal rats subjected to cerebral hypoxia-ischemia in vivo. A chemical screen of ~5,000 known bioactive compounds revealed that cardiac glycosides, antagonists of Na(+),K(+)-ATPase, inhibit autotic cell death in vitro and in vivo. Furthermore, genetic knockdown of the Na(+),K(+)-ATPase α1 subunit blocks peptide and starvation-induced autosis in vitro. Thus, we have identified a unique form of autophagy-dependent cell death, a Food and Drug Administration-approved class of compounds that inhibit such death, and a crucial role for Na(+),K(+)-ATPase in its regulation. These findings have implications for understanding how cells die during certain stress conditions and how such cell death might be prevented.
Abstract:
We introduce simple nonparametric density estimators that generalize the classical histogram and frequency polygon. The new estimators are expressed as linear combinations of density functions that are piecewise polynomials, where the coefficients are optimally chosen in order to minimize the integrated squared error of the estimator. We establish the asymptotic behaviour of the proposed estimators, and study their performance in a simulation study.
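For concreteness, a short sketch of the two classical estimators being generalised (plain numpy with an arbitrary fixed bin width; the estimators proposed in the paper instead choose the piecewise-polynomial coefficients to minimise the integrated squared error):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)          # illustrative sample

    # Classical histogram estimator on equal-width bins.
    bins = np.arange(x.min() - 0.5, x.max() + 1.0, 0.5)
    counts, edges = np.histogram(x, bins=bins)
    hist_density = counts / (len(x) * np.diff(edges))

    # Frequency polygon: linear interpolation between bin-midpoint heights,
    # i.e. a piecewise-linear density estimate.
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    grid = np.linspace(edges[0], edges[-1], 400)
    freq_polygon = np.interp(grid, midpoints, hist_density)

    print("histogram integrates to", float(np.sum(hist_density * np.diff(edges))))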