825 results for Publicly Traded
Abstract:
The present study aims at assessing the innovation strategies adopted within a regional economic system, the Italian region of Emilia-Romagna, as it faced the challenges of a changing international scenario. As the strengthening of regional innovative capabilities is regarded as a keystone for fostering a new phase of economic growth, it is also important to understand how the local industrial, institutional, and academic actors have tackled the problem of innovation in the recent past. In this study we explore the approaches to innovation and the strategies adopted by the main regional actors through three different case studies. Chapter 1 provides a general survey of the innovative performance of the regional industries over the past two decades, as it emerges from statistical data and systematic comparisons at the national and European levels. The chapter also discusses the innovation policies that the regional government has put in place since 2001 in order to strengthen collaboration among local economic actors, including universities and research centres. As mechanical engineering is the most important regional industry, chapter 2 analyses the combination of knowledge and practices used from the 1960s to the 1990s in the design of a particular kind of machinery produced by G.D S.p.A., a world leader in the market for tobacco packaging machines. G.D is based in Bologna, the region’s capital, and is at the centre of the most important Italian packaging district. In chapter 3 the attention turns to the institutional level, focusing on how the local public administrations and the local, publicly owned utility companies dealt with the creation of new telematic networks across the region during the 1990s and 2000s. Finally, chapter 4 assesses the technology transfer carried out by the main university of the region – the University of Bologna – by focusing on the patenting activities involving its research personnel in the period 1960-2010.
Abstract:
This thesis focuses on two aspects of European economic integration: exchange rate stabilization between non-euro countries and the euro area, and real and nominal convergence of Central and Eastern European countries. Each chapter covers these aspects from both a theoretical and an empirical perspective. Chapter 1 investigates whether the introduction of the euro was accompanied by a shift in the de facto exchange rate policy of European countries outside the euro area, using methods recently developed in the literature to detect "Fear of Floating" episodes. I find that, after the euro's introduction, European inflation targeters tried to stabilize their exchange rates against it, whereas fixed exchange rate arrangements, apart from official policy changes, remained stable. Finally, the euro seems to have gained a relevant role as a reference currency even outside Europe. Chapter 2 proposes an approach to estimating central bank preferences starting from the central bank's optimization problem within a small open economy, using Sweden as a case study, to find out whether stabilization of the exchange rate played a role in the monetary policy rule of the Riksbank. The results show that it did not influence interest rate setting; exchange rate stabilization probably occurred as a result of increased economic integration and business cycle convergence. Chapter 3 studies the interactions between wages in the public sector, the traded private sector and the closed sector in ten EU transition countries. The theoretical literature on wage spillovers suggests that the traded sector should be the leader in wage setting, with non-traded sector wages adjusting. We show that there is large heterogeneity across countries and that sheltered- and public-sector wages are often the leaders in wage determination. This result is relevant from a policy perspective since wage spillovers, leading to costs growing faster than productivity, may affect the international cost competitiveness of the traded sector.
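To make the empirical strategy of Chapter 1 more concrete, the sketch below computes one of the simplest indicators used in this literature, in the spirit of Calvo and Reinhart's "fear of floating" work: the share of monthly exchange-rate changes that stay within a narrow band, compared with the variability of the policy rate. The data are invented and the indicator is only illustrative of the kind of measure involved, not the thesis's actual method.

```python
# Illustrative sketch (not the thesis's actual method): a simple
# Calvo-Reinhart-style "fear of floating" indicator computed from
# hypothetical monthly data.
import numpy as np

def within_band_share(pct_changes, band):
    """Share of monthly percentage changes whose absolute value stays within `band`."""
    pct_changes = np.asarray(pct_changes, dtype=float)
    return np.mean(np.abs(pct_changes) <= band)

# Hypothetical monthly % changes of the home-currency/euro rate and of the policy rate.
rng = np.random.default_rng(0)
exch_rate_changes = rng.normal(0.0, 0.8, size=120)    # fairly stable exchange rate
policy_rate_changes = rng.normal(0.0, 0.5, size=120)  # actively moved policy rate

p_exch = within_band_share(exch_rate_changes, band=2.5)     # P(|de| <= 2.5%)
p_rate = within_band_share(policy_rate_changes, band=0.25)  # P(|di| <= 25 bp)

print(f"P(|exchange-rate change| <= 2.5%) = {p_exch:.2f}")
print(f"P(|policy-rate change| <= 25 bp)  = {p_rate:.2f}")
```

A country that officially floats but shows a high share of small exchange-rate changes together with frequent policy-rate movements would be flagged as a candidate fear-of-floating episode.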
Abstract:
The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems by considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands, like wildfire, toward low-quality regions. A precise estimation of the orientation field would greatly simplify the estimation of other fingerprint features (singular points, minutiae) and improve the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after introducing a new taxonomy of fingerprint orientation extraction methods, we implement several variants of baseline methods and, by pointing out the role of pre- and post-processing, show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, allows a significant improvement of orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework for comparing fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is discussed by providing relevant statistics: more than 1450 submitted algorithms and two international competitions.
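As a concrete illustration of the kind of baseline orientation extraction the thesis builds on, the sketch below estimates a block-wise ridge orientation field with the classic gradient-based method; the block size, the Sobel gradients and the synthetic input are placeholders rather than the parameters actually used in the thesis.

```python
# A minimal sketch of a classic gradient-based fingerprint orientation estimator
# (one baseline among the methods a taxonomy like the one above would cover).
import numpy as np
from scipy import ndimage

def block_orientation_field(img, block=16):
    """Return one ridge orientation (radians) per block of a grayscale fingerprint."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    rows, cols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * block, (r + 1) * block), slice(c * block, (c + 1) * block))
            gxx = np.sum(gx[sl] ** 2)
            gyy = np.sum(gy[sl] ** 2)
            gxy = np.sum(gx[sl] * gy[sl])
            # Dominant gradient direction; ridges run perpendicular to it.
            theta[r, c] = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0
    return theta

# Usage with a synthetic image (a real system would load a fingerprint scan):
field = block_orientation_field(np.random.rand(256, 256))
print(field.shape)  # (16, 16) block-wise orientations
```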
Abstract:
The general theme of the present inquiry concerns the role of training and of the continuous updating of knowledge and skills in relation to the concepts of employability and social vulnerability. The empirical research covered almost the entire calendar year 2010, namely from 13 February 2010 to 31 December 2010; the data refer to a very specific context, namely training courses funded by the Emilia-Romagna region and targeted at workers on exceptional wage supplementation schemes (cassa integrazione in deroga) domiciled in the region. The investigation was carried out at a vocational training body accredited by the Emilia-Romagna region for the provision of publicly funded training courses. The quantitative data collected are limited to the region and cover all the provinces of Emilia-Romagna. The study addressed the role of lifelong continuing education and the importance of updating knowledge and skills, regarded as privileged instruments for coping with the instability of the labor market and as a strategy for reducing the risk of unemployment. Starting from the different strategies that workers put in place during their professional careers, we introduce two concepts that have become common in the so-called knowledge society: social vulnerability and employability. In modern organizations, what becomes relevant is the knowledge that workers bring with them and the relationships that develop between people, which allow such knowledge and skills to grow and spread. Knowledge thus becomes the primary productive force, defined by Davenport and Prusak (1998) as a "fluid combination of experience, values, contextual information and specialist knowledge that provides a framework for the evaluation and assimilation of new experience and new information". Learning at work is not always explicit and conscious, nor enjoyable for everyone, especially outside of a structured training intervention. The study then addresses the specific issue of training in a labor market that is increasingly deconstructed.
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integrals in their Feynman parametric representation and study their mathematical properties, partially applying graph theory, algebraic geometry and number theory. The three main topics are the graph theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm, we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
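As a small illustration of the matrix-tree construction mentioned above, the sketch below builds the first Symanzik polynomial of a graph from the determinant of a minor of its Laplacian matrix, with each edge weighted by the inverse of its Feynman parameter; the example graph (the two-loop sunrise topology) is chosen purely for illustration.

```python
# First Symanzik polynomial via the matrix-tree theorem:
#   U(x) = sum over spanning trees T of prod_{e not in T} x_e.
import sympy as sp

def first_symanzik(num_vertices, edges, x):
    """edges: list of (u, v) vertex pairs; x: one Feynman parameter per edge."""
    L = sp.zeros(num_vertices, num_vertices)
    for (u, v), xe in zip(edges, x):
        w = 1 / xe              # weight each edge by the inverse Feynman parameter
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    minor = L[1:, 1:]           # delete one row and the corresponding column
    # Matrix-tree theorem: det(minor) = sum over spanning trees T of prod_{e in T} 1/x_e,
    # so multiplying by prod_e x_e gives U = sum_T prod_{e not in T} x_e.
    return sp.expand(sp.cancel(minor.det() * sp.Mul(*x)))

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
edges = [(0, 1), (0, 1), (0, 1)]   # sunrise topology: 2 vertices joined by 3 edges
print(first_symanzik(2, edges, [x1, x2, x3]))   # -> x1*x2 + x1*x3 + x2*x3
```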
Abstract:
From the late 1980s, the automation of sequencing techniques and the spread of computers gave rise to a rapidly growing number of new molecular structures and sequences and to a proliferation of new databases in which to store them. Three computational approaches are presented here that are able to analyse the massive amount of publicly available data in order to answer important biological questions. The first strategy studies the incorrect assignment of the first AUG codon in a messenger RNA (mRNA), due to the incomplete determination of its 5' end sequence. An extension of the mRNA 5' coding region was identified in 477 human loci, out of all known human mRNAs analysed, using an automated expressed sequence tag (EST)-based approach. Proof-of-concept confirmation was obtained by in vitro cloning and sequencing for GNB2L1, QARS and TDP2, and the consequences for functional studies are discussed. The second approach analyses the codon bias, the phenomenon in which distinct synonymous codons are used with different frequencies, and, following integration with a gene expression profile, estimates the total number of codons present across all the expressed mRNAs (named here the "codonome value") in a given biological condition. Systematic analyses across different pathological and normal human tissues and multiple species show a surprisingly tight correlation between the codon bias and the codonome bias. The third approach is used to study the expression of genes implicated in human autism spectrum disorder (ASD). ASD-implicated genes sharing microRNA response elements (MREs) for the same microRNA are co-expressed in brain samples from healthy and ASD-affected individuals. The differential expression of a recently identified long non-coding RNA, which has four MREs for the same microRNA, could disrupt the equilibrium of this network, but further analyses and experiments are needed.
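To illustrate the kind of codon counting that underlies the codon bias and "codonome" analyses described above, the sketch below counts synonymous codon usage per transcript and weights the counts by an expression profile; the sequences and expression values are invented.

```python
# Toy illustration: expression-weighted codon counts over a small transcript set.
from collections import Counter

def codon_counts(cds):
    """Count codons in a coding sequence (length assumed to be a multiple of 3)."""
    return Counter(cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3))

transcripts = {            # hypothetical coding sequences
    "geneA": "ATGGCTGCAGCTTAA",
    "geneB": "ATGGCAGCGGCGTGA",
}
expression = {"geneA": 120.0, "geneB": 30.0}   # hypothetical expression levels

pooled = Counter()
for gene, cds in transcripts.items():
    for codon, n in codon_counts(cds).items():
        pooled[codon] += n * expression[gene]   # expression-weighted codon count

total = sum(pooled.values())
for codon, n in pooled.most_common():
    print(f"{codon}: weighted count {n:.0f} ({n / total:.1%} of the expressed codon pool)")
```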
Abstract:
The last decade has witnessed the establishment of a Standard Cosmological Model, which is based on two fundamental assumptions: the first is the existence of a new, non-relativistic kind of particle, i.e. Dark Matter (DM), which provides the potential wells in which structures form, while the second is the presence of Dark Energy (DE), the simplest form of which is represented by the Cosmological Constant Λ, which sources the acceleration of the expansion of our Universe. These two features are summarized by the acronym ΛCDM, which is an abbreviation used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows a remarkably successful agreement with most of the available observations, it presents some longstanding unsolved problems. A possible way to solve these problems is the introduction of a dynamical Dark Energy, in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large under-dense regions in the Universe devoid of matter. Being nearly empty of matter, their dynamics is supposed to be dominated by DE, to the nature of which the properties of cosmic voids should therefore be very sensitive. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative competing cosmological models. In particular, we used the ZOBOV code (Neyrinck 2008), a publicly available void finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (Baldi 2012). The CoDECS are the largest N-body simulations of interacting Dark Energy (DE) models to date. We identify suitable criteria to produce void catalogues with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model, at different redshifts. This thesis work is organized as follows: in chapter 1, the Standard Cosmological Model as well as the main properties of cosmic voids are introduced. In chapter 2, we present the scalar field scenario. In chapter 3, the tools, methods and criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5, the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6, we summarize our results and propose further developments of this work.
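As an illustration of one of the statistics discussed in chapter 4, the sketch below computes a cumulative void size function n(>R) from a catalogue of effective void radii; the input radii, the box size and any file layout are placeholders, not the actual CoDECS/ZOBOV products.

```python
# Cumulative void size function n(>R) from a (placeholder) catalogue of void radii.
import numpy as np

def cumulative_void_size_function(radii, box_size, r_bins):
    """Number density of voids with effective radius larger than each bin edge."""
    radii = np.asarray(radii, dtype=float)
    volume = box_size ** 3
    return np.array([(radii > r).sum() / volume for r in r_bins])

# Placeholder catalogue of effective radii in Mpc/h (a real run would load the
# void-finder output, e.g. radii = np.loadtxt("voids_catalogue.txt", usecols=0)).
rng = np.random.default_rng(42)
radii = rng.lognormal(mean=2.3, sigma=0.4, size=5000)

r_bins = np.linspace(5.0, 40.0, 8)
n_gt_R = cumulative_void_size_function(radii, box_size=1000.0, r_bins=r_bins)
for r, n in zip(r_bins, n_gt_R):
    print(f"n(>R={r:5.1f} Mpc/h) = {n:.3e} (h/Mpc)^3")
```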
Abstract:
This study explores the links between migration processes and experiences of health and illness, starting from an investigation of migration from Latin America to Emilia-Romagna. At the same time, it examines the terms of the debate on the spread of Chagas disease, a "forgotten tropical infection" endemic in Central and South America which, owing to the increase in transnational migration flows, is now being reframed as "emerging" in some immigration contexts. Drawing on the theoretical and methodological paradigms of medical anthropology, global health and migration studies, the study investigates the nature of the relationship between "forgetting" and "emergency" in the policies that characterize the European, and specifically the Italian, migration context. It analyses issues related to the legitimacy of the actors involved in redefining the phenomenon in the public sphere; to the visions informing the health strategies for managing the infection; and to the possible repercussions of these visions on care practices. Part of the research was carried out in the hospital ward where the first diagnosis and treatment service for the infection in Emilia-Romagna was implemented. An ethnography was therefore conducted both inside and outside the service, involving the main subjects of the field of inquiry – Latin American immigrants and health workers – with the aim of capturing visions, logics and practices, starting from an analysis of the legislation regulating access to the public health service in Italy. Through the collection of biographical narratives, the study has helped shed light on particular migration and life trajectories in the local context; it has allowed a reflection on the validity of categories such as "Latin American", used by the scientific community in close association with Chagas; and it has reframed the meaning of an approach attentive to cultural connotations within a broader rethinking of the forms of inclusion and participation aimed at accommodating the most strongly perceived health needs and the subjective experiences of illness.
Abstract:
The enigma of the relationship between Jesus and John the Baptist has always stimulated the historical imagination of scholars, giving rise to a variety of hypotheses and assessments, often widely divergent. Nevertheless, all agree on one point: that, at least in its major Galilean phase, the ministry of Jesus was an essentially autonomous reality, distinct, original and irreducible with respect to John's mission. Against this "default setting", the present study argues the thesis that Jesus carried out his mission as an intentional and programmatic continuation of John's prematurely interrupted work. The first part examines in depth what memory of the relationship is preserved in the earliest sources, namely Q (analysed here with particular attention) and Mark – to which Matthew is added, which, because of its close historical and sociological link with Q, offers an illuminating example of a retelling of memory that is highly original and yet profoundly faithful. The conclusion is that the earliest memory of the Jesus-John relationship is deeply marked by aspects of agreement, conformity and alignment. The second part examines a series of traditions attesting that Jesus was publicly perceived in relation to the Baptist and that he himself shared and fostered this perception, appealing to John in polemic against his opponents and portraying him as a figure of capital importance in his preaching and in his teaching to followers and disciples. Finally, the study argues for the existence of broad and substantial areas of agreement between the two with regard to eschatology, ethical instruction and social programme, penitential mission to sinners and baptismal activity. The hypothesis that Jesus carried forward John's reforming activity, in terms of a "penitential-baptismal-exorcistic" purification campaign in preparation for the coming of God, finally makes it possible to harmonize satisfactorily the two most characteristic aspects of Jesus' activity (normally juxtaposed, when not opposed): eschatology and miracles, Jesus the prophet and Jesus the wonder-worker.
Abstract:
This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that, in the near future, depth cameras will remain a cheap and low-power 3D sensing alternative, suitable also for mobile devices. Therefore, our contributions build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large scale surface reconstruction and semantic modeling. First, we will describe an effective approach dealing with Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state of the art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large scale surface modeling from depth maps exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets as well as on data recorded with our RGB-D camera, even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework to effectively combine object detection and SLAM in a unified system. Though the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will refer, again, only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of jointly performing object detection, camera tracking and environment mapping.
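To make the reprojection step concrete, the sketch below back-projects a depth map into a 3D point cloud with the standard pinhole camera model; the intrinsics and the synthetic depth image are placeholders rather than the parameters of any sensor used in the thesis.

```python
# Back-projecting a depth map into a 3D point cloud with the pinhole model.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: HxW array of depths in metres (0 = invalid). Returns Nx3 points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]           # drop invalid (zero-depth) pixels

# Placeholder intrinsics roughly in the range of consumer RGB-D sensors.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.full((480, 640), 1.5)              # synthetic 1.5 m flat wall
cloud = depth_to_points(depth, fx, fy, cx, cy)
print(cloud.shape)                            # (307200, 3)

# A SLAM pipeline would then transform `cloud` by the current camera pose
# (a 4x4 matrix from tracking) before fusing it into the global model.
```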
Abstract:
In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used so that the development of complex machines can be supported as well as possible by computer systems. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can actually be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful for this task. They generate a large number of random placements for the object to be installed or removed and use a collision detection mechanism to check each placement for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. One difficulty for this class of planners are so-called "narrow passages", which arise wherever the freedom of movement of the objects to be planned is strongly restricted. In such places it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may be needed to achieve good performance.

This thesis is divided into two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we choose a problem setting in which the same two objects are tested for collision in a large number of different relative placements. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All of the described methods are parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we investigate the impact of different memory access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when a lower accuracy of the tests is tolerable, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equip the planner with an operation for pushing objects free of contact, which allows minor collisions to be resolved and thus increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further reduces the accuracy of the first planning phase, but also leads to a further performance gain.

The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that, restricted locally to a small neighbourhood around the existing path, plans a new, collision-free motion path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which contain several narrow passages. To the best of our knowledge, a collection of comparably complex benchmarks is not publicly available, and we also found no description of comparably complex benchmarks in the motion planning literature.
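As a minimal illustration of the BVH-based collision tests discussed in the first part, the sketch below implements an axis-aligned bounding-box overlap test and a recursive test between two BVH nodes; tree construction, primitive-level tests and all parallelization details (CPU cores, CUDA kernels) are omitted, and the data structures are illustrative rather than those used in the thesis.

```python
# AABB overlap test plus a recursive broad test between two BVH nodes.
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class AABB:
    lo: Vec3
    hi: Vec3

    def overlaps(self, other: "AABB") -> bool:
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i] for i in range(3))

@dataclass
class BVHNode:
    box: AABB
    left: Optional["BVHNode"] = None     # leaf if both children are None
    right: Optional["BVHNode"] = None

def bvh_collide(a: BVHNode, b: BVHNode) -> bool:
    """Return True if the bounding volumes suggest a possible collision."""
    if not a.box.overlaps(b.box):
        return False
    if a.left is None and b.left is None:
        return True                      # two leaves overlap: report (or test primitives here)
    if b.left is None or a.left is not None:
        # Tree a is internal here: descend into its children.
        return bvh_collide(a.left, b) or bvh_collide(a.right, b)
    # Otherwise a is a leaf and b is internal: descend into tree b.
    return bvh_collide(a, b.left) or bvh_collide(a, b.right)

# Tiny usage example with hand-built two-leaf trees:
leaf = lambda lo, hi: BVHNode(AABB(lo, hi))
tree_a = BVHNode(AABB((0, 0, 0), (2, 2, 2)), leaf((0, 0, 0), (1, 1, 1)), leaf((1, 1, 1), (2, 2, 2)))
tree_b = BVHNode(AABB((1.5, 1.5, 1.5), (3, 3, 3)), leaf((1.5, 1.5, 1.5), (2, 2, 2)), leaf((2, 2, 2), (3, 3, 3)))
print(bvh_collide(tree_a, tree_b))       # True: the upper leaves overlap
```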
Abstract:
After the 2008 financial crisis, the credit default swap (CDS), a product of financial innovation, was widely blamed as the main cause of the crisis. The CDS is a type of over-the-counter (OTC) derivative. Before the crisis, trading of CDSs was very popular among financial institutions, but at the same time, excessive speculative CDS transactions in a legal environment of scant regulation accumulated huge risks in the financial system. This dissertation is divided into three parts. In Part I, we discuss the basics of CDSs and the development of their market, and then analyze in detail, on the basis of economic studies, the roles CDSs played in the crisis. We argue that CDSs not only promoted the eruption of the crisis in 2007 but also exacerbated it in 2008. In Part II, we ask what the legal origins of the crisis were in relation to CDSs, as we believe that financial instruments can only function, for good or ill, within a certain legal and institutional environment. After an in-depth inquiry, we observe that at least three traditional legal doctrines were eroded or circumvented by OTC derivatives. We argue that the malfunction of these doctrines, on the one hand, facilitated the proliferation of speculative CDS transactions and, on the other hand, eroded the original legal mechanisms of risk control. As a consequence, the 2008 crisis could escalate rapidly into a global financial tsunami that was beyond the control of regulators. In Part III, we focus on the European Union's regulatory reform of the OTC derivatives market. Specifically, the EU introduced a mandatory central counterparty clearing obligation for qualified OTC derivatives and required that all OTC derivatives be reported to a trade repository. It is notable that the EU's approach to re-regulating the derivatives market differs from traditional administrative regulation, aiming instead at constructing a new market infrastructure for OTC derivatives.
Abstract:
In the present work, neuroglobin (Ngb), an evolutionarily old respiratory protein conserved in metazoans, was investigated functionally. Using the inducible Tet-on/Tet-off system, Ngb was ectopically overexpressed in the murine liver and brain. The transcriptomes of liver and brain regions of Ngb-transgenic mice were analysed with microarrays and RNA-Seq in comparison to the wild type in order to determine the effects of Ngb overexpression. The transcriptome analysis in liver and brain revealed only a small number of differentially regulated genes and pathways after Ngb overexpression. Ngb-transgenic mice were exposed to CCl4-induced ROS stress and liver function was examined. In addition, primary hepatocyte cultures were established and extrinsic apoptosis was induced in them in vitro. The stress experiments showed that (i) Ngb overexpression has no protective effect in the liver in vivo, whereas (ii) in liver cells in vitro Ngb overexpression efficiently reduced the activation of the apoptotic cascade. A protective effect of Ngb presumably depends on the tissue considered and the stressor used, and is not a general, selected function of the protein.

Furthermore, an Ngb knockout mouse line with a LacZ knock-in genotype was established. The KO mice showed no obvious phenotype in their development, reproduction or retinal function. Using the LacZ knock-in construct, controversially discussed sites of Ngb expression in the adult mouse brain (hippocampus, cortex and cerebellum) as well as in the testes were confirmed experimentally. In parallel, publicly available RNA-Seq data sets were analysed in order to characterize regional Ngb expression systematically, without antibody-associated specificity problems. A basal Ngb expression (RPKM ~1-5) was found in the hippocampus, cortex and cerebellum, as well as in the retina and testes. A 20- to 40-fold higher, strong expression (RPKM ~160) was detected in the hypothalamus and the brainstem. The "digital" expression analysis was confirmed by qRT-PCR and Western blot. This expression profile of Ngb in the mouse points to a particular functional importance of Ngb in the hypothalamus. A function of Ngb in the oxygen supply of the retina and a general function of Ngb in the protection of neurons are less compatible with the observed expression spectrum.
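For reference, the sketch below shows the standard RPKM normalization behind the expression values quoted above (reads per kilobase of transcript per million mapped reads); the read counts, gene length and library size are invented and serve only to show the formula.

```python
# RPKM = read_count * 1e9 / (gene_length_bp * total_mapped_reads)
def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads per kilobase of transcript per million mapped reads."""
    return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

total_reads = 40_000_000                      # hypothetical library size
samples = {                                   # hypothetical Ngb read counts per region
    "hypothalamus": {"reads": 12_000, "length_bp": 1_900},
    "cortex":       {"reads": 350,    "length_bp": 1_900},
}
for region, d in samples.items():
    print(f"Ngb RPKM in {region}: {rpkm(d['reads'], d['length_bp'], total_reads):.1f}")
```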
Abstract:
In this thesis, we develop high precision tools for the simulation of slepton pair production processes at hadron colliders and apply them to phenomenological studies at the LHC. Our approach is based on the POWHEG method for the matching of next-to-leading order results in perturbation theory to parton showers. We calculate matrix elements for slepton pair production and for the production of a slepton pair in association with a jet perturbatively at next-to-leading order in supersymmetric quantum chromodynamics. Both processes are subsequently implemented in the POWHEG BOX, a publicly available software tool that contains general parts of the POWHEG matching scheme. We investigate phenomenological consequences of our calculations in several setups that respect experimental exclusion limits for supersymmetric particles and provide precise predictions for slepton signatures at the LHC. The inclusion of QCD emissions in the partonic matrix elements allows for an accurate description of hard jets. Interfacing our codes to the multi-purpose Monte-Carlo event generator PYTHIA, we simulate parton showers and slepton decays in fully exclusive events. Advanced kinematical variables and specific search strategies are examined as means for slepton discovery in experimentally challenging setups.
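As a small illustration of the kinematical quantities that can be built from such fully exclusive events, the sketch below computes the invariant mass of a lepton pair from final-state four-momenta; the event record is invented, whereas in practice these momenta would be read from the PYTHIA output.

```python
# Invariant mass of a lepton pair from four-momenta (E, px, py, pz) in GeV.
import math

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-momenta (E, px, py, pz)."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

lep_plus = (208.1, 80.0, 150.0, 120.0)    # hypothetical (nearly massless) lepton
lep_minus = (180.6, -60.0, -110.0, 130.0) # hypothetical (nearly massless) lepton
print(f"m(l+l-) = {invariant_mass(lep_plus, lep_minus):.1f} GeV")
```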
Abstract:
Data gathering, either for event recognition or for monitoring applications, is the primary purpose of sensor network deployments. In many cases, data is acquired periodically and autonomously, and simply logged onto secondary storage (e.g. flash memory), either for delayed offline analysis or for on-demand burst transfer. Moreover, operational data such as connectivity information and node and network state is typically kept as well. Naturally, measurement and/or connectivity logging comes at a cost: the space for doing so is limited. Finding a good representative model for the data and providing clever coding of the information, i.e. data compression, may be a means of making the best use of the available space. In this paper, we explore the design space of data compression for wireless sensor and mesh networks by profiling common, publicly available algorithms. Several goals, such as a low overhead in terms of utilized memory and compression time as well as a decent compression ratio, have to be well balanced in order to find a simple, yet effective compression scheme.
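A minimal sketch of this kind of profiling is given below: it measures compression ratio and time for two common, publicly available algorithms (zlib/DEFLATE and LZMA from the Python standard library) on synthetic sensor-style readings; both the data generator and the choice of algorithms are illustrative, not the ones evaluated in the paper.

```python
# Profile compression ratio and time of two standard-library codecs on synthetic data.
import lzma
import random
import time
import zlib

def synthetic_sensor_log(n_samples=10_000):
    """Slowly varying 16-bit readings, loosely mimicking periodic sensor samples."""
    random.seed(1)
    value, out = 500, bytearray()
    for _ in range(n_samples):
        value = max(0, min(65535, value + random.randint(-3, 3)))
        out += value.to_bytes(2, "big")
    return bytes(out)

def profile(name, compress, data):
    start = time.perf_counter()
    compressed = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name:6s} ratio = {len(data) / len(compressed):5.2f}, time = {elapsed * 1e3:6.1f} ms")

data = synthetic_sensor_log()
profile("zlib", lambda d: zlib.compress(d, level=6), data)
profile("lzma", lambda d: lzma.compress(d, preset=6), data)
```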