900 results for growth-survival trade-off
Abstract:
The thesis' main topic is the conflict between disclosure in financial markets and the firm's need for confidentiality. After an overview of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that, despite the seeming divergence between the informational contents typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits related to the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly consist of the lower cost of capital that the firm obtains as a consequence of greater disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means of protecting the firm's incentives toward the production of new information, and especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder this set of incentives and thus jeopardize the dynamics of innovation production. Within Italian securities regulation, two sets of rules are most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information “necessary to enable investors to make an informed assessment” of the issuer's financial and economic prospects. Both rules impose high disclosure standards and have potentially unlimited scope; yet both provide safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between the public dissemination of information and the firm's need to keep certain data confidential, there are ways to convey information to the market while preserving the firm's need for confidentiality. Such means are insider trading and selective disclosure: both rely on mechanisms whereby the price reacts to new information without any corresponding public release of data. They therefore offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency while preserving the private set of incentives toward innovation.
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of a four-year research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity imposed by the need to keep the physical layer interleaver at a reasonable size, so as to avoid increasing modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), intrinsically offer longer protection against long fading events. Furthermore, being implemented at the upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with service requirements (e.g. latency constraints). Packet coding techniques have been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH, and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC, and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful system design tool that allows the performance of the upper layer decoder to be foreseen. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of packet layer codes is their large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder) and analyze the performance of state-of-the-art LDPCCC when this decoder is adopted. Finally, we propose a design rule which allows performance and latency to be traded off.
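Since the dissertation leans on the notion of ideal MDS codes at the packet layer, a small illustrative calculation may help. The sketch below (a toy computation, not the thesis' analytical framework) evaluates the block-failure probability of an ideal (n, k) MDS packet code over an i.i.d. packet-erasure channel, showing why longer codewords at the same rate survive deep fades better.

```python
# Illustrative sketch, not the thesis' framework: an ideal (n, k) MDS erasure
# code recovers the source block whenever at least k of the n packets arrive,
# so over an i.i.d. erasure channel with loss rate p:
#   P(failure) = sum_{i = n-k+1}^{n} C(n, i) p^i (1-p)^(n-i)
from math import comb

def mds_failure_prob(n: int, k: int, p: float) -> float:
    """Probability that more than n - k of the n packets are erased."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(n - k + 1, n + 1))

if __name__ == "__main__":
    # Same rate 1/2, growing codeword length: the failure probability collapses,
    # which is the packet-layer argument for long codewords against long fades.
    for n, k in [(10, 5), (100, 50), (500, 250)]:
        print(f"n={n:4d}, k={k:4d} -> P(failure) = {mds_failure_prob(n, k, 0.4):.3e}")
```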
Abstract:
Combustion control is one of the key factors for obtaining better performance and lower pollutant emissions in diesel, spark ignition, and HCCI engines. An algorithm that estimates, for example, the mean indicated torque for each cylinder could easily be used in control strategies to balance the cylinders, control cycle-to-cycle variation, or detect misfires. A tool that evaluates the crank angle at which 50% of the Mass Fraction is Burned (MFB50), the net Cumulative Heat Release (CHRNET), or the peak Rate of Heat Release (ROHR) could be used to optimize spark advance or detect knock in gasoline engines, and to optimize the injection pattern in diesel engines. Modern management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed to obtain a torque estimate that is accurate and reliable over time. The aim of this PhD activity was to develop two different algorithms. The first is based on measuring instantaneous engine speed fluctuations; the speed signal is picked up directly from the sensor facing the toothed wheel mounted on the engine for other control purposes, and the fluctuation amplitudes depend on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain; in this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results of the two methodologies are compared in order to evaluate which one is suitable for on-board applications, depending on the accuracy required.
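As a rough illustration of the first algorithm's starting point, the sketch below (function names, wheel layout, and the synthetic signal are assumptions, not the thesis code) reconstructs the instantaneous speed from toothed-wheel pulse timestamps and extracts a per-cylinder fluctuation amplitude as a crude combustion-related index.

```python
# Hedged sketch: instantaneous engine speed from tooth pulse timestamps, then a
# peak-to-peak fluctuation index per firing window. All quantities synthetic.
import numpy as np

def instantaneous_speed(tooth_times: np.ndarray, n_teeth: int) -> np.ndarray:
    """Angular speed [rad/s] between consecutive teeth of an n_teeth wheel."""
    dtheta = 2 * np.pi / n_teeth
    return dtheta / np.diff(tooth_times)

def cylinder_fluctuation(omega: np.ndarray, n_cyl: int) -> np.ndarray:
    """Peak-to-peak speed fluctuation in each of n_cyl equal firing windows
    (assumes omega spans exactly one engine cycle)."""
    return np.array([seg.max() - seg.min() for seg in np.array_split(omega, n_cyl)])

if __name__ == "__main__":
    # Synthetic example: 60-tooth wheel, 4-cylinder 4-stroke cycle (2 revolutions)
    # with a speed ripple at the firing frequency.
    n_teeth, n_cyl = 60, 4
    theta = np.linspace(0, 4 * np.pi, 2 * n_teeth, endpoint=False)
    omega_true = 150 + 5 * np.sin(n_cyl / 2 * theta)   # rad/s, firing-order ripple
    t = np.concatenate(([0.0], np.cumsum((2 * np.pi / n_teeth) / omega_true[:-1])))
    print(cylinder_fluctuation(instantaneous_speed(t, n_teeth), n_cyl))
```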
Abstract:
Images of a scene, static or dynamic, are generally acquired at different epochs and from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused together to provide a unique, consistent representation of the whole scene, even recovering the third dimension, thus permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on the use of pattern recognition techniques to match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model, and/or the scene model, this information can be used to estimate global or local geometric mapping functions between different images or different parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate the information coming from the different sources accordingly, so as to build a wider or even augmented representation of the scene. Owing to their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays attracting increasing interest from the scientific and industrial community. Depending on the application domain, the accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and a trade-off among them generally has to be reached. Moreover, on-line performance is desirable in order to guarantee direct interaction of the vision device with human actors or control systems. This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach has been motivated by portability to very different domains, a highly desirable property. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while manually moving the microscope holder. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
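For concreteness, here is a minimal version of the generic feature-based registration step described above, written with OpenCV (an illustrative pipeline, not the registration approach developed in the thesis):

```python
# Minimal feature-based registration sketch (OpenCV): match keypoints between
# two views and robustly estimate a global homography with RANSAC, which can
# then warp one image into the other's frame (e.g. for mosaicing).
import cv2
import numpy as np

def register_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Return the 3x3 homography mapping img_a coordinates into img_b."""
    orb = cv2.ORB_create(2000)                       # detector/descriptor
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust fit
    return H
```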
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on how nanoscale events are discovered, monitored, and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications for ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable, and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows the sensors to be organized into arrays, increasing the capability of the platform in terms of the number of acquired data points, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single ion channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
Abstract:
This thesis deals with the optimization of the power components of a flyback converter in order to maximize the efficiency of a charger for lead-acid batteries. Losses are studied within a design space defined by two free variables (turns ratio and current ripple index). Maps of the estimated dissipation from all loss contributions allow the optimal design point to be identified and the trade-offs to be evaluated correctly. The analysis led to the design of a demonstrator on which the correctness of the predicted efficiency value was verified, thereby improving the performance of an existing product.
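As an illustration of the design-space mapping described above, the sketch below scans a grid over the two free variables with deliberately invented placeholder loss terms (the thesis' actual loss models are not reproduced here) and locates the minimum-loss point:

```python
# Illustrative only: 2-D design-space scan over turns ratio n and current
# ripple index r with a dummy loss surface; the real loss contributions
# (conduction, switching, core, ...) would replace the placeholder terms.
import numpy as np

n = np.linspace(0.5, 5.0, 100)[:, None]    # turns ratio
r = np.linspace(0.1, 2.0, 100)[None, :]    # current ripple index

# Placeholder model: opposing terms in each variable so an interior
# optimum exists, mimicking the trade-offs a real loss map exposes.
loss = 2.0 / n + 0.8 * n + 1.5 * r + 0.6 / r   # [W], dummy

i, j = np.unravel_index(np.argmin(loss), loss.shape)
print(f"optimum: n = {n[i, 0]:.2f}, r = {r[0, j]:.2f}, loss = {loss[i, j]:.2f} W")
```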
Abstract:
Photovoltaic (PV) conversion is the direct production of electrical energy from sunlight without the emission of polluting substances. In order to be competitive with other energy sources, the cost of PV technology must be reduced while ensuring adequate conversion efficiencies. These goals have motivated researchers' interest in investigating advanced designs of crystalline silicon (c-Si) solar cells. Since lowering the cost of PV devices involves reducing the volume of semiconductor, an effective light trapping strategy aimed at increasing photon absorption is required. Modeling solar cells by electro-optical numerical simulation is helpful to predict the performance of future generations of devices exhibiting advanced light-trapping schemes and to provide new and more specific guidelines to industry. The approaches to optical simulation commonly adopted for c-Si solar cells may lead to inaccurate results in the case of thin-film and nano-structured solar cells; on the other hand, rigorous solvers of the Maxwell equations are extremely CPU- and memory-intensive. Recently, the RCWA method has gained relevance in the optical simulation of solar cells, providing a good trade-off between accuracy and computational resource requirements. This thesis is a contribution to the numerical simulation of advanced silicon solar cells by means of a state-of-the-art numerical 2-D/3-D device simulator, which has been successfully applied to the simulation of selective-emitter and rear point-contact solar cells, for which a multi-dimensional transport model is required in order to properly account for all competing physical mechanisms. In the second part of the thesis, the optical problem is discussed. Two novel and computationally efficient RCWA implementations for 2-D simulation domains, as well as a third RCWA for 3-D structures based on an eigenvalue calculation approach, are presented. The proposed simulators have been validated in terms of accuracy, numerical convergence, computation time, and correctness of results.
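For orientation, the core step of RCWA in its textbook TE-polarization form (a standard formulation, stated here as background rather than quoted from the thesis): within a layer of period \(\Lambda\), the field is expanded into Floquet harmonics,

\[
E_y(x,z) = \sum_{m} S_m(z)\, e^{\,j k_{x,m} x}, \qquad k_{x,m} = k_{x,0} + \frac{2\pi m}{\Lambda},
\]

which turns the wave equation into the algebraic system

\[
\frac{d^2 \mathbf{S}}{dz^2} = \left( \mathbf{K}_x^2 - k_0^2\, \mathbf{E} \right) \mathbf{S},
\]

where \(\mathbf{K}_x = \operatorname{diag}(k_{x,m})\) and \(\mathbf{E}\) is the Toeplitz matrix of the Fourier coefficients of the layer permittivity. Diagonalizing the matrix on the right yields the modes of each layer, and the number of retained harmonics directly controls the accuracy/cost trade-off mentioned above.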
Abstract:
The neurotrophin BDNF is a protective factor that promotes the growth, differentiation, and survival of neuronal cells. Besides its neuronal expression, BDNF is also expressed peripherally, including in endothelial cells, where it stimulates angiogenesis and promotes endothelial cell survival. Regulation of BDNF expression under pathological conditions such as epilepsy, Alzheimer's disease, Parkinson's disease, depression, and ischemia has been described repeatedly. Literature data show altered BDNF expression under pathological conditions concurrent with elevated levels of tumor necrosis factor (TNF-α) or activation of protein kinase C (PKC). Whether an elevated TNF-α level or PKC activation is the cause of the altered BDNF expression was previously unknown. The present work shows that both TNF-α and PKC activation reduce BDNF expression in peripheral endothelial cells in a concentration- and time-dependent manner. In the case of TNF-α, this reduction is mediated by TNF-α receptor 1 (TNFR1) and regulated at the transcriptional level. It was further shown that BDNF stimulates the angiogenic activity of human umbilical vein endothelial cells (HUVEC) in a manner dependent on the BDNF receptors TrkB and p75NTR, whereas TNF-α reduces angiogenesis in HUVEC. For the regulation of BDNF expression by the PKC-activating phorbol ester phorbol 12-myristate 13-acetate (PMA), an involvement of the PKC isoform δ was demonstrated: the reduction of BDNF expression upon PKC activation could be abolished by inhibitors of PKC δ. PMA had no destabilizing effect on BDNF mRNA; here too, BDNF is regulated by PMA at the transcriptional level. Furthermore, pharmacological regulation of BDNF expression had not previously been examined in detail. For the first time, an effect of the β1-adrenoceptor blocker nebivolol on BDNF mRNA expression was observed: nebivolol increases BDNF expression in cerebral endothelial cells in vitro and in the mouse heart in vivo. This is a substance-specific effect of nebivolol that is NO-independent and not mediated by the β3-adrenoceptor. Part of the clinically observed protective effects of nebivolol may therefore be attributable to increased BDNF expression.
Abstract:
Spatial invariance of the parameters of a rainfall-runoff model can be a practical and valid solution when estimating the water resource availability of an area. Hydrological simulation is a widely adopted tool, but it has some critical aspects, above all the need to calibrate the model parameters. If spatially distributed models are chosen, useful because they account for the spatial variability of the phenomena contributing to runoff formation, the problem usually lies in the high number of parameters involved. By assuming that some of them are homogeneous in space, i.e. take the same value across different catchments, the total number of parameters requiring calibration can be reduced. This assumption is verified on a statistical basis by estimating the parametric uncertainty with an MCMC algorithm; the parameter distributions turn out to be compatible, to varying degrees, across the catchments considered. When the goal is estimating the water resource availability of ungauged catchments, the parameter-invariance hypothesis becomes even more important, since this problem is usually tackled through lengthy parameter regionalization analyses. Here, instead, a cross-calibration procedure is proposed that exploits the information coming from the gauged catchments most similar to the site of interest. The aim is to strike a fair compromise between the disadvantage of assuming the model parameters constant across the gauged catchments and the benefit of introducing, step by step, new and important information from the gauged catchments involved in the analysis. The results demonstrate the usefulness of the proposed methodology: in validation on the catchment treated as ungauged, good agreement between the simulated and observed streamflow series can be achieved.
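To make the parametric-uncertainty step concrete, here is a minimal random-walk Metropolis sampler for a deliberately simplified runoff model (the model, prior, and error assumptions are placeholders, not those of the thesis):

```python
# Hedged sketch: random-walk Metropolis sampling of the posterior of a
# rainfall-runoff parameter, assuming Gaussian errors between observed and
# simulated streamflow. model() is a placeholder linear reservoir.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, rain):
    """Placeholder runoff model: runoff coefficient theta[0] times rainfall."""
    return theta[0] * rain

def log_post(theta, rain, q_obs, sigma=1.0):
    if not (0.0 < theta[0] < 1.0):            # flat prior on (0, 1)
        return -np.inf
    resid = q_obs - model(theta, rain)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(rain, q_obs, n_iter=5000, step=0.05):
    theta = np.array([0.5])
    lp = log_post(theta, rain, q_obs)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop, rain, q_obs)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

rain = rng.gamma(2.0, 5.0, size=200)
q_obs = 0.3 * rain + rng.normal(0, 1.0, size=200)  # synthetic "observations"
print(metropolis(rain, q_obs).mean(axis=0))        # posterior mean near 0.3
```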
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, classifying the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thereby prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, from which a mixed algorithm able to successfully reduce the power consumption is derived. The analysis then moves from compression on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
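As a concrete anchor for the CS discussion, here is a minimal single-node sketch (illustrative only; the thesis' mixed algorithm and ensemble variants are not reproduced): a k-sparse signal is recovered from m << n random projections, with Orthogonal Matching Pursuit as the sparse solver.

```python
# Minimal CS sketch: a k-sparse length-n signal is sensed as y = Phi @ x with
# m << n random projections (what a node would transmit) and reconstructed at
# the sink with a greedy sparse solver.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                        # length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                                 # compressed measurements at the node

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_                           # reconstruction at the sink
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```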
Abstract:
This work analyzes the relationship between political institutions and economic reforms. The prevailing view assumes that a high degree of political control and constraint, for instance through a federal system or a second parliamentary chamber, negatively affects a country's capacity for reform. This assumption rests on the conclusions of George Tsebelis' veto player theory. Using the course of reform in post-communist states, however, this work shows that the relationship between political constraints and reform is not linear but quadratic: a middle path between a freely acting executive and a system of restrictive checks and balances thus guarantees the greatest possible progress in economic reforms from a planned economy to a market economy.
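In regression terms, the quadratic claim can be written as an inverted-U specification (an illustrative formalization of the stated result, not necessarily the exact model of the work): with reform progress \(R\) and an index of political constraints \(C\),

\[
R = \beta_0 + \beta_1 C + \beta_2 C^2 + \varepsilon, \qquad \beta_1 > 0,\ \beta_2 < 0,
\]

so that reform is maximized at an intermediate constraint level \(C^{*} = -\beta_1 / (2\beta_2)\) rather than at either extreme.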
Abstract:
The aim of the thesis is to propose Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures, considering that the former is widely used and represents a classical approach in multidimensional item response analysis, while the latter is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to the sample size, due to the model complexity and the high number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well reproduced. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
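As background, the graded response model underlying these multidimensional extensions, in its standard Samejima form (stated for orientation, not quoted from the thesis): for respondent \(i\) and item \(j\) with ordered categories \(c = 1, \dots, C_j\),

\[
P\left(Y_{ij} \ge c \mid \theta_i\right) = \frac{1}{1 + \exp\left[-a_j\left(\theta_i - b_{jc}\right)\right]}, \qquad
P\left(Y_{ij} = c\right) = P\left(Y_{ij} \ge c\right) - P\left(Y_{ij} \ge c + 1\right),
\]

with discrimination \(a_j\) and ordered thresholds \(b_{j1} < \cdots < b_{jC_j}\). The multiunidimensional and additive structures studied in the thesis differ in how the scalar term \(a_j \theta_i\) is replaced by a linear combination \(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i\) of the correlated latent traits.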
Abstract:
Group B Streptococcus (GBS), in its transition from commensal to pathogen, encounters diverse host environments and must therefore coordinately control its transcriptional responses to these changes. This work aimed at better understanding the role of two-component signal transduction systems (TCS) in GBS pathophysiology through a systematic screening procedure. We first performed a complete inventory and sensory-mechanism classification of all putative GBS TCS by genomic analysis. Five TCS were further investigated through the generation of knock-out strains, and in vitro transcriptome analysis identified the genes regulated by these systems, ranging from 0.1% to 3% of the genome. Interestingly, two sugar phosphotransferase systems appeared differentially regulated in the knock-out mutant of TCS-16, suggesting an involvement in monitoring carbon source availability. High-throughput analysis of bacterial growth on different carbon sources showed that TCS-16 is necessary for the growth of GBS on fructose-6-phosphate. Additional transcriptional analysis provided further evidence for a stimulus-response circuit in which extracellular fructose-6-phosphate leads to autoinduction of TCS-16, with concomitant dramatic up-regulation of the adjacent operon encoding a phosphotransferase system. The TCS-16-deficient strain exhibited decreased persistence in a model of vaginal colonization and impaired growth/survival in the presence of vaginal mucoid components. All mutant strains were also characterized in a murine model of systemic infection, and inactivation of TCS-17 (also known as RgfAC) resulted in hypervirulence. Our data suggest a role for the previously unknown TCS-16, here named FspSR, in bacterial fitness and carbon metabolism during host colonization, and also provide experimental evidence for the involvement of TCS-17/RgfAC in virulence.
Abstract:
This thesis develops a reference framework for the combined use of two impact assessment methodologies, LCA and RA, for emerging technologies. The originality of the study lies in having both proposed and applied the framework to a case study, namely an innovative refrigeration technology based on nanofluids (NF) developed by partners of the European project Nanohex, who contributed to the studies above all with the inventory of the necessary data. The complexity of the study lies both in the difficult integration of two methodologies devised for different purposes and structured to serve those purposes, and in the application sector, which, although expanding rapidly, suffers from major information gaps regarding production processes and the behaviour of the substances involved. The framework was applied to the production of alumina nanofluid via two production routes (single-stage and two-stage) in order to assess and compare the impacts on human health and the environment. It should be noted that the LCA was quantitative but did not consider the impacts of nanomaterials in the toxicity categories. The RA, by contrast, was a qualitative study, owing to the above-mentioned lack of toxicological and exposure parameters, and focused on workers; it was therefore assumed that releases into the environment during the production phase are negligible. For the qualitative RA a dedicated software tool, Stoffenmanager-Nano, was used, which makes it possible to prioritize the risks associated with inhalation in the workplace. The framework consists of a four-phase procedure: definition of the technological system, data collection, risk assessment and impact quantification, and interpretation.
Abstract:
This dissertation deals with the design and characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. The design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs), in order to prove the feasibility of an approach that moves the design of Photonic Integrated Circuits (PICs) toward the system level. CMOS compatibility and large-scale integration make SOI one of the most promising materials for realizing PICs. The concepts of the generic foundry and of BB-based circuit simulation for design are emerging as a solution to reduce costs and increase circuit complexity. To validate the BB-based approach, some of the most important BBs are developed first. A novel tunable coupler is also presented and demonstrated to be a valuable alternative to known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth, and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers; some sensitivity to fabrication tolerances and spurious effects is, however, observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects confirm its applicability to the realization of flexible optical systems. The simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
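Since both PICs are built around ring resonators, the textbook all-pass RR response may help fix ideas. The sketch below uses the standard formula (not one of the thesis' BB circuit models) to show how the self-coupling coefficient r and the single-pass amplitude a set the resonance extinction and linewidth:

```python
# Textbook all-pass ring resonator: through-port power transmission versus
# round-trip phase, for self-coupling r and single-pass amplitude a.
import numpy as np

def ring_transmission(phi: np.ndarray, r: float, a: float) -> np.ndarray:
    """T = (a^2 - 2*r*a*cos(phi) + r^2) / (1 - 2*r*a*cos(phi) + (r*a)^2)."""
    num = a**2 - 2 * r * a * np.cos(phi) + r**2
    den = 1 - 2 * r * a * np.cos(phi) + (r * a) ** 2
    return num / den

phi = np.linspace(-np.pi, np.pi, 2001)       # round-trip phase detuning
for r in (0.90, 0.95, 0.99):                 # self-coupling coefficient
    T = ring_transmission(phi, r=r, a=0.97)  # a: single-pass amplitude
    print(f"r = {r}: minimum transmission {T.min():.3f} (resonance extinction)")
```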