32 results for bandwidth pricing
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown steadily since it first came into use in the early 1970s. In the clinical environment, this imaging system has emerged as a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure that the benefit-risk balance works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.
If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases that require several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer remaining life expectancy than adults. For this population, the risk of developing a radiation-induced cancer, whose latency period can exceed twenty years, is significantly higher than for adults. Assuming that each examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to avoid irradiating the patient unnecessarily. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.
The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce as much as possible the doses delivered during CT examinations of children and young adults while maintaining an image quality sufficient for diagnosis, in order to propose optimized protocols.
Optimizing a CT protocol requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT dose indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical" approach, is based on physical metrics (SD, MTF, NPS, etc.) measured under well-defined conditions, most often on phantoms. Although this approach is limited because it does not take the radiologists' perception into account, it enables certain image properties to be characterized quickly and simply. The second approach, called the "clinical" approach, is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step are asked to score the quality of these structures from a diagnostic point of view using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it has the advantage of being very close to the radiologists' practice and can be considered the reference method.
Among the main results of this work, it was shown that the statistical iterative reconstruction algorithms studied in the clinic (ASIR and VEO) have a strong potential to reduce CT dose (by up to 90%). However, by their very mechanism, they modify the appearance of the image through a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the frequency spectrum of the noise, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also shows that these new reconstruction techniques cannot be integrated into clinical practice simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work, as well as the tools developed, can also guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
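The "physical" approach above rests on metrics such as the noise standard deviation (SD) and the noise power spectrum (NPS) measured in uniform phantom regions. The sketch below is a minimal illustration of how these two metrics are commonly computed from homogeneous ROIs; the function names and normalization details are assumptions for illustration, not the tools actually used in the thesis.

```python
import numpy as np

def noise_sd(roi):
    """Noise standard deviation (the 'SD' metric) in a homogeneous phantom ROI."""
    return float(np.std(roi, ddof=1))

def noise_power_spectrum(rois, pixel_size_mm):
    """Illustrative 2D NPS estimate: averaged periodogram of mean-subtracted ROIs.

    rois: list of square, equally sized 2D arrays from a uniform phantom region.
    Returns (nps, freqs) with spatial frequencies in 1/mm along each axis.
    """
    n = rois[0].shape[0]
    spectra = []
    for roi in rois:
        detrended = roi - roi.mean()                  # remove the mean (DC) component
        dft = np.fft.fftshift(np.fft.fft2(detrended))
        # common normalization: (pixel area / number of pixels) * |DFT|^2
        spectra.append(np.abs(dft) ** 2 * pixel_size_mm ** 2 / (n * n))
    nps = np.mean(spectra, axis=0)
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size_mm))
    return nps, freqs
```

A texture change like the one reported for iterative reconstructions typically appears as a shift of the NPS toward lower spatial frequencies even when the SD alone looks unchanged, which is why the two approaches were compared.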
Abstract:
It is well documented that reducing blood pressure (BP) in hypertensive individuals reduces the risk of cardiovascular (CV) events. Despite this, many patients with hypertension remain untreated or inadequately treated, and fail to reach the recommended BP goals. Suboptimal BP control, whilst arising from multiple causes, is often due to poor patient compliance and/or persistence, and results in a significant health and economic burden on society. The use of fixed-dose combinations (FDCs) for the treatment of hypertension has the potential to increase patient compliance and persistence. When compared with antihypertensive monotherapies, FDCs may also offer equivalent or better efficacy, and the same or improved tolerability. As a result, FDCs have the potential to reduce both the CV event rates and the non-drug healthcare costs associated with hypertension. When FDCs are adopted for the treatment of hypertension, issues relating to copayment, formulary restrictions and therapeutic reference pricing must be addressed.
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks. Typical P2P applications are video streaming, file sharing, etc. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file-sharing application, while the user is downloading a file, the P2P application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces the communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment. Each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that allows us to obtain an approximate view of the system or part of it. This approximate view includes the topology and the reliability of components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of quotas of messages reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take into account the memory available at processes by limiting the view they have to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
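The broadcast layer described above routes messages over tree overlays chosen to maximize reliability, expressed through the reliability of the selected paths. As a minimal sketch of that idea only (ignoring the message quotas and partial views the thesis also models), the most reliable path to every peer can be found by running Dijkstra's algorithm on -log(link reliability); all names below are illustrative.

```python
import heapq
import math

def max_reliability_tree(links, source):
    """Build a broadcast tree maximizing the path reliability to each peer.

    links: dict mapping node -> list of (neighbor, link_reliability in (0, 1]).
    Maximizing a product of link reliabilities is equivalent to minimizing the
    sum of -log(reliability), so Dijkstra keeps the most reliable path per node.
    Returns (parent, reliability) dicts describing the resulting tree overlay.
    """
    cost = {source: 0.0}          # -log of the best path reliability so far
    parent = {source: None}
    heap = [(0.0, source)]
    finished = set()
    while heap:
        c, node = heapq.heappop(heap)
        if node in finished:
            continue
        finished.add(node)
        for neighbor, rel in links.get(node, []):
            new_cost = c - math.log(rel)
            if neighbor not in cost or new_cost < cost[neighbor]:
                cost[neighbor] = new_cost
                parent[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    reliability = {n: math.exp(-c) for n, c in cost.items()}
    return parent, reliability
```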
Abstract:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p.37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly through containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows one to learn about the relative importance of the two key margins of job search: reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we assess the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Appropriately designed social norm information can also generate substantial electricity savings. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
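The first chapter's identification rests on a difference-in-differences comparison around a quasi-experimental change in maximum benefit duration. A minimal 2x2 version of that estimator, stripped of the controls and specification choices an actual evaluation would use, might look like the sketch below; the variable names are illustrative.

```python
import numpy as np

def did_estimate(y, treated, post):
    """2x2 difference-in-differences estimate from individual-level data.

    y: outcome (e.g. post-unemployment earnings); treated, post: indicator arrays.
    Returns (change for the treated group) minus (change for the control group).
    """
    y = np.asarray(y, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    post = np.asarray(post, dtype=bool)
    delta_treated = y[treated & post].mean() - y[treated & ~post].mean()
    delta_control = y[~treated & post].mean() - y[~treated & ~post].mean()
    return delta_treated - delta_control
```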
Abstract:
Financial markets play an important role in an economy, performing various functions such as mobilizing and pooling savings, producing information about investment opportunities, screening and monitoring investments, implementing corporate governance, and diversifying and managing risk. These functions influence saving rates, investment decisions and technological innovation and, therefore, have important implications for welfare. In my PhD dissertation I examine the interplay of financial and product markets by looking at different channels through which financial markets may influence an economy. My dissertation consists of four chapters. The first chapter is a co-authored work with Martin Strieborny, a PhD student from the University of Lausanne. The second chapter is a co-authored work with Melise Jaud, a PhD student from the Paris School of Economics. The third chapter is co-authored with both Melise Jaud and Martin Strieborny. The last chapter of my PhD dissertation is a single-author paper. Chapter 1 of my PhD thesis analyzes the effect of financial development on the growth of contract-intensive industries. These industries intensively use intermediate inputs that can neither be sold on an organized exchange nor are reference-priced (Levchenko, 2007; Nunn, 2007). A typical example of a contract-intensive industry would be one where an upstream supplier has to make investments in order to customize a product for the needs of a downstream buyer. After the investment is made and the product is adjusted, the buyer may refuse to meet a commitment and trigger ex post renegotiation. Since the product is customized to the buyer's needs, the supplier cannot sell the product to a different buyer at the original price. This is referred to in the literature as the hold-up problem. As a consequence, individually rational suppliers will underinvest in relationship-specific assets, hurting the downstream firms, with negative consequences for aggregate growth. The standard way to mitigate the hold-up problem is to write a binding contract and to rely on legal enforcement by the state. However, even the most effective contract enforcement might fail to protect the supplier in tough times when the buyer lacks a reliable source of external financing. This suggests a potential role of financial intermediaries, banks in particular, in mitigating the incomplete contract problem. First, financial products like letters of credit and letters of guarantee can substantially decrease the risk and transaction costs of the parties. Second, a bank loan can serve as a signal of a buyer's true financial situation: an upstream firm will be more willing to undertake relationship-specific investment knowing that the business partner is creditworthy and will abstain from myopic behavior (Fama, 1985; von Thadden, 1995). Therefore, a well-developed financial (especially banking) system should disproportionately benefit contract-intensive industries. The empirical test confirms this hypothesis. Indeed, contract-intensive industries seem to grow faster in countries with a well-developed financial system. Furthermore, this effect comes from a more developed banking sector rather than from a deeper stock market. These results are reaffirmed by examining the effect of US bank deregulation on the growth of contract-intensive industries in different states.
Beyond an overall pro-growth effect, bank deregulation seems to disproportionately benefit the industries requiring relationship-specific investments from their suppliers. Chapter 2 of my PhD focuses on the role of the financial sector in promoting exports of developing countries. In particular, it investigates how credit constraints affect the ability of firms operating in agri-food sectors of developing countries to keep exporting to foreign markets. Trade in high-value agri-food products from developing countries has expanded enormously over the last two decades, offering opportunities for development. However, trade in agri-food products is governed by a growing array of standards. Sanitary and phytosanitary standards (SPS) and technical regulations impose additional sunk, fixed and operating costs over firms' export life. Such costs may be detrimental to firms' survival, "pricing out" producers that cannot comply. The existence of these costs suggests a potential role of credit constraints in shaping the duration of trade relationships in foreign markets. A well-developed financial system provides exporters with the funds necessary to adjust production processes in order to meet quality and quantity requirements in foreign markets and to maintain long-standing trade relationships. The products with higher financing needs should benefit the most from a well-functioning financial system. This differential effect calls for a difference-in-differences approach initially proposed by Rajan and Zingales (1998). As a proxy for the demand for financing of agri-food products, the sanitary risk index developed by Jaud et al. (2009) is used. The empirical literature on standards and norms shows high costs of compliance, both variable and fixed, for high-value food products (Garcia-Martinez and Poole, 2004; Maskus et al., 2005). The sanitary risk index reflects the propensity of products to fail health and safety controls on the European Union (EU) market. Given the high costs of compliance, the sanitary risk index captures the demand for external financing to comply with such regulations. The prediction is empirically tested by examining the export survival of different agri-food products from firms operating in Ghana, Mali, Malawi, Senegal and Tanzania. The results suggest that agri-food products that require more financing to keep up with the food safety regulations of the destination market indeed survive longer in foreign markets when they are exported from countries with better-developed financial markets. Chapter 3 analyzes the link between financial markets and the efficiency of resource allocation in an economy. Producing and exporting products inconsistent with a country's factor endowments constitutes a serious misallocation of funds, which undermines the competitiveness of the economy and inhibits its long-term growth. In this chapter, inefficient exporting patterns are analyzed through the lens of agency theories from the corporate finance literature. Managers may pursue projects with negative net present values because their perquisites or even their jobs might depend on them. Exporting activities are particularly prone to this problem. Business related to foreign markets involves both high levels of additional spending and strong incentives for managers to overinvest. Rational managers might have incentives to push for exports that use the country's scarce factors, which is suboptimal from a social point of view. Export subsidies might further skew the incentives towards inefficient exporting.
Management can divert the export subsidies into investments promoting inefficient exporting. The corporate finance literature stresses the disciplining role of outside debt in counteracting the internal pressures to divert such "free cash flow" into unprofitable investments. Managers can lose both their reputation and the control of "their" firm if the unpaid external debt triggers a bankruptcy procedure. The threat of possible failure to satisfy debt service payments pushes managers toward an efficient use of available resources (Jensen, 1986; Stulz, 1990; Hart and Moore, 1995). The main sources of debt financing in most countries are banks. The disciplining role of banks might be especially important in countries suffering from insufficient judicial quality. Banks, in pursuing their rights, rely on comparatively simple legal interventions that can be implemented even by mediocre courts. In addition to their disciplining role, banks can promote efficient exporting patterns in a more direct way by relaxing producers' credit constraints through screening, identifying and investing in the most profitable projects. Therefore, a well-developed domestic financial system, and the banking system in particular, would help push a country's exports towards products congruent with its comparative advantage. This prediction is tested by looking at the survival of different product categories exported to the US market. Products are identified according to the Euclidean distance between their revealed factor intensity and the country's factor endowments. The results suggest that products suffering from a comparative disadvantage (labour-intensive products from capital-abundant countries) have shorter survival on the competitive US market. This pattern is stronger if the exporting country has a well-developed banking system. Thus, a strong banking sector promotes exports consistent with a country's comparative advantage. Chapter 4 of my PhD thesis further examines the role of financial markets in fostering efficient resource allocation in an economy. In particular, the allocative efficiency hypothesis is investigated in the context of equity market liberalization. Many empirical studies document a positive and significant effect of financial liberalization on growth (Levchenko et al., 2009; Quinn and Toyoda, 2009; Bekaert et al., 2005). However, the decrease in the cost of capital and the associated growth in investment appear rather modest in comparison to the large GDP growth effect (Bekaert and Harvey, 2005; Henry, 2000, 2003). Therefore, financial liberalization may have a positive impact on growth through its effect on the allocation of funds across firms and sectors. Free access to international capital markets allows the largest and most profitable domestic firms to borrow funds in foreign markets (Rajan and Zingales, 2003). As domestic banks lose some of their best clients, they reoptimize their lending practices, seeking new clients among smaller and younger industrial firms. These firms are likely to be riskier than large and established companies. Screening of customers becomes prevalent as the return to screening rises. Banks, ceteris paribus, tend to focus on firms operating in comparative-advantage sectors because they are better risks. Firms in comparative-disadvantage sectors, finding it harder to finance their entry into or survival in export markets, either exit or refrain from entering export markets.
On aggregate, one should therefore expect to see less entry, more exit, and shorter survival on export markets in those sectors after financial liberalization. The paper investigates the effect of financial liberalization on a country's export pattern by comparing the dynamics of entry and exit of different products in a country's export portfolio before and after financial liberalization. The results suggest that products that lie far from the country's comparative advantage set tend to disappear relatively faster from the country's export portfolio following the liberalization of financial markets. In other words, financial liberalization tends to rebalance the composition of a country's export portfolio towards products that intensively use the economy's abundant factors.
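Chapter 3 classifies exported products by the Euclidean distance between their revealed factor intensity and the exporting country's factor endowments. A toy version of that distance measure, with hypothetical two-factor vectors and none of the normalizations an actual implementation would need, is sketched below.

```python
import numpy as np

def comparative_advantage_distance(product_factor_intensity, country_endowments):
    """Euclidean distance between a product's revealed factor intensity and a
    country's factor endowments; larger distances flag products that are less
    consistent with the country's comparative advantage."""
    x = np.asarray(product_factor_intensity, dtype=float)
    e = np.asarray(country_endowments, dtype=float)
    return float(np.linalg.norm(x - e))

# Hypothetical (capital intensity, skill intensity) vectors, scaled to [0, 1]
print(comparative_advantage_distance([0.8, 0.6], [0.3, 0.4]))
```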
Abstract:
Objective: Although 24-hour arterial blood pressure can be monitored in a free-moving animal using pressure telemetric transmitters, mostly from Data Science International (DSI), accurate monitoring of 24-hour mouse left ventricular pressure (LVP) is not available because of the insufficient frequency response of these devices to high-frequency signals such as the maximum derivative of mouse LVP (LVdP/dtmax and LVdP/dtmin). The aim of the study was to develop a tiny implantable flow-through LVP telemetric transmitter for small rodents, which could potentially be adapted for accurate 24-hour BP and LVP monitoring in humans. Design and Method: The mouse LVP telemetric transmitter (diameter ~12 mm, weight ~0.4 g) was assembled from a pressure sensor, a passive RF telemetry chip, and a 1.2F polyurethane (PU) catheter tip. The device was developed in two configurations and compared with the existing DSI system: (a) prototype-I, a new flow-through pressure sensor with a wire link, and (b) prototype-II, prototype-I plus a telemetry chip and its receiver. All the devices were applied in C57BL/6J mice. Data are mean ± SEM. Results: A high-frequency-response (>100 Hz) PU heparin-saline-filled catheter was inserted into the mouse left ventricle via the right carotid artery and implanted; LV systolic pressure (LVSP), LVdP/dtmax, and LVdP/dtmin were recorded on days 2, 3, 4, 5, and 7 in conscious mice. The hemodynamic values were consistent and comparable (139 ± 4 mmHg, 16634 ± 319 and -12283 ± 184 mmHg/s, n = 5) to those recorded by a validated Pebax03 catheter (138 ± 2 mmHg, 16045 ± 443 and -12112 ± 357 mmHg/s, n = 9). Similar LV hemodynamic values were obtained with prototype-I. The same LVP waveforms were recorded synchronously by Notocord wired software and Senimed wireless software through prototype-II in anesthetized mice. Conclusion: An implantable flow-through LVP transmitter (prototype-I) was developed for accurate LVP assessment in conscious mice. Prototype-II needs further improvement in data transmission bandwidth and signal coupling distance to its receiver for accurate monitoring of LVP in a free-moving mouse.
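The hemodynamic endpoints above (LVdP/dtmax and LVdP/dtmin) are first derivatives of the pressure trace, which is why the catheter-transmitter chain needs a frequency response well above 100 Hz. A minimal sketch of how these extrema can be obtained from a sampled LVP waveform follows; the synthetic beat and sampling rate are assumptions for illustration, not the study's acquisition chain.

```python
import numpy as np

def lv_dpdt_extrema(lvp_mmhg, fs_hz):
    """Maximum and minimum first derivative (LVdP/dtmax, LVdP/dtmin) of a
    sampled left ventricular pressure trace, in mmHg/s."""
    dpdt = np.gradient(np.asarray(lvp_mmhg, dtype=float), 1.0 / fs_hz)
    return float(dpdt.max()), float(dpdt.min())

# Hypothetical synthetic 100 ms beat sampled at 2 kHz, peaking near 140 mmHg
t = np.arange(0.0, 0.1, 1.0 / 2000.0)
lvp = 140.0 * np.sin(np.pi * t / 0.1) ** 2
print(lv_dpdt_extrema(lvp, 2000))
```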
Abstract:
Throughout my thesis, I elaborate on how real and financing frictions affect corporate decision making under uncertainty, and I explore how firms time their investment and financing decisions given such frictions. While the macroeconomics literature has focused on the impact of real frictions on investment decisions, assuming all-equity-financed firms, the financial economics literature has mainly focused on the study of financing frictions. My thesis therefore assesses the joint interaction of real and financing frictions in firms' dynamic investment and financing decisions. My work provides a rationale for the documented poor empirical performance of neoclassical investment models based on the joint effect of real and financing frictions on investment. A major observation lies in how the infrequency of corporate decisions may affect standard empirical tests. My thesis suggests that the book-to-market sorts commonly used in the empirical asset pricing literature have economic content, as they control for the lumpiness in firms' optimal investment policies. My work also elaborates on the effects of asymmetric information and strategic interaction on firms' investment and financing decisions. I study how firms time their decision to raise public equity when outside investors lack information about their future investment prospects. I derive a real-options model that predicts either cold or hot markets for new stock issues conditional on adverse selection, and I provide a rational approach to jointly study the market timing of corporate decisions and announcement effects in stock returns. My doctoral dissertation therefore contributes to our understanding of how real and financing frictions may bias standard empirical tests, elaborates on how adverse selection may induce hot and cold markets for new issues, and suggests how the underlying economic behaviour of firms may induce alternative patterns in stock prices.
Abstract:
OBJECTIVE. The purpose of this study was to improve the blood-pool signal-to-noise ratio (SNR) and blood-myocardium contrast-to-noise ratio (CNR) of slow-infusion 3-T whole-heart coronary MR angiography (MRA). SUBJECTS AND METHODS. In 2D sensitivity encoding (SENSE), the number of acquired k-space lines is reduced, allowing fewer radiofrequency excitations per cardiac cycle and a longer TR. The former can be exploited for signal enhancement with a higher radiofrequency excitation angle, and the latter leads to noise reduction due to a lower data-sampling bandwidth. Both effects contribute to an SNR gain in coronary MRA when spatial and temporal resolution and acquisition time remain identical. Numeric simulation was performed to select the optimal 2D SENSE pulse sequence parameters and predict the SNR gain. Eleven patients underwent conventional unenhanced and the proposed 2D SENSE contrast-enhanced coronary MRA acquisitions. Blood-pool SNR, blood-myocardium CNR, visible vessel length, vessel sharpness, and number of side branches were evaluated. RESULTS. Consistent with the numeric simulation, using 2D SENSE in contrast-enhanced coronary MRA resulted in significant improvement in aortic blood-pool SNR (unenhanced vs contrast-enhanced, 37.5 ± 14.7 vs 121.3 ± 44.0; p < 0.05) and CNR (14.4 ± 6.9 vs 101.5 ± 40.8; p < 0.05) in the patient sample. A longer length of the left anterior descending coronary artery was visualized, but vessel sharpness, coronary artery coverage, and image quality score were not improved with the proposed approach. CONCLUSION. In combination with contrast administration, 2D SENSE was found effective in improving SNR and CNR in 3-T whole-heart coronary MRA. Further investigation of cardiac motion compensation is necessary to exploit the SNR and CNR advantages and to achieve submillimeter spatial resolution.
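The SNR argument in the methods section is essentially bookkeeping: signal grows with the excitation angle, noise scales with the square root of the readout bandwidth, and parallel imaging costs the usual factor g*sqrt(R). The sketch below encodes only that bookkeeping under a steady-state spoiled gradient-echo (Ernst) signal assumption; the study itself relied on numeric simulation of the actual segmented, contrast-enhanced acquisition, so this is an illustration rather than its model, and all parameter names are assumptions.

```python
import numpy as np

def spgr_signal(flip_deg, tr_s, t1_s):
    """Steady-state spoiled gradient-echo (Ernst) signal, up to a scale factor."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr_s / t1_s)
    return np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1)

def relative_snr(flip_deg, tr_s, t1_s, readout_bw_hz, sense_factor=1, g_factor=1.0):
    """Rough relative SNR: signal over sqrt(readout bandwidth), divided by the
    standard parallel-imaging penalty g * sqrt(R). Only ratios between protocols
    with identical spatial/temporal resolution and scan time are meaningful."""
    return (spgr_signal(flip_deg, tr_s, t1_s)
            / np.sqrt(readout_bw_hz)
            / (g_factor * np.sqrt(sense_factor)))
```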
Abstract:
This PhD thesis addresses the issue of scalable media streaming in large-scale networking environments. Multimedia streaming is one of the largest sinks of network resources, and this trend is still growing, as testified by the success of services like Skype, Netflix, Spotify and Popcorn Time (BitTorrent-based). In traditional client-server solutions, when the number of consumers increases, the server becomes the bottleneck. To overcome this problem, the Content-Delivery Network (CDN) model was invented. In the CDN model, the server copies the media content to CDN servers located at different strategic locations in the network. However, CDNs require heavy infrastructure investment around the world, which is too expensive. Peer-to-peer (P2P) solutions are another way to achieve the same result. These solutions are naturally scalable, since each peer can act as both a receiver and a forwarder. Most of the proposed streaming solutions in P2P networks focus on routing scenarios to achieve scalability. However, these solutions cannot work properly for video-on-demand (VoD) streaming when the resources of the media server are not sufficient. Replication is a solution that can be used in these situations. This thesis specifically provides a family of replication-based media streaming protocols that are scalable, efficient and reliable in P2P networks. First, it provides SCALESTREAM, a replication-based streaming protocol that adaptively replicates media content on different peers to increase the number of consumers that can be served in parallel. The adaptiveness of this solution relies on the fact that it takes into account constraints such as the bandwidth capacity of peers to decide when to add or remove replicas. SCALESTREAM routes media blocks to consumers over a tree topology, assuming a reliable network composed of peers that are homogeneous in terms of bandwidth. Second, this thesis proposes RESTREAM, an extended version of SCALESTREAM that addresses the issues raised by unreliable networks composed of heterogeneous peers. Third, this thesis proposes EAGLEMACAW, a multiple-tree replication streaming protocol in which two distinct trees, named EAGLETREE and MACAWTREE, are built in a decentralized manner on top of an underlying mesh network. These two trees collaborate to serve consumers in an efficient and reliable manner: the EAGLETREE is in charge of improving efficiency, while the MACAWTREE guarantees reliability. Finally, this thesis provides TURBOSTREAM, a hybrid replication-based streaming protocol in which a tree overlay is built on top of a mesh overlay network. Both overlays cover all peers of the system and collaborate to improve efficiency and reduce latency when streaming media to consumers. This protocol is implemented and tested in a real networking environment using the PlanetLab Europe testbed, with peers distributed across different locations in Europe.
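SCALESTREAM's adaptiveness comes from adding or removing replicas based on constraints such as the bandwidth capacity of peers. The decision rule below is a deliberately simplified illustration of that idea, not the protocol itself; the threshold, headroom factor and names are assumptions.

```python
def adjust_replicas(consumer_demand_kbps, replica_capacities_kbps, headroom=1.2):
    """Toy add/remove rule for replica management under peer bandwidth limits.

    consumer_demand_kbps: total streaming rate currently requested by consumers.
    replica_capacities_kbps: upload capacities of the peers holding a replica.
    Returns 'add' when serving capacity gets tight, 'remove' when the weakest
    replica is no longer needed, and 'keep' otherwise.
    """
    supply = sum(replica_capacities_kbps)
    if supply < consumer_demand_kbps * headroom:
        return "add"
    weakest = min(replica_capacities_kbps, default=0)
    if len(replica_capacities_kbps) > 1 and supply - weakest >= consumer_demand_kbps * headroom:
        return "remove"
    return "keep"
```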
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates coincide with the true parameters of the models.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter demonstrates that it indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, due to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
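The Continuous ECF estimator matches the model's unconditional characteristic function to its empirical counterpart over a grid of arguments. The sketch below illustrates the principle on a one-dimensional toy model with a known normal characteristic function and a generic SciPy optimizer; the thesis instead uses the joint (bi-dimensional) unconditional characteristic function of the affine stochastic volatility jump-diffusion model, so everything here, including the weighting, is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_cf(u, x):
    """Empirical characteristic function of the sample x at grid points u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def ecf_objective(theta, x, u, weights):
    """Weighted squared distance between empirical and model characteristic
    functions; the toy model CF is that of a normal, exp(i*u*mu - u^2*sigma^2/2)."""
    mu, sigma = theta
    model_cf = np.exp(1j * u * mu - 0.5 * (u * sigma) ** 2)
    return np.sum(weights * np.abs(empirical_cf(u, x) - model_cf) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.2, size=5000)       # simulated "returns"
u = np.linspace(-10.0, 10.0, 81)           # grid approximating the integral
w = np.exp(-0.5 * u ** 2)                  # damping weight
fit = minimize(ecf_objective, x0=[0.0, 0.1], args=(x, u, w), method="Nelder-Mead")
print(fit.x)                                # roughly recovers (0.05, 0.2)
```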
Abstract:
We have explored the possibility of obtaining first-order permeability estimates for saturated alluvial sediments based on the poro-elastic interpretation of the P-wave velocity dispersion inferred from sonic logs. Modern sonic logging tools designed for environmental and engineering applications allow for P-wave velocity measurements at multiple emitter frequencies over a bandwidth covering 5 to 10 octaves. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt to sand range and typical emitter frequencies ranging from approximately 1 to 30 kHz, the observable velocity dispersion should be sufficiently pronounced to allow reliable first-order estimation of the permeability structure. The corresponding predictions have been tested on and verified for a borehole penetrating a typical surficial alluvial aquifer. In addition to multifrequency sonic logs, a comprehensive suite of nuclear and electrical logs, an S-wave log, a litholog, and a limited number of laboratory measurements of the permeability from retrieved core material were also available. This complementary information was found to be essential for parameterizing the poro-elastic inversion procedure and for assessing the uncertainty and internal consistency of the corresponding permeability estimates. Our results indicate that the permeability estimates thus obtained are largely consistent with those expected based on the corresponding granulometric characteristics, as well as with the available evidence from laboratory measurements. These findings are also consistent with evidence from ocean acoustics, which indicates that, over a frequency range of several orders of magnitude, the classical theory of poro-elasticity is generally capable of explaining the observed P-wave velocity dispersion in medium- to fine-grained seabed sediments.
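A first-order link between P-wave velocity dispersion and permeability comes from the Biot critical frequency, f_c = phi * eta / (2 * pi * rho_f * k): dispersion is concentrated around f_c, so observing where it falls within the 1 to 30 kHz logging band constrains the permeability k. The sketch below evaluates only this textbook relation; the study's actual inversion is a full poro-elastic procedure parameterized with the complementary logs, so the defaults here are assumptions for water-saturated sediments.

```python
import numpy as np

def biot_critical_frequency(permeability_m2, porosity,
                            fluid_viscosity_pa_s=1.0e-3, fluid_density_kg_m3=1000.0):
    """Biot critical frequency f_c = phi * eta / (2 * pi * rho_f * k), around
    which poro-elastic P-wave velocity dispersion is concentrated."""
    return porosity * fluid_viscosity_pa_s / (
        2.0 * np.pi * fluid_density_kg_m3 * permeability_m2)

def permeability_from_fc(fc_hz, porosity,
                         fluid_viscosity_pa_s=1.0e-3, fluid_density_kg_m3=1000.0):
    """Invert the same relation: first-order permeability from the frequency at
    which the observed dispersion is centered."""
    return porosity * fluid_viscosity_pa_s / (
        2.0 * np.pi * fluid_density_kg_m3 * fc_hz)

# A clean sand with k ~ 1e-11 m^2 (about 10 darcy) and 30% porosity disperses
# around a few kHz, i.e. inside a 1-30 kHz sonic logging band.
print(biot_critical_frequency(1e-11, 0.30))
```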
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentration (IRC) in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation both overall and by municipality. Overall, our model reproduces spatial IRC patterns obtained in earlier work. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically lower IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool for predicting and analyzing IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relationships between IRC measurements.
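Kernel estimation with one bandwidth per predictor variable, as described above, can be illustrated with a Nadaraya-Watson regressor in which each variable is scaled by its own bandwidth before a product Gaussian kernel is applied. The implementation below is a generic sketch, not the study's model, which also handles categorical predictors and probability estimation.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidths):
    """Nadaraya-Watson kernel regression with one bandwidth per predictor.

    x_train: (n, d) predictors; y_train: (n,) responses (e.g. IRC values);
    x_query: (m, d) points to predict; bandwidths: (d,) per-variable bandwidths,
    whose relative size reflects how strongly each variable localizes the estimate.
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    x_query = np.asarray(x_query, dtype=float)
    h = np.asarray(bandwidths, dtype=float)
    preds = np.empty(len(x_query))
    for i, q in enumerate(x_query):
        z = (x_train - q) / h                        # scale each variable by its bandwidth
        w = np.exp(-0.5 * np.sum(z ** 2, axis=1))    # product Gaussian kernel
        preds[i] = np.dot(w, y_train) / w.sum()
    return preds
```

Replacing y_train with an indicator such as (IRC > 300 Bq/m³) turns the same estimator into a local exceedance-probability estimate of the kind mapped above.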
Abstract:
Contemporary coronary magnetic resonance angiography techniques suffer from signal-to-noise ratio (SNR) constraints. We propose a method to enhance SNR in gradient echo coronary magnetic resonance angiography by using sensitivity encoding (SENSE). While the use of sensitivity encoding to improve SNR seems counterintuitive, it can be exploited by reducing the number of radiofrequency excitations during the acquisition window while lowering the signal readout bandwidth, thereby improving the radiofrequency receive to radiofrequency transmit duty cycle. Under certain conditions, this leads to improved SNR. The use of sensitivity encoding for improved SNR in three-dimensional coronary magnetic resonance angiography is investigated using numerical simulations, an in vitro study, and an in vivo study. A maximum SNR enhancement of 55% for coronary magnetic resonance angiography was found both in vitro and in vivo, which is consistent with the numerical simulations. This method is most suitable for spoiled gradient echo coronary magnetic resonance angiography in which high temporal and spatial resolution is required.
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population Receptive Field (pRF) model, developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory responsive regions (non-PAC).
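The pRF approach models each voxel's tuning as a Gaussian in log-frequency and fits its center (preferred frequency, giving the tonotopic map) and width (bandwidth) to the measured time course. The grid-search sketch below captures that idea but omits the hemodynamic response convolution and the refinements of the actual method; the names and fitting strategy are assumptions for illustration.

```python
import numpy as np

def prf_prediction(center_log_hz, sigma_log, stim_log_hz):
    """Predicted response of a voxel with a Gaussian pRF in log-frequency.

    stim_log_hz: log tone frequency presented at each time point (np.nan = silence).
    """
    resp = np.exp(-0.5 * ((stim_log_hz - center_log_hz) / sigma_log) ** 2)
    return np.nan_to_num(resp)                 # silence contributes zero response

def fit_prf(bold, stim_log_hz, centers, sigmas):
    """Grid-search fit of preferred frequency and tuning width for one voxel."""
    best, best_err = None, np.inf
    for c in centers:
        for s in sigmas:
            pred = prf_prediction(c, s, stim_log_hz)
            # least-squares amplitude scaling of the prediction to the data
            beta = np.dot(pred, bold) / max(np.dot(pred, pred), 1e-12)
            err = np.sum((bold - beta * pred) ** 2)
            if err < best_err:
                best, best_err = (c, s), err
    return best
```

In this simplified picture, narrower fitted sigmas inside PAC than in surrounding auditory-responsive cortex correspond to the bandwidth result reported above.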