5 results for CONGESTION
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. It began by examining the need for financial regulation and its related costs, then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them, and discussing their strengths and weaknesses. The possible regulatory structures were analyzed using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action were examined in Chapter Four using game-theory concepts. This chapter predicted how two regulators with overlapping supervisory mandates would behave in two different states of the world (one in which they stand to benefit from regulating and one in which they stand to lose). The insights derived from the games described in this chapter were then used to analyze the different supervisory models that exist in the world. The problem of information flow was discussed in Chapter Five using tools from institutional design. The idea is that the right kind of information must reach the decision maker in the shortest time possible in order to predict, mitigate, or stop a financial crisis. Network effects and congestion in the context of financial regulation were discussed in Chapter Six, which applied the general literature on network effects to assess whether consolidating financial regulatory standards on a global level might also yield other positive network effects. Returning to the main research question, this research concluded that the fragmented model is generally preferable to the consolidated model, as it allows for greater diversity and information flow; however, in cases in which close cooperation between two authorities is essential, the consolidated model should be used.
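As a rough illustration of the game-theoretic framing in Chapter Four, the Python sketch below enumerates the pure-strategy Nash equilibria of a 2x2 game between two regulators with overlapping mandates. The payoff numbers are hypothetical placeholders, not values from the thesis; under these assumed payoffs, both regulators act when acting earns credit and both shirk when acting is costly.

```python
# Illustrative sketch of the two-regulator game described in Chapter Four.
# All payoff numbers are hypothetical assumptions, not taken from the thesis.

from itertools import product

def nash_equilibria(payoffs):
    """Return pure-strategy Nash equilibria of a 2x2 game.

    payoffs[(a1, a2)] = (payoff to regulator 1, payoff to regulator 2),
    where each action is 'act' (regulate) or 'abstain'.
    """
    actions = ('act', 'abstain')
    equilibria = []
    for a1, a2 in product(actions, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(u1 >= payoffs[(b, a2)][0] for b in actions)
        best2 = all(u2 >= payoffs[(a1, b)][1] for b in actions)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

# State 1: regulators stand to benefit (credit for acting).
benefit_state = {
    ('act', 'act'): (2, 2),          # duplicated effort, shared credit
    ('act', 'abstain'): (3, 0),      # sole credit to the acting regulator
    ('abstain', 'act'): (0, 3),
    ('abstain', 'abstain'): (0, 0),
}

# State 2: regulators stand to lose (blame/cost for acting).
loss_state = {
    ('act', 'act'): (-1, -1),
    ('act', 'abstain'): (-2, 0),         # lone actor bears the full cost
    ('abstain', 'act'): (0, -2),
    ('abstain', 'abstain'): (-0.5, -0.5),  # diffuse blame for inaction
}

print(nash_equilibria(benefit_state))  # [('act', 'act')]: both rush to claim credit
print(nash_equilibria(loss_state))     # [('abstain', 'abstain')]: both shirk
```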
Abstract:
Background: Survival of patients with Acute Aortic Syndrome (AAS) may relate to the speed of diagnosis. Diagnostic delay is exacerbated by non-classical presentations such as myocardial ischemia or acute heart failure (AHF). However, little is known about the clinical implications and pathophysiological mechanisms of Troponin T elevation and AHF in AAS. Methods and Results: Data were collected from a prospective metropolitan AAS registry (398 patients diagnosed between 2000 and 2013). Troponin T values (either standard or high-sensitivity assay, HS) were available in 248 patients (60%) of the registry population; the overall frequency of troponin positivity was 28% (ranging from 16% to 54% using the standard or HS assay, respectively, p = 0.001). Troponin positivity was associated with a twofold increased risk of long in-hospital diagnostic time (OR 1.92, 95% CI 1.05-3.52, p = 0.03), but not with in-hospital mortality. The combination of positive troponin and ACS-like ECG abnormalities resulted in a significantly increased risk of inappropriate therapy due to a misdiagnosis of ACS (OR 2.48, 95% CI 1.12-5.54, p = 0.02). Patients with AHF were identified by the presence of dyspnea as a presentation symptom, radiological signs of pulmonary congestion, or cardiogenic shock. The overall frequency of AHF was 28% (32% in type A vs. 20% in type B AAS, p = 0.01). AHF was due to a variety of pathophysiological mechanisms, including cardiac tamponade (26%), aortic regurgitation (25%), myocardial ischemia (17%), and hypertensive crisis (10%). AHF was associated with increased surgical delay and with an increased risk of in-hospital death (adjusted OR 1.97, 95% CI 1.13-3.37, p = 0.01). Conclusions: Troponin positivity (particularly with the HS assay) was a frequent finding in AAS. Abnormal troponin values were strongly associated with ACS-like ECG findings, in-hospital diagnostic delay, and inappropriate therapy. AHF was associated with increased surgical delay and was an independent predictor of in-hospital mortality.
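For readers less familiar with the association statistics reported above, the sketch below shows how an odds ratio with a Woolf-type (log-normal) 95% confidence interval is computed from a 2x2 contingency table. The counts are invented for illustration only; they are not the registry data behind the reported ORs.

```python
# Minimal sketch of how odds ratios like those above (e.g. OR 1.92, 95% CI
# 1.05-3.52 for troponin positivity vs. long diagnostic time) are typically
# derived from a 2x2 table. The counts below are hypothetical.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-normal) 95% confidence interval.

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: troponin-positive vs. long in-hospital diagnostic time.
or_, lo, hi = odds_ratio_ci(a=30, b=40, c=60, d=118)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```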
Abstract:
Nowadays, the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other side in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is hence presented and analyzed in terms of the opportunities given by a fully synthesizable approach, following an implementation flow based on a standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnects, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is a performance overhead, the eFPGA analysis has targeted small area budgets. The generation of the configuration bitstream has been achieved through the implementation of a custom CAD flow environment, which has allowed functional verification and performance evaluation through an application-aware analysis.
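To see why a multi-stage topology is attractive as a programmable interconnect, the sketch below compares the switching-element count of a generic 3-stage Clos network (a classic MSSN that is congestion-free when the middle stage is wide enough) against a monolithic crossbar. The dimensions are illustrative assumptions and do not reproduce the thesis's actual eFPGA sizing.

```python
# Illustrative cost comparison behind a Multi-Stage Switching Network (MSSN):
# a 3-stage Clos network versus a monolithic crossbar. Parameters are generic,
# not the thesis's eFPGA interconnect dimensioning.

def crossbar_crosspoints(N):
    """Crosspoints of a full N x N crossbar."""
    return N * N

def clos_crosspoints(n, r, m):
    """Crosspoints of a symmetric 3-stage Clos network C(n, r, m).

    N = n * r inputs/outputs; r ingress (n x m) switches, m middle (r x r)
    switches, r egress (m x n) switches. Choosing m >= n makes the network
    rearrangeably non-blocking: any permutation of inputs to outputs can be
    routed without internal congestion.
    """
    return r * (n * m) + m * (r * r) + r * (m * n)

N, n = 1024, 32
r = N // n
m = n  # rearrangeably non-blocking (a congestion-free routing always exists)
print(crossbar_crosspoints(N))    # 1048576
print(clos_crosspoints(n, r, m))  # 98304: roughly 10x fewer switch elements
```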
Abstract:
The efficiency of airport airside operations is often compromised by unplanned disruptive events of different kinds, such as bad weather, strikes, or technical failures, which negatively affect the punctuality and regularity of operations, causing serious delays and unexpected congestion. These events may inflict significant impacts and economic losses on passengers, airlines, and airport operators, and their consequences may propagate through the air network to other airports. In order to identify strategies to cope with such events and minimize their impacts, it is crucial to understand how disruptive events affect airport performance. The research field concerned with the risk of severe air transport network disruptions and their impact on society revolves around the concepts of vulnerability and resilience. The main objective of this project is to provide a framework for evaluating the performance losses and consequences of unexpected disruptions affecting airport airside operations, supporting the development of a methodology for estimating vulnerability and resilience indicators for those operations. The proposed methodology comprises three phases. In the first phase, airside operations are modelled in both the baseline and disrupted scenarios. The model includes all main airside processes and takes into consideration the uncertainties and dynamics of the system. In the second phase, the model is implemented using the general-purpose simulation software AnyLogic. Vulnerability is evaluated by taking into consideration the costs related to flight delays, cancellations, and diversions; resilience is determined as a function of the loss of capacity over the entire period of disruption. In the third phase, a Bayesian Network is built in which the uncertain variables refer to airport characteristics and disruption type. The Bayesian Network expresses the conditional dependence among these variables and allows the impacts of disruptions on an airside system to be predicted, identifying the elements that influence system resilience the most.
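One common way to formalize a capacity-based resilience indicator of this kind is the ratio of delivered capacity to nominal capacity over the disruption horizon. The sketch below assumes that definition and a hypothetical capacity profile, since the abstract does not specify the exact functional form used in the thesis.

```python
# One possible formalization of the resilience indicator sketched in the
# abstract: delivered capacity divided by nominal capacity over the
# disruption period. The capacity profile below is hypothetical.

def resilience(capacity, nominal):
    """Area-under-the-curve resilience over the disruption horizon.

    capacity: delivered airside capacity per time step (e.g. movements/hour)
    nominal:  capacity the airport would deliver without the disruption
    Returns a value in [0, 1]; 1 means no performance loss.
    """
    return sum(capacity) / (nominal * len(capacity))

# Hypothetical disruption: capacity drops from 40 to 10 movements/hour,
# then gradually recovers over 8 hours.
profile = [40, 10, 10, 15, 20, 30, 35, 40]
print(f"resilience = {resilience(profile, nominal=40):.2f}")  # 0.62
```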
Abstract:
Massive Internet of Things is expected to play a crucial role in Beyond 5G (B5G) wireless communication systems, offering seamless connectivity among heterogeneous devices without human intervention. However, with the exponential proliferation of smart devices and IoT networks, relying solely on terrestrial networks may not fully meet the demanding IoT requirements in terms of bandwidth and connectivity, especially in areas where terrestrial infrastructure is not economically viable. To unleash the full potential of 5G and B5G networks and enable seamless connectivity everywhere, the 3GPP envisions the integration of Non-Terrestrial Networks (NTNs) into terrestrial ones starting from Release 17. However, this integration process requires modifications to the 5G standard to ensure reliable communications despite typical satellite channel impairments. In this framework, this thesis proposes techniques at the Physical and Medium Access Control layers that require minimal adaptations of the current NB-IoT standard for operation via NTN. First, the satellite channel impairments are evaluated and a detailed link budget analysis is provided. Then, analyses at the link and system levels are conducted. In the former case, a novel algorithm leveraging time-frequency analysis is proposed to detect orthogonal preambles and estimate the signals' arrival times. In addition, the effects of collisions on the detection probability and Bit Error Rate are investigated, and Non-Orthogonal Multiple Access approaches are proposed for the random access and data phases. The system-level analysis evaluates the performance of random access under congestion. Various access parameters are tested in different satellite scenarios, and the performance is measured in terms of access probability and the time required to complete the procedure. Finally, a heuristic algorithm is proposed to jointly design the access and data phases, determining the number of satellite passages, the Random Access Periodicity, and the number of uplink repetitions that maximize the system's spectral efficiency.
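To give a flavor of the link budget analysis mentioned above, the sketch below computes the free-space path loss and the carrier-to-noise-density ratio C/N0 = EIRP + G/T - L - k for a hypothetical LEO S-band uplink. All parameter values (slant range, carrier frequency, EIRP, G/T, losses) are illustrative placeholders, not the values used in the thesis.

```python
# Minimal link budget sketch for an NB-IoT uplink via a LEO satellite.
# All parameters are illustrative placeholders.

import math

BOLTZMANN_DBW = -228.6  # Boltzmann constant, dBW/K/Hz

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def cn0_dbhz(eirp_dbw, gt_dbk, fspl, other_losses_db):
    """Carrier-to-noise-density ratio: C/N0 = EIRP + G/T - L - k (all in dB)."""
    return eirp_dbw + gt_dbk - fspl - other_losses_db - BOLTZMANN_DBW

slant_range = 1_200e3  # m, LEO satellite at low elevation (hypothetical)
freq = 2e9             # Hz, S-band carrier (hypothetical)
loss = fspl_db(slant_range, freq)
cn0 = cn0_dbhz(eirp_dbw=2.0, gt_dbk=1.1, fspl=loss, other_losses_db=3.0)
print(f"FSPL = {loss:.1f} dB, C/N0 = {cn0:.1f} dB-Hz")  # ~160.0 dB, ~68.7 dB-Hz
```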