6 results for trade-offs
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The recent widespread diffusion of radio-frequency identification (RFID) applications operating in the UHF band has been driven by the demand for greater interrogation ranges and for larger and faster data exchange. UHF-RFID systems, which exploit a physical interaction based on electromagnetic propagation, introduce many problems that had not been fully explored for previous generations of RFID systems (e.g. HF). Reliable tools for modeling and evaluating the radio communication between Reader and Tag within an RFID radio link are therefore needed. The first part of the thesis discusses the impact of the real environment on system performance. In particular, an analytical closed-form formulation for the field back-scattered by the Tag antenna and a formulation for the lower bound of the BER achievable at the Reader side will be presented, considering different possible electromagnetic impairments. Building on these formulations, on the analysis of the RFID link operating in near-field conditions, and on several electromagnetic/system-level co-simulations, an in-depth study of the dimensioning parameters and the actual performance of the systems will be discussed and analyzed, showing some relevant properties and trade-offs in transponder and reader design. Moreover, a new low-cost approach to extending the read range of passive UHF RFID systems will be discussed. To verify the reliability of the analysis approaches and of the innovative proposals, several reference transponder antennas have been designed and an extensive measurement campaign has been carried out, with satisfactory results. Finally, some commercial ad-hoc transponders for industrial applications have been designed in cooperation with Datalogic s.p.a.; guidelines and results will be briefly presented.
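The abstract does not reproduce the closed-form link formulations; as a rough illustration of the physics they build on, the sketch below estimates the forward-link power delivered to the Tag and the resulting free-space read range with the standard Friis equation. All figures (EIRP, tag gain, chip sensitivity, carrier frequency) are illustrative assumptions, not values from the thesis.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def tag_power(p_eirp_w: float, g_tag: float, freq_hz: float, d_m: float) -> float:
    """Forward-link power collected by the Tag antenna (Friis, free space) [W]."""
    lam = C / freq_hz
    return p_eirp_w * g_tag * (lam / (4 * math.pi * d_m)) ** 2

def read_range(p_eirp_w: float, g_tag: float, freq_hz: float, p_tag_min_w: float) -> float:
    """Largest distance at which the Tag still receives its turn-on power [m]."""
    lam = C / freq_hz
    return (lam / (4 * math.pi)) * math.sqrt(p_eirp_w * g_tag / p_tag_min_w)

# Illustrative figures: 3.28 W EIRP (the EU 2 W ERP limit), 2 dBi tag gain,
# -18 dBm chip sensitivity, 866 MHz carrier.
g_tag = 10 ** (2 / 10)
p_min_w = 10 ** (-18 / 10) * 1e-3  # dBm -> W
print(f"free-space read range ~ {read_range(3.28, g_tag, 866e6, p_min_w):.1f} m")
```

In practice multipath, detuning, and polarization mismatch reduce this ideal figure, which is why the thesis studies the impact of the real environment on performance.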
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to match ever-increasing requirements. This evolution pushes up the computational power demanded of digital signal processing systems, as well as the energy required to accomplish the computations, given the increasing mobility of such applications. Current approaches to matching these requirements rely on application-specific signal processors. Such devices exploit powerful accelerators that are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility that affects non-recurrent engineering costs, time to market, and market volumes. The state of the art mainly proposes two solutions to overcome these issues while still delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially in the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging the two computational paradigms so as to address all of the constraints introduced above. This thesis explores the design and application spectrum of reconfigurable computing, exploited as application-specific acceleration for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency, and manufacturing costs.
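As a toy illustration of the kind of design-space characterization described above, the sketch below ranks hypothetical implementation options by energy-delay product against their relative NRE cost. Every number and design-point name is invented for illustration; the thesis derives its characterization from actual implementations.

```python
from dataclasses import dataclass

@dataclass
class DesignPoint:
    name: str
    speedup: float   # vs. a software-only multi-core baseline (hypothetical)
    power_w: float   # average power while running the kernel (hypothetical)
    nre: float       # relative non-recurrent engineering cost (hypothetical)

def edp(dp: DesignPoint, t_baseline_s: float = 1.0) -> float:
    """Energy-delay product of one kernel execution [J*s]."""
    t = t_baseline_s / dp.speedup
    return (dp.power_w * t) * t

# Purely illustrative design points, not measurements from the thesis.
points = [
    DesignPoint("multi-core software",  speedup=1.0,  power_w=0.50, nre=1.0),
    DesignPoint("reconfigurable engine", speedup=10.0, power_w=0.80, nre=3.0),
    DesignPoint("mask-programmable",     speedup=20.0, power_w=0.60, nre=6.0),
]
for dp in sorted(points, key=edp):
    print(f"{dp.name:22s} EDP={edp(dp):.4g} J*s  NRE={dp.nre:.1f}x")
```

The ranking makes the trade-off concrete: the more specialized options win on energy-delay product but pay for it in NRE cost and lost post-fabrication flexibility.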
Abstract:
The wide diffusion of cheap, small, portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely fashion, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers, and we present Quasit, its prototype implementation: a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
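The abstract does not detail LAAR's mechanism; the sketch below only illustrates the general idea of partial fault tolerance it exploits: use application-level knowledge of which flows tolerate loss, and spend the replication budget on the strict ones first. The policy, names, and numbers are all hypothetical, not the actual LAAR algorithm.

```python
from dataclasses import dataclass

@dataclass
class StreamOperator:
    name: str
    loss_tolerant: bool  # application semantics allow occasional tuple loss
    load: float          # CPU share consumed by one extra active replica

def plan_partial_replication(operators: list[StreamOperator],
                             capacity: float) -> dict[str, str]:
    """Greedy partial fault-tolerance plan (illustrative, not LAAR itself)."""
    plan, used = {}, 0.0
    # Operators with strict quality requirements are always replicated.
    for op in (o for o in operators if not o.loss_tolerant):
        plan[op.name] = "replicated"
        used += op.load
    # Loss-tolerant operators get a replica only while budget remains.
    for op in sorted((o for o in operators if o.loss_tolerant),
                     key=lambda o: o.load):
        if used + op.load <= capacity:
            plan[op.name] = "replicated"
            used += op.load
        else:
            plan[op.name] = "unprotected"
    return plan

ops = [StreamOperator("alarm-detector", False, 0.3),
       StreamOperator("heatmap", True, 0.4),
       StreamOperator("stats", True, 0.2)]
print(plan_partial_replication(ops, capacity=0.6))
```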
Abstract:
The dissertation contains five parts: an introduction, three major chapters, and a short conclusion. The first chapter starts from a survey and discussion of the literature on corporate law and financial development. The methods commonly used in these cross-sectional analyses are biased, as legal origins are no longer valid instruments; model uncertainty therefore becomes a salient problem. The Bayesian Model Averaging algorithm is applied to test the robustness of the empirical results in Djankov et al. (2008). The analysis finds that their constructed legal index is not robustly correlated with most of the stock market outcome variables. The second chapter looks into the effects of minority shareholder protection in the corporate governance regime on entrepreneurs' ex ante incentives to undertake an IPO. Most of the current literature focuses on the benefits of minority shareholder protection for valuation, while overlooking its private costs in terms of the entrepreneur's control. As a result, when minority shareholder protection improves, the entrepreneur trades off the costs of monitoring against the benefits of cheap sources of finance. The theoretical predictions are tested empirically using panel data and the GMM-sys estimator. The third chapter investigates corporate law and corporate governance reform in China. Chinese corporate law regards shareholder control as the means to the end of pursuing the interests of stakeholders, which is inefficient. The chapter combines recent developments in theories of the firm, namely the team production theory and the property rights theory, to address this problem. The enlightened shareholder value approach, which emphasizes the long-term valuation of the firm, should be adopted as the objective of listed firms. In addition, a move from the mandatory division of power between the shareholder meeting and the board meeting to a default regime is proposed.
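For readers unfamiliar with the robustness test used in the first chapter, the sketch below shows a standard BIC-based approximation to Bayesian Model Averaging over all OLS submodels, reporting each regressor's posterior inclusion probability (PIP); a regressor whose PIP stays low, as the chapter finds for the legal index, is not robustly correlated with the outcome. The data and variable names are placeholders, not the chapter's dataset or exact estimator.

```python
import itertools
import numpy as np

def bma_pips(y: np.ndarray, X: np.ndarray, names: list[str]) -> dict[str, float]:
    """Posterior inclusion probabilities via the BIC approximation to BMA."""
    n, k = X.shape
    subsets, bics = [], []
    for r in range(k + 1):
        for s in itertools.combinations(range(k), r):
            # OLS on the intercept plus the chosen subset of regressors.
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
            subsets.append(set(s))
    # Model weights proportional to exp(-BIC/2); PIP sums weights of
    # every model containing the regressor.
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    w /= w.sum()
    return {names[j]: float(sum(wi for wi, s in zip(w, subsets) if j in s))
            for j in range(k)}

# Toy data: 'legal_index' has no true effect, 'gdp' does.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 0.8 * X[:, 1] + rng.normal(size=200)
print(bma_pips(y, X, ["legal_index", "gdp"]))
```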
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated; this extends the aggregated-versus-disaggregated dichotomy to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. Allowing only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve the master problem. This trade-off is explored on several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is then proposed, based on the well-known Frank-Wolfe algorithm. To provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
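The route assignment step can be illustrated on a textbook example: below, the Frank-Wolfe algorithm equilibrates a toy network with two parallel routes, a fixed-cost one and a congestible one. The network, cost functions, and step rule are standard textbook choices, not the thesis' instances or its initialization via the decomposed min-cost flow solution.

```python
import numpy as np

def frank_wolfe_toy(demand: float = 2.0, iters: int = 200) -> np.ndarray:
    """Frank-Wolfe on two parallel routes with costs t1(x1)=1 and t2(x2)=x2.

    Each iteration solves the linearized subproblem, i.e. an all-or-nothing
    assignment of the demand onto the currently cheapest route, then moves
    with the classic 2/(k+2) step. The equilibrium here is x = (1, 1),
    where both routes cost 1.
    """
    x = np.array([demand, 0.0])            # feasible start: all on route 1
    for k in range(iters):
        costs = np.array([1.0, x[1]])      # current route travel times
        target = np.zeros(2)
        target[np.argmin(costs)] = demand  # all-or-nothing assignment
        x += (2.0 / (k + 2.0)) * (target - x)
    return x

print(frank_wolfe_toy())  # ~ [1., 1.]
```

On real networks the all-or-nothing step is a shortest-path computation per origin-destination pair, which is why a good feasible starting flow, such as the one the thesis obtains from the decomposed min-cost flow problem, matters.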
Abstract:
The energy harvesting research field has grown considerably in the last decade, owing to increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from energy sources, together with power management (PM) circuits. This thesis investigates the design trade-offs involved in minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, the Residual Charge Inversion technique is exploited to extract energy from PZs with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources up to 2.5 V. An ultra-low-power MPPT circuit has been designed to track variations in source power. Furthermore, a capacitive boost circuit has been included, allowing the converter to start up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZs (with SECE) and 4 to DC energy sources with MPPT. The inductor is shared among the channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, prevents simultaneous access to the buck-boost core through a dynamic schedule based on source priority.
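The abstract does not say which tracking algorithm the MPPT circuit implements; as a generic illustration of maximum power point tracking, the sketch below runs perturb-and-observe hill climbing on a toy DC source with internal resistance, whose maximum power sits at half the open-circuit voltage. All names and constants are hypothetical.

```python
def perturb_and_observe(power_at, v0: float, dv: float = 0.005,
                        steps: int = 300) -> float:
    """Generic perturb-and-observe MPPT: keep perturbing the operating
    voltage in the direction that last increased the extracted power."""
    v, p_prev, direction = v0, power_at(v0), +1
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:             # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy DC harvester: 0.5 V open-circuit voltage, 1 kOhm source resistance.
V_OC, R_S = 0.5, 1e3
power = lambda v: max(v * (V_OC - v) / R_S, 0.0)  # max power at V_OC / 2
print(f"tracked operating point ~ {perturb_and_observe(power, v0=0.1):.3f} V")
```

The tracker settles in a small oscillation around 0.25 V, the maximum power point of this toy source; the intrinsic power spent running such a loop is exactly the overhead the thesis seeks to minimize.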