879 results for digital systems


Relevance:

30.00%

Publisher:

Abstract:

The goal of this paper was to survey the state of the art of Digital Libraries in Architecture and Urbanism at Higher Education Institutions (IES), presenting key concepts and showing the importance of Digital Libraries in the disclosure and transfer of information. Questions about digital information architecture, usability, digital preservation, and accessibility were addressed. The research was carried out on the websites of Brazilian universities, first to identify the institutions offering the Architecture and Urbanism course, with a focus on postgraduate education. After identifying these institutions, the research analyzed the content, storage, dissemination, and access to information of their digital libraries. It was found that digital libraries are increasingly becoming part of organizations and educational institutions focused on knowledge dissemination, releasing in digital form information that may be needed by the institution or the individual. The physical and computational restructuring of the Board of Studies and Research in Architecture and Urbanism (Câmara de Estudos e Pesquisa em Arquitetura e Urbanismo, CEPAU) of the Architecture and Urbanism Course of the Federal University of Rio Grande do Norte (UFRN) was monitored, showing the need to install a Digital Library to integrate the databases of PPGAU's research groups, which today remain independent, with no interface among themselves. Architecture and Urbanism was chosen as the research area because there is a gap and little documentation about digital libraries in this field.

Relevance:

30.00%

Publisher:

Abstract:

A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most of the existing systems are reactive: the users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated a few approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last 6 years. Since location is one of the most important elements of users’ context, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors in order to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to the users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
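To make the multi-dimensional collaborative recommendation idea above concrete, the sketch below factorizes a small synthetic (user, place category, time slot) tensor with a CP-style model trained by stochastic gradient descent and then ranks places for one user in one time slot. The tensor dimensions, SGD update rule, and hyperparameters are illustrative assumptions; this is not the Rover recommender itself.

```python
# Minimal sketch of multi-dimensional collaborative recommendation via
# CP-style tensor factorization over a (user, place, time) tensor.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (user x place-category x time-slot) tensor with missing entries.
n_users, n_places, n_times, rank = 20, 15, 8, 4
U_true = rng.random((n_users, rank))
P_true = rng.random((n_places, rank))
T_true = rng.random((n_times, rank))
X = np.einsum('ur,pr,tr->upt', U_true, P_true, T_true)
mask = rng.random(X.shape) < 0.3              # only 30% of entries observed

# Factor matrices to learn.
U = rng.normal(scale=0.1, size=(n_users, rank))
P = rng.normal(scale=0.1, size=(n_places, rank))
T = rng.normal(scale=0.1, size=(n_times, rank))

lr, reg = 0.05, 0.01
obs = np.argwhere(mask)
for epoch in range(30):
    rng.shuffle(obs)
    for u, p, t in obs:
        pred = np.dot(U[u] * P[p], T[t])      # CP model entry: sum_r U*P*T
        err = X[u, p, t] - pred
        # SGD updates for each factor, with L2 regularization.
        U[u] += lr * (err * P[p] * T[t] - reg * U[u])
        P[p] += lr * (err * U[u] * T[t] - reg * P[p])
        T[t] += lr * (err * U[u] * P[p] - reg * T[t])

# Recommend: score every place for user 0 in time slot 3 and rank them.
scores = np.einsum('r,pr,r->p', U[0], P, T[3])
print("top places for user 0 at time 3:", np.argsort(scores)[::-1][:5])
```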

Relevance:

30.00%

Publisher:

Abstract:

Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality in order to increase the complexity and cost of reverse engineering. Most of the existing circuit obfuscation methods are based on the insertion of additional logic (called “key gates”) or camouflaging existing gates in order to make it difficult for a malicious user to get the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all the input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present a proof-of-concept of a new functional or logic obfuscation technique that not only conceals, but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don't care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4x1 multiplexer which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with a different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm on ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (the average area overhead for the proposed layout obfuscation methods is 5.51% and the average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R. Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in 2008 Design, Automation and Test in Europe, 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, “Embedded reconfigurable logic for ASIC design obfuscation against supply chain attacks,” in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
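As a rough illustration of selecting camouflaging candidates by the intersection of output logic cones, the sketch below computes, for each gate of a toy netlist, the set of primary outputs in its fan-out cone and ranks gates by how many output cones they belong to. The netlist representation and scoring heuristic are assumptions for illustration, not the paper's exact algorithm or its don't-care analysis.

```python
# Hedged sketch: rank gates by how many primary-output logic cones they sit in.
from functools import lru_cache

# Toy netlist: gate -> list of fan-out gates / primary outputs.
fanout = {
    "g1": ["g3"], "g2": ["g3", "g4"], "g3": ["g5", "g6"],
    "g4": ["g6"], "g5": ["out1"], "g6": ["out1", "out2"],
}
primary_outputs = {"out1", "out2"}

@lru_cache(maxsize=None)
def output_cone(node):
    """Set of primary outputs reachable from this node (its output logic cone)."""
    if node in primary_outputs:
        return frozenset([node])
    cone = frozenset()
    for succ in fanout.get(node, []):
        cone |= output_cone(succ)
    return cone

# Gates shared by several output cones are preferred camouflaging candidates,
# since resolving them is intended to entangle the reverse engineering of
# multiple outputs at once.
scores = {g: len(output_cone(g)) for g in fanout}
candidates = sorted(scores, key=scores.get, reverse=True)
print(candidates[:3], {g: scores[g] for g in candidates[:3]})
```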

Relevance:

30.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, which provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly access the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
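The progressive, sampling-based analytics idea described above can be illustrated with a few lines of code: a query is evaluated over progressively larger samples of a shuffled table, emitting early approximate answers that converge to the exact result. This is only a conceptual sketch with synthetic data; it does not reproduce NOW!'s progress semantics, provenance tracking, or distributed execution.

```python
# Progressive answers to an aggregate query over growing samples of a table.
import random

random.seed(42)
# Synthetic "big" table: (customer_region, order_value) rows.
table = [(random.choice("ABCD"), random.uniform(5, 500)) for _ in range(100_000)]
random.shuffle(table)                    # random order => prefixes act as samples

def progressive_avg(rows, region, batches=10):
    """Yield (fraction_seen, running estimate) after each progressive sample."""
    total, count, step = 0.0, 0, len(rows) // batches
    for i in range(batches):
        for reg, val in rows[i * step:(i + 1) * step]:
            if reg == region:
                total += val
                count += 1
        yield (i + 1) / batches, (total / count if count else float("nan"))

for frac, est in progressive_avg(table, "B"):
    print(f"{frac:4.0%} of data seen -> estimated avg order value: {est:8.2f}")
```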

Relevance:

30.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have been accessible via the Internet and Intranet. Many employees are working from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and temporal relationships that make them unique by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can possibly indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences of user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
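A minimal sketch of the n-gram idea, under the assumption of a simple bigram model with add-alpha smoothing over per-user action sequences: train on a user's past sessions, score new sessions by average log-likelihood, and flag sessions that deviate strongly. The action names and alert threshold are hypothetical; Intruder Detector's actual features, models, and thresholds are not reproduced here.

```python
# Bigram model of user actions for continuous-authentication-style scoring.
import math
from collections import defaultdict

def train_bigram(sessions, alpha=1.0):
    counts, context, vocab = defaultdict(float), defaultdict(float), set()
    for seq in sessions:
        for prev, cur in zip(["<s>"] + seq, seq):
            counts[(prev, cur)] += 1
            context[prev] += 1
            vocab.update([prev, cur])
    V = len(vocab)
    def logprob(prev, cur):                       # add-alpha smoothing
        return math.log((counts[(prev, cur)] + alpha) / (context[prev] + alpha * V))
    return logprob

def session_score(logprob, seq):
    """Average per-action log-likelihood; lower means less typical for this user."""
    pairs = list(zip(["<s>"] + seq, seq))
    return sum(logprob(p, c) for p, c in pairs) / len(pairs)

train = [["login", "view_report", "export", "logout"],
         ["login", "view_report", "view_report", "logout"]]
model = train_bigram(train)

normal = ["login", "view_report", "export", "logout"]
odd    = ["login", "delete_all", "download_db", "logout"]
for s in (normal, odd):
    score = session_score(model, s)
    flag = "ALERT" if score < -1.3 else "ok"      # illustrative threshold
    print(s, round(score, 2), flag)
```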

Relevance:

30.00%

Publisher:

Abstract:

In the last two decades, experimental progress in controlling cold atoms and ions has made it possible to manipulate fragile quantum systems with an unprecedented degree of precision. This has been made possible by the ability to isolate small ensembles of atoms and ions from noisy environments, creating truly closed quantum systems which decouple from dissipative channels. However, in recent years, several proposals have considered the possibility of harnessing dissipation in open systems, not only to cool degenerate gases to currently unattainable temperatures, but also to engineer a variety of interesting many-body states. This thesis will describe progress made towards building a degenerate gas apparatus that will soon be capable of realizing these proposals. An ultracold gas of ytterbium atoms, trapped by a species-selective lattice, will be immersed into a Bose-Einstein condensate (BEC) of rubidium atoms which will act as a bath. Here we describe the challenges encountered in making a degenerate mixture of rubidium and ytterbium atoms and present two experiments performed on the path to creating a controllable open quantum system. The first experiment describes the measurement of a tune-out wavelength where the light shift of 87Rb vanishes. This wavelength was used to create a species-selective trap for ytterbium atoms. Furthermore, the measurement of this wavelength allowed us to extract the dipole matrix element of the 5s → 6p transition in 87Rb with an extraordinary degree of precision. Our method to extract matrix elements has found use in atomic clocks, where precise knowledge of transition strengths is necessary to account for minute blackbody radiation shifts. The second experiment presents the first realization of a degenerate Bose-Fermi mixture of rubidium and ytterbium atoms. Using a three-color optical dipole trap (ODT), we were able to create a highly tunable, species-selective potential for rubidium and ytterbium atoms, which allowed us to use 87Rb to sympathetically cool 171Yb to degeneracy with minimal loss. This mixture is the first milestone toward creating the lattice-bath system and will soon be used to implement novel cooling schemes and explore the rich physics of dissipation.
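For readers unfamiliar with tune-out wavelengths, the standard relation behind such a measurement is sketched below (scalar response only; the thesis's full treatment of vector and hyperfine contributions, and its choice of units, may differ):

```latex
% Illustrative only: dipole potential, scalar ac polarizability, tune-out condition.
\begin{align}
  U_{\mathrm{dip}}(\omega) &= -\tfrac{1}{2}\,\alpha(\omega)\,\langle E^{2}\rangle,
  &
  \alpha(\omega) &= \sum_{k}
      \frac{2\,\omega_{k}\,\lvert\langle k | d | g\rangle\rvert^{2}}
           {\hbar\left(\omega_{k}^{2}-\omega^{2}\right)},
  &
  \alpha(\omega_{\mathrm{to}}) &= 0 .
\end{align}
```

A tune-out wavelength is one at which the contributions of different transitions cancel, so its measured position is sharply sensitive to the ratios of the underlying matrix elements; this sensitivity is what allows an element such as 5s → 6p to be constrained with high precision.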

Relevance:

30.00%

Publisher:

Abstract:

Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. Motion capture of a free-flying insect is considered by using three synchronized high-speed cameras. A solid finite element representation is used as a reference body and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. An objective function is formulated, and various shape difference definitions are considered. The proposed methodology is first studied for a synthetic case of a flexible cantilever structure undergoing large deformations, and then applied to a Manduca sexta (hawkmoth) in free flight. The three-dimensional motions of this flapping system are reconstructed from image data collected using three cameras. The complete deformation geometry of this system is analyzed. Finally, a computational investigation is carried out to understand the flow physics and aerodynamic performance by prescribing the body and wing motions in a fluid-body code. This thesis work contains one of the first sets of such motion visualization and deformation analyses carried out for a hawkmoth in free flight. The tools and procedures used in this work are widely applicable to the studies of other flying animals with flexible wings as well as synthetic systems with flexible body elements.
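A generic form of the kind of objective function described above, written here only as an illustrative sketch (the thesis's actual shape-difference measures and any regularization may differ), is:

```latex
% Illustrative reconstruction objective for the displacement field at time t.
\begin{equation}
  \mathbf{u}_{t}^{*} \;=\; \arg\min_{\mathbf{u}} \;
  \sum_{c=1}^{3} D\!\left(\Pi_{c}\!\left(\mathbf{X}+\mathbf{u}\right),\, S_{c,t}\right)
  \;+\; \lambda\, R(\mathbf{u}),
\end{equation}
```

where X denotes the reference finite element geometry, u the nodal displacement field at time t, Π_c the projection into camera c, S_{c,t} the corresponding image observation, D one of the candidate shape-difference measures, and R an optional smoothness regularizer.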

Relevance:

30.00%

Publisher:

Abstract:

The study of quantum degenerate gases has many applications in topics such as condensed matter dynamics, precision measurements and quantum phase transitions. We built an apparatus to create 87Rb Bose-Einstein condensates (BECs) and generated, via optical and magnetic interactions, novel quantum systems in which we studied the contained phase transitions. For our first experiment we quenched multi-spin-component BECs from a miscible to a dynamically unstable immiscible state. The transition rapidly amplifies any spin fluctuations through a coherent growth process, driving the formation of numerous spin-polarized domains. At much longer times these domains coarsen as the system approaches equilibrium. For our second experiment we explored the magnetic phases present in a spin-1 spin-orbit coupled BEC and the quantum phase transitions they contain. We observed ferromagnetic and unpolarized phases, which are stabilized by the spin-orbit coupling’s explicit locking between spin and motion. These two phases are separated by a critical curve containing both first-order and second-order transitions joined at a critical point. The narrow first-order transition gives rise to long-lived metastable states. For our third experiment we prepared independent BECs in a double-well potential, with an artificial magnetic field between the BECs. We transitioned to a single BEC by lowering the barrier while expanding the region of artificial field to cover the resulting single BEC. We compared the vortex distributions nucleated via conventional dynamics to those produced by our procedure, showing that our dynamical process populates vortices much more rapidly and in larger numbers than conventional nucleation.

Relevance:

30.00%

Publisher:

Abstract:

With the continued miniaturization and increasing performance of electronic devices, new technical challenges have arisen. One such issue is delamination occurring at critical interfaces inside the device. This major reliability issue can occur during the manufacturing process or during normal use of the device. Proper evaluation of the adhesion strength of critical interfaces early in the product development cycle can help reduce reliability issues and the time-to-market of the product. However, conventional adhesion strength testing is inherently limited in the face of package miniaturization, which brings about further technical challenges to quantify design integrity and reliability. Although there are many different interfaces in today's advanced electronic packages, they can be generalized into two main categories: 1) rigid-to-rigid connections with a thin flexible polymeric layer in between, or 2) a thin film membrane on a rigid structure. Knowing that every technique has its own advantages and disadvantages, multiple testing methods must be enhanced and developed to be able to accommodate all the interfaces encountered for emerging electronic packaging technologies. For evaluating the adhesion strength of high-adhesion-strength interfaces in thin multilayer structures, a novel adhesion test configuration called the “single cantilever adhesion test” (SCAT) is proposed and implemented for an epoxy molding compound (EMC) and photo solder resist (PSR) interface. The test method is then shown to be capable of comparing and selecting the stronger of two potential EMC/PSR material sets. Additionally, a theoretical approach for establishing the applicable testing domain for a four-point bending test method is presented. For evaluating polymeric films on rigid substrates, major testing challenges are encountered in reducing testing scatter and in factoring in the potentially degrading effect of environmental conditioning on the material properties of the film. An advanced blister test with a predefined area was developed that incorporates an elasto-plastic analytical solution, and was implemented for a conformal coating used to prevent tin whisker growth. The advanced blister test with predefined area was then extended by employing a numerical method for evaluating the adhesion strength when the polymer film’s properties are unknown.

Relevance:

30.00%

Publisher:

Abstract:

Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of various levels of granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question “How does collection-level metadata mediate scholarly subject access to aggregated digital collections?” This goal was achieved using three research methods:

• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;

• transaction log analysis of user interactions with Opening History; and

• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.

It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) describing a collection’s subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties/characteristics encoded in the free-text Description field; types and genres of objects in a digital collection, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yields a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, while subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches and fewer individual person, event and class-of-persons searches were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would not retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than 4 times greater than item page views. Scholars who were observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end-user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for the scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.

Relevance:

30.00%

Publisher:

Abstract:

We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user using a decode-and-forward mechanism, transparent to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoding matrix indicators (PMIs) are known. However, for the closed loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, leading to a need for effective methods to determine the PMI. In this thesis, we consider 2x2 MIMO and 4x4 MIMO downlink channels corresponding to CLSM and formulate two techniques to estimate the PMI at the relay using a hypothesis testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNR and for different channel quality indicators (CQIs). We compare them to the case when the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we also identify scenarios in which using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the primary user equipment (PUE) and improves its BLER performance. We therefore conclude that the proposed algorithms as well as the cooperative relaying mechanism at the CMR can be gainfully employed in a variety of real-life scenarios in LTE networks.
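A stripped-down sketch of the hypothesis testing idea follows, assuming a toy single-layer candidate set, a flat-fading 2x2 channel, and a residual-energy decision metric; none of these are claimed to match the thesis's detectors or the LTE codebook exactly.

```python
# Blind PMI detection by hypothesis testing over a small candidate set.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative single-layer candidates for 2 TX antennas (unit-norm vectors).
candidates = [np.array([1, 1]) / np.sqrt(2),
              np.array([1, -1]) / np.sqrt(2),
              np.array([1, 1j]) / np.sqrt(2),
              np.array([1, -1j]) / np.sqrt(2)]

# Simulate: 2x2 channel, QPSK symbols precoded with the "true" PMI, plus noise.
true_pmi = 2
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
qpsk = (rng.choice([-1, 1], size=200) + 1j * rng.choice([-1, 1], size=200)) / np.sqrt(2)
snr_lin = 10 ** (10 / 10)                     # 10 dB
noise = (rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))) / np.sqrt(2 * snr_lin)
Y = H @ np.outer(candidates[true_pmi], qpsk) + noise

def hypothesis_metric(Y, H, w):
    """Residual energy after matched-filtering Y against the effective channel H @ w."""
    h_eff = H @ w                              # effective SIMO channel under this PMI
    s_hat = (h_eff.conj() @ Y) / np.vdot(h_eff, h_eff)   # per-symbol LS estimate
    return np.linalg.norm(Y - np.outer(h_eff, s_hat)) ** 2

metrics = [hypothesis_metric(Y, H, w) for w in candidates]
print("metrics:", np.round(metrics, 1), "-> detected PMI:", int(np.argmin(metrics)))
```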

Relevance:

30.00%

Publisher:

Abstract:

We present a simulation model for capacity analysis in mobile systems that combines MATLAB with a geographic information system (GIS) based tool used for coverage calculations and frequency assignment. The model was developed initially for “narrowband” CDMA and TDMA, but was modified for WCDMA. We also show some results for a specific case in “narrowband” CDMA.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation explores three aspects of the economics and policy issues surrounding retail payments (low-value frequent payments): the microeconomic aspect, by measuring costs associated with retail payment instruments; the macroeconomic aspect, by quantifying the impact of the use of electronic rather than paper-based payment instruments on consumption and GDP; and the policy aspect, by identifying barriers that keep countries stuck with outdated payment systems, and recommending policy interventions to move forward with payments modernization. Payment system modernization has become a prominent part of the financial sector reform agenda in many advanced and developing countries. Greater use of electronic payments rather than cash and other paper-based instruments would have important economic and social benefits, including lower costs and thereby increased economic efficiency and higher incomes, while broadening access to the financial system, notably for people with moderate and low incomes. The dissertation starts with a general introduction on retail payments. Chapter 1 develops a theoretical model for measuring payments costs, and applies the model to Guyana, an emerging market in the midst of the transition from paper to electronic payments. Using primary survey data from Guyanese consumers, the results of the analysis indicate that annual costs related to the use of cash by consumers reach 2.5 percent of the country’s GDP. Switching to electronic payment instruments would provide savings amounting to 1 percent of GDP per year. Chapter 2 broadens the analysis to calculate the macroeconomic impacts of a move to electronic payments. Using a unique panel dataset of 76 countries across the 17-year span from 1998 to 2014 and a pooled OLS model with country fixed effects, Chapter 2 finds that, on average, the use of debit and credit cards contributes USD 16.2 billion to annual global consumption and USD 160 billion to overall annual global GDP. Chapter 3 provides an in-depth assessment of the Albanian payment cards and remittances market and recommends a set of incentives and regulations (both carrots and sticks) that would allow the country to modernize its payment system. Finally, the conclusion summarizes the lessons of the dissertation’s research and brings forward issues to be explored by future research in the retail payments area.
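For concreteness, the sketch below runs a country fixed-effects regression of the kind mentioned above on synthetic panel data, using the within (demeaning) transformation and plain OLS. All variable names, coefficients, and data are hypothetical; the dissertation's actual specification, controls, and dataset are not reproduced here.

```python
# Country fixed-effects panel regression via the within transformation.
import numpy as np

rng = np.random.default_rng(11)
n_countries, n_years = 76, 17
country = np.repeat(np.arange(n_countries), n_years)

# Synthetic panel: log consumption depends on card usage plus a country effect.
card_share = rng.uniform(0, 1, size=n_countries * n_years)   # card payments / total
alpha = rng.normal(size=n_countries)[country]                 # country fixed effects
log_consumption = 0.15 * card_share + alpha + rng.normal(scale=0.05, size=card_share.size)

def demean_by(group, v):
    """Subtract each group's mean (the within transformation)."""
    means = np.zeros(group.max() + 1)
    np.add.at(means, group, v)
    counts = np.bincount(group)
    return v - (means / counts)[group]

y_w = demean_by(country, log_consumption)
x_w = demean_by(country, card_share)
beta = (x_w @ y_w) / (x_w @ x_w)          # OLS slope on demeaned data (~0.15 here)
print(f"estimated effect of card usage on log consumption: {beta:.3f}")
```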

Relevance:

30.00%

Publisher:

Abstract:

The occurrence frequencies of failure events serve as critical indexes representing the safety status of dam-reservoir systems. Although overtopping is the most common failure mode with significant consequences, this type of event, in most cases, has a small probability. Estimating such rare-event risks for dam-reservoir systems with crude Monte Carlo (CMC) simulation techniques requires a prohibitively large number of trials, demanding significant computational resources to reach satisfactory estimation results; otherwise, the estimates would not be accurate enough. In order to reduce the computational expense and improve the risk estimation efficiency, an importance sampling (IS) based simulation approach is proposed in this dissertation to address the overtopping risks of dam-reservoir systems. Deliverables of this study mainly include the following five aspects: 1) the reservoir inflow hydrograph model; 2) the dam-reservoir system operation model; 3) the CMC simulation framework; 4) the IS-based Monte Carlo (ISMC) simulation framework; and 5) the comparison of overtopping risk estimates from both CMC and ISMC simulation. In a broader sense, this study meets the following three expectations: 1) to address the natural stochastic characteristics of the dam-reservoir system, such as the reservoir inflow rate; 2) to build up the fundamental CMC and ISMC simulation frameworks of the dam-reservoir system in order to estimate the overtopping risks; and 3) to compare the simulation results and the computational performance in order to demonstrate the advantages of ISMC simulation. The estimated overtopping probabilities could be used to guide future dam safety investigations and studies, and to supplement conventional analyses in decision making on dam-reservoir system improvements. At the same time, the proposed ISMC simulation methodology is reasonably robust and is shown to improve the overtopping risk estimation. The more accurate estimates, smaller variance, and reduced CPU time expand the application of the Monte Carlo (MC) technique to evaluating rare-event risks for infrastructure.
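The contrast between crude Monte Carlo and importance sampling for rare events can be illustrated with a toy limit state, here a Gaussian surrogate for "standardized peak pool level exceeds the dam crest" standing in for the dissertation's full inflow and routing models:

```python
# CMC vs. importance sampling for a rare-event probability on a toy limit state.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
crest = 4.0                                       # failure when peak level > crest
p_exact = 0.5 * (1 - erf(crest / sqrt(2)))        # reference value, ~3.17e-5
n = 200_000

# Crude Monte Carlo: sample from the nominal N(0, 1) peak-level distribution.
x_cmc = rng.normal(size=n)
hits = x_cmc > crest
p_cmc = hits.mean()

# Importance sampling: shift the sampling density toward the failure region,
# N(crest, 1), and reweight each sample by the likelihood ratio f(x)/q(x).
x_is = rng.normal(loc=crest, size=n)
w = np.exp(-0.5 * x_is**2) / np.exp(-0.5 * (x_is - crest) ** 2)
est = np.where(x_is > crest, w, 0.0)
p_is = est.mean()

print(f"exact: {p_exact:.3e}")
print(f"CMC  : {p_cmc:.3e}, relative std ~ {hits.std() / sqrt(n) / p_exact:.2f}")
print(f"IS   : {p_is:.3e}, relative std ~ {est.std() / sqrt(n) / p_exact:.2f}")
```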

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence between an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS and implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
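A heavily simplified behavioral sketch of the statistical-independence principle behind such correlation-based calibration is given below: a zero-mean pseudo-random sequence, independent of the input, is injected through an element whose true analog weight is unknown, and averaging the correlation between the PN sequence and the digitally corrected output recovers the weight error. The input signal, dither amplitude, and single-weight model are illustrative assumptions and do not represent the prototype's bit-wise algorithm or circuits.

```python
# Correlation-based estimation of an unknown injection weight via PN dither.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x = 0.8 * np.sin(2 * np.pi * 0.01 * np.arange(n))   # arbitrary input, independent of PN
pn = rng.choice([-1.0, 1.0], size=n)                 # zero-mean PN dither sequence
delta = 0.3                                          # injected dither amplitude

w_true = 1.037      # actual (mismatched) analog weight of the injection element
w_est = 1.000       # initial digital estimate of that weight

analog = x + w_true * delta * pn          # what the converter digitizes (ideal quantizer assumed)
digital = analog - w_est * delta * pn     # PN removed using the current estimate

# Because pn is independent of x and zero-mean, E[digital * pn] = (w_true - w_est) * delta,
# so the averaged correlation directly measures the residual weight error.
w_est += np.mean(digital * pn) / delta
print(f"true weight {w_true:.4f}, estimated weight {w_est:.4f}")
```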