973 results for Default
Abstract:
In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency counting algorithms for speeding up this part. These, referred to as non-overlapping and non-interleaved frequency counts, are based on directly counting suitable subsets of the occurrences of an episode. Hence they are different from the frequency counts of Mannila et al. [1], who count the number of windows in which the episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations as well.
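To make occurrence-based counting concrete, here is a minimal Python sketch of counting non-overlapped occurrences of a single serial episode. The paper's algorithms handle whole candidate sets and also the non-interleaved count, so this illustrates the counting principle rather than the authors' implementation.

```python
# Count non-overlapped occurrences of a serial episode A -> B -> ... in an
# event sequence: when the episode completes, the count is incremented and the
# state is reset, so the counted occurrences never share events.

def count_non_overlapped(episode, events):
    """episode: tuple of event types, e.g. ('A', 'B', 'C')
    events:  iterable of (event_type, time) pairs in time order."""
    state = 0          # index of the next episode node we are waiting for
    count = 0
    for etype, _t in events:
        if etype == episode[state]:
            state += 1
            if state == len(episode):   # one full occurrence completed
                count += 1
                state = 0               # reset: the next occurrence cannot reuse events
    return count

# Example: two non-overlapped occurrences of A -> B in this sequence.
seq = [('A', 1), ('A', 2), ('B', 3), ('C', 4), ('A', 5), ('B', 6)]
print(count_non_overlapped(('A', 'B'), seq))   # -> 2
```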
Abstract:
Discovering patterns in temporal data is an important task in Data Mining. A successful method for this was proposed by Mannila et al. [1] in 1997. In their framework, mining for temporal patterns in a database of sequences of events is done by discovering the so-called frequent episodes. These episodes characterize interesting collections of events occurring relatively close to each other in some partial order. However, in this framework (and in many others for finding patterns in event sequences), the ordering of events in an event sequence is the only temporal information allowed. But there are many applications where the events are not instantaneous; they have time durations. Interesting episodes that we want to discover may need to contain information regarding event durations. In this paper we extend Mannila et al.'s framework to tackle such issues. In our generalized formulation, episodes are defined so that much more temporal information about events can be incorporated into the structure of an episode. This significantly enhances the expressive capability of the rules that can be discovered in the frequent episode framework. We also present algorithms for discovering such generalized frequent episodes.
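As a rough illustration of what events with time durations could look like in code, the sketch below assumes one plausible representation (events carrying start and end times, episode nodes carrying admissible duration intervals); the authors' generalized formulation is not necessarily structured this way.

```python
# A minimal, assumed representation of durational events and episode nodes.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Event:
    etype: str
    start: float
    end: float

    @property
    def duration(self) -> float:
        return self.end - self.start

@dataclass
class EpisodeNode:
    etype: str
    allowed_durations: List[Tuple[float, float]]  # e.g. [(0, 5), (10, 20)]

    def matches(self, ev: Event) -> bool:
        # The node is instantiated only by an event of the right type whose
        # duration falls in one of the allowed intervals.
        return ev.etype == self.etype and any(
            lo <= ev.duration <= hi for lo, hi in self.allowed_durations)
```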
Abstract:
The financial crisis set off by the default of Lehman Brothers in 2008, which led to disastrous consequences for the global economy, has focused attention on regulation and pricing issues related to credit derivatives. Credit risk refers to the potential losses that can arise due to changes in the credit quality of financial instruments. These changes could be due to changes in ratings, market price (spread) or default on contractual obligations. Credit derivatives are financial instruments designed to mitigate the adverse impact that may arise due to credit risks. However, they also allow investors to take up purely speculative positions. In this article we provide a succinct introduction to the notions of credit risk and the credit derivatives market, and describe some of the important credit derivative products. There are two approaches to pricing credit derivatives, namely the structural and the reduced-form or intensity-based models. A crucial aspect of the modelling that we touch upon briefly in this article is the problem of calibration of these models. We hope to convey through this article the challenges that are inherent in credit risk modelling, the elegant mathematics and concepts that underlie some of the models, and the importance of understanding the limitations of the models.
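For readers unfamiliar with the two approaches, the block below gives the standard textbook form of each, not material specific to this article; the notation (asset value V_T, debt face value D, intensity lambda, default time tau) is generic.

```latex
% Structural (Merton-type) view: the firm defaults at maturity T if its
% asset value V_T falls below the face value D of its debt,
\[
  P(\text{default}) \;=\; P(V_T < D).
\]
% Reduced-form (intensity-based) view: default is the first jump of a point
% process with intensity \lambda(t), giving the survival probability
\[
  P(\tau > t) \;=\; \exp\!\Big(-\int_0^t \lambda(s)\,ds\Big),
\]
% which, together with an assumed recovery rate, is calibrated to quoted
% credit spreads.
```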
Abstract:
Data prefetchers identify and make use of any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve the performance of prefetchers. In addition to empirical evaluation, we use the information-theoretic metric entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they also help corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and that the best performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all the L2 accesses as prefetcher history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over Baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
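As an illustration of the kind of entropy measurement referred to above, the sketch below computes the empirical (zeroth-order) entropy of the delta stream derived from an access history. The addresses are hypothetical and the paper's exact entropy formulation may differ; the point is only that a lower entropy indicates a more regular, and hence more predictable, stream.

```python
import math
from collections import Counter

def delta_stream(addresses):
    # Successive differences of the addresses seen by the prefetcher.
    return [b - a for a, b in zip(addresses, addresses[1:])]

def entropy(symbols):
    # Empirical zeroth-order entropy of a symbol stream, in bits.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

primary_misses   = [0, 64, 192, 256, 448, 512]            # hypothetical addresses
extended_history = [0, 64, 128, 192, 256, 320, 448, 512]  # + hits / secondary misses

print(entropy(delta_stream(primary_misses)))    # higher: less regular deltas
print(entropy(delta_stream(extended_history)))  # lower: more regular deltas
```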
Abstract:
Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
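The sketch below illustrates, in a highly simplified form, the idea of partitioning a 2-D processor grid among nested domains in proportion to predicted workload. It is not the paper's allocation algorithm, ignores topology-aware mapping, and handles rounding naively (adequate for the example shown).

```python
# Split a P x Q processor grid into disjoint vertical strips whose widths are
# roughly proportional to each nested domain's predicted workload.
def partition_grid(P, Q, predicted_work):
    total = sum(predicted_work)
    strips, col = [], 0
    for i, w in enumerate(predicted_work):
        # The last strip absorbs rounding error so the whole grid is covered.
        width = Q - col if i == len(predicted_work) - 1 else max(1, round(Q * w / total))
        strips.append({"rows": (0, P - 1), "cols": (col, col + width - 1)})
        col += width
    return strips

# Three nested domains with predicted relative costs 5 : 3 : 2 on a 16 x 32 grid.
for strip in partition_grid(16, 32, [5, 3, 2]):
    print(strip)
```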
Abstract:
Software transactional memory (STM) is a promising programming paradigm for shared memory multithreaded programs. While STM offers the promise of being less error-prone and more programmer friendly compared to traditional lock-based synchronization, it also needs to be competitive in performance in order to be adopted in mainstream software. A major source of performance overhead in STM is transactional aborts. Conflict resolution and aborting a transaction typically happen at the transaction level, which has the advantage of being automatic and application agnostic. However, it has a substantial disadvantage: the STM declares the entire transaction as conflicting, aborts it and re-executes it fully, instead of partially re-executing only those parts of the transaction that were affected by the conflict. This "Re-execute Everything" approach has a significant adverse impact on STM performance. In order to mitigate the abort overheads, we propose a compiler-aided Selective Reconciliation STM (SR-STM) scheme, wherein certain transactional conflicts can be reconciled by performing partial re-execution of the transaction. Ours is a selective hybrid approach which uses compiler analysis to identify those data accesses which are legal and profitable candidates for reconciliation and applies partial re-execution only to these candidates, while other conflicting data accesses are handled by the default STM approach of abort and full re-execution. We describe the compiler analysis and code transformations required for supporting selective reconciliation. We find that SR-STM is effective in reducing transactional abort overheads, improving performance on a set of five STAMP benchmarks by 12.58% on average and by up to 22.34%.
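The toy sketch below is meant only to contrast the two behaviours described above: full abort-and-re-execute versus patching the result with a per-variable reconciliation closure. It is not the SR-STM design; the real scheme relies on compiler analysis and operates inside an STM runtime, whereas everything here (read_set, reconcilers, the commit-time check) is a hypothetical stand-in.

```python
def run_transaction(read_shared, read_set, full_body, reconcilers):
    """read_shared: function returning the current value of a shared variable.
    read_set:    names of shared variables the transaction reads.
    full_body:   computes the transaction's result from a snapshot of read_set.
    reconcilers: for *selected* variables, a closure that patches the result
                 using a fresher value instead of redoing all the work."""
    snapshot = {v: read_shared(v) for v in read_set}      # transactional reads
    result = full_body(snapshot)
    # Commit-time validation (greatly simplified): find reads that went stale.
    stale = [v for v in read_set if read_shared(v) != snapshot[v]]
    for v in stale:
        if v in reconcilers:                              # selective reconciliation
            result = reconcilers[v](result, read_shared(v))
        else:                                             # default: abort + full re-execution
            return run_transaction(read_shared, read_set, full_body, reconcilers)
    return result
```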
Abstract:
In a typical enterprise WLAN, a station has a choice of multiple access points to associate with. The default association policy is based on metrics such as Received Signal Strength (RSS) and “link quality” to choose a particular access point among many. Such an approach can lead to unequal load sharing and diminished system performance. We consider the RAT (Rate And Throughput) policy [1], which leads to better system performance. The RAT policy has been implemented on a home-grown centralized WLAN controller, ADWISER [2], and we demonstrate that the RAT policy indeed provides better system performance.
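The following sketch contrasts the two association policies in the simplest possible terms. The actual RAT metric is defined in [1]; the throughput estimate used here (PHY rate shared among the stations already associated with the access point) is a hypothetical stand-in.

```python
def associate_by_rss(aps):
    # Default policy: pick the access point with the strongest signal.
    return max(aps, key=lambda ap: ap["rss_dbm"])

def associate_by_rate_and_throughput(aps):
    # Load-aware policy: pick the AP offering the best expected share of its
    # rate if this station joins.
    return max(aps, key=lambda ap: ap["phy_rate_mbps"] / (ap["num_stations"] + 1))

aps = [
    {"id": "AP1", "rss_dbm": -40, "phy_rate_mbps": 54, "num_stations": 8},
    {"id": "AP2", "rss_dbm": -55, "phy_rate_mbps": 48, "num_stations": 1},
]
print(associate_by_rss(aps)["id"])                   # AP1 (strongest signal)
print(associate_by_rate_and_throughput(aps)["id"])   # AP2 (better expected share)
```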
Abstract:
Concentration of greenhouse gases (GHG) in the atmosphere has been increasing rapidly during the last century due to ever increasing anthropogenic activities, resulting in significant increases in the temperature of the Earth and causing global warming. Major sources of GHG are forests (due to human-induced land cover changes leading to deforestation), power generation (burning of fossil fuels), transportation (burning of fossil fuels), agriculture (livestock, farming, rice cultivation and burning of crop residues), water bodies (wetlands), industry and urban activities (building, construction, transport, solid and liquid waste). The aggregation of GHG (CO2 and non-CO2 gases), in terms of carbon dioxide equivalent (CO2e), indicates the GHG footprint. The GHG footprint is thus a measure of the impact of human activities on the environment in terms of the amount of greenhouse gases produced. This study focuses on accounting for the amounts of three important greenhouse gases, namely carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), and thereby developing the GHG footprint of the major cities in India. National GHG inventories have been used for quantification of sector-wise greenhouse gas emissions. Country-specific emission factors are used wherever they are available; default emission factors from the IPCC guidelines are used when no country-specific emission factors exist. The emission of each greenhouse gas is estimated by multiplying the fuel consumption by the corresponding emission factor. The current study estimates the GHG footprint, or GHG emissions (in terms of CO2 equivalent), for major Indian cities and explores the linkages with population and GDP. The GHG footprints (aggregated carbon dioxide equivalent emissions of GHGs) of Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad are found to be 38,633.2 Gg, 22,783.08 Gg, 14,812.10 Gg, 22,090.55 Gg, 19,796.5 Gg, 13,734.59 Gg and 9,124.45 Gg CO2 eq., respectively. The major contributing sectors are the transportation sector (contributing 32%, 17.4%, 13.3%, 19.5%, 43.5%, 56.86% and 25%), the domestic sector (contributing 30.26%, 37.2%, 42.78%, 39%, 21.6%, 17.05% and 27.9%) and the industrial sector (contributing 7.9%, 7.9%, 17.66%, 20.25%, 12.31%, 11.38% and 22.41%) of the total emissions in Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad, respectively. Chennai emits 4.79 t of CO2 equivalent emissions per capita, the highest among all the cities, followed by Kolkata, which emits 3.29 t of CO2 equivalent emissions per capita. Chennai also emits the highest CO2 equivalent emissions per unit of GDP (2.55 t CO2 eq./Lakh Rs.), followed by Greater Bangalore, which emits 2.18 t CO2 eq./Lakh Rs. (C) 2015 Elsevier Ltd. All rights reserved.
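A worked toy example of the accounting rule stated above (emission = activity or fuel consumption x emission factor, aggregated to CO2 equivalent via global warming potentials). All activity levels and emission factors below are hypothetical; the GWP values are the commonly used IPCC 100-year figures (CH4 = 25, N2O = 298).

```python
# 100-year global warming potentials (IPCC AR4 values) for CO2-equivalent aggregation.
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(activity, emission_factors):
    """activity: {sector: fuel consumption or activity level}
    emission_factors: {sector: {gas: emission factor per unit of activity}}"""
    total = 0.0
    for sector, amount in activity.items():
        for gas, ef in emission_factors[sector].items():
            total += amount * ef * GWP[gas]      # emission of the gas x its GWP
    return total

# Hypothetical city with two sectors (units chosen for illustration only).
activity = {"transport": 1_000.0, "domestic": 400.0}
emission_factors = {
    "transport": {"CO2": 2.3, "CH4": 0.0004, "N2O": 0.00003},
    "domestic":  {"CO2": 1.9, "CH4": 0.0060, "N2O": 0.00002},
}
print(co2_equivalent(activity, emission_factors))   # total footprint in CO2e
```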
Abstract:
In this paper, we propose a new state transition based embedding (STBE) technique for audio watermarking with high fidelity. Furthermore, we propose a new correlation based encoding (CBE) scheme for the binary logo image in order to enhance the payload capacity. The result of CBE is also compared with standard run-length encoding (RLE) compression and Huffman schemes. Most watermarking algorithms are based on modulating a selected transform domain feature of an audio segment in order to embed a given watermark bit. In the proposed STBE method, instead of modulating the feature of each and every segment to embed data, our aim is to retain the default value of this feature for most of the segments. Thus, a high quality of watermarked audio is maintained. Here, the difference between the mean values (Mdiff) of insignificant complex cepstrum transform (CCT) coefficients of down-sampled subsets is selected as a robust feature for embedding. The Mdiff values of the frames are changed only when certain conditions are met. Hence, almost 50% of the time, segments are not changed, and STBE can still convey the watermark information at the receiver side. STBE also exhibits a partial restoration feature by which the watermarked audio can be restored partially after extraction of the watermark at the detector side. Psychoacoustic model analysis showed that the noise-masking ratio (NMR) of our system is less than -10 dB. As amplitude scaling in the time domain does not affect the selected insignificant CCT coefficients, strong invariance towards amplitude scaling attacks is also proved theoretically. Experimental results reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously robust to common attacks such as MP3 compression, amplitude scaling, additive noise, re-quantization, etc.
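The sketch below illustrates only the state-transition idea (modify a segment only when its current state does not already encode the bit to be embedded, so most segments keep their default feature value). The real STBE feature is the Mdiff of selected CCT coefficients; here an abstract per-segment feature value and a hypothetical threshold stand in for it.

```python
THRESHOLD = 0.0   # hypothetical decision boundary on the per-segment feature

def segment_state(feature_value):
    # The "state" of a segment is which side of the threshold its feature lies on.
    return 1 if feature_value > THRESHOLD else 0

def embed(features, bits, delta=0.1):
    """features: per-segment feature values; bits: watermark bits (0/1)."""
    out, changed = [], 0
    for f, b in zip(features, bits):
        if segment_state(f) == b:
            out.append(f)                             # state already encodes the bit
        else:
            out.append(delta if b == 1 else -delta)   # force the required state
            changed += 1
    return out, changed

def extract(features):
    # The detector recovers the bits purely from the segment states.
    return [segment_state(f) for f in features]
```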
Abstract:
In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed point equations, we derive explicit conditions on the packet generation rates at the sources so that the tree network approximately provides certain quality of service (QoS) guarantees, such as end-to-end delivery probability and mean delay. The structure of our conditions provides insight into the dependence of the network performance on the arrival rate vector and the topological properties of the tree network. Our numerical experiments suggest that our approximations capture a significant part of the QoS-aware throughput region (of a tree network), which is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and a range of values of target QoS, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput. (C) 2015 Elsevier B.V. All rights reserved.
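Related to the last result, the sketch below builds a hop-count shortest path tree rooted at the sink using breadth-first search on a hypothetical topology; the QoS conditions themselves come from the paper's fixed point analysis and are not reproduced here.

```python
from collections import deque

def shortest_path_tree(adjacency, sink):
    """adjacency: {node: iterable of neighbours}; returns {node: parent},
    i.e. the node each source/relay forwards its packets to."""
    parent, frontier = {sink: None}, deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in adjacency[u]:
            if v not in parent:        # first time reached = fewest hops to the sink
                parent[v] = u
                frontier.append(v)
    return parent

adjacency = {                          # hypothetical sources (s*), relays (r*) and sink
    "sink": ["r1", "r2"], "r1": ["sink", "s1", "s2"],
    "r2": ["sink", "s2"], "s1": ["r1"], "s2": ["r1", "r2"],
}
print(shortest_path_tree(adjacency, "sink"))
```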
Abstract:
A Circular Economy (CE) values material, whether technical or biological, as nutrient. CE thinking seeks to accelerate the conversion of technical nutrient cycles along the lines of biological nutrient cycles by re-designing systems up to the scale of the economy. Though the notion of products as technical nutrients exists, it has not been contextually situated as an outcome of design intent. One objective of this article is to situate design and the nutrient cycles of the earth system as, and within, natural cycles. This situating emphasizes the mechanism by which design affects nutrient availability to vital earth systems and draws attention to the functions that nutrients afford and serve by default before being embodied in products by human intent. The first principle of CE seeks to eliminate waste and re-purpose nutrients with minimal energy. Towards this, the historic trend in the perception of waste is traced and Gestalts are identified to arrive at the concept of tenancy and inform design. Tenancy is defined as the duration for which the embodied nutrient serves some purpose. Identifying the 6R scenarios as nutrient re-purposing functions, corresponding design strategies are stated.
Abstract:
Contents: Editorial -- Interaction between a strategic mass media firm and a government / Julián Alberto Batista -- Políticas proteccionistas de la Argentina desde 2003: del auge a la decadencia económica / Julio J. Nogués -- Lidiando con las estadísticas internacionales en las ciencias sociales / José María Dagnino Pastore; Luis María Libonatti -- La relación virtuosa de la seguridad y la inversión extranjera directa en Colombia (1994-2013) / Catalina Gómez Toro -- Estimación de la probabilidad de default: un modelo probit para los bancos argentinos / Felipe Klein -- Aproximación a las causas de la desigualdad económica / Maximiliano Mozetic -- Reviews
Abstract:
"Una economía en crecimiento: la década del ‘2000. A partir de la crisis de 2001, una serie de medidas como la fuerte devaluación del peso, la pesificación asimétrica, la pesificación de los contratos de empresas de servicios públicos y congelamiento de tarifas, y el default de la deuda pública fueron dando forma a una nueva organización económica. Un tipo de cambio muy depreciado y la sorprendente suba del precio de los commodities permitieron una recuperación de la producción y del crecimiento, si bien las exportaciones crecieron por tirón de demanda externa, particularmente desde Asia Pacífico. Otra medida fue reinstalar las retenciones a las exportaciones (2002) y se realizó el canje de deuda pública (2005). Se generaron los superávit gemelos que otorgaron credibilidad para las decisiones de inversión. La inflación se mantuvo controlada, sin passing through relevante y sin indexaciones salariales. La recuperación del empleo privado por la reactivación y la parcial sustitución de importaciones, más el efecto competitividad del tipo de cambio depreciado, junto a los planes sociales mejoraron la situación social..."
Abstract:
Report of Opening Session (pdf 58 KB)
Report of Governing Council Meeting (pdf 244 KB)
Report of 2003 interim Governing Council meeting
Tenth Anniversary PICES Organization Review
Report of the Finance and Administration Committee (pdf 102 KB)
2002 Auditor's report to the Organization
Review of PICES Publication Program
Reports of Science Board and Committees:
Science Board/Governing Council interim meeting (pdf 81 KB)
Science Board (pdf 95 KB)
Study Group on PICES Capacity Building
Biological Oceanography Committee (pdf 65 KB)
Advisory Panel on Micronekton sampling gear intercalibration experiment
Advisory Panel on Marine birds and mammals
Fishery Science Committee (pdf 41 KB)
Working Group 16 on Climate change, shifts to fish production, and fisheries management
Marine Environmental Quality Committee (pdf 76 KB)
Working Group 15 on Ecology of Harmful Algal Blooms (HABs) in the North Pacific
Physical Oceanography and Climate Committee (pdf 70 KB)
Working Group 17 on Biogeochemical data integration and synthesis
Advisory Panel on North Pacific Data Buoy
Technical Committee on Data Exchange (pdf 32 KB)
Implementation Panel on the CCCC Program (pdf 64 KB)
Nemuro Experimental Planning Team (NEXT)
BASS Task Team (pdf 35 KB)
Advisory Panel on Iron Fertilization Experiment
MODEL Task Team (pdf 29 KB)
MONITOR Task Team (pdf 30 KB)
REX Task Team (pdf 25 KB)
Documenting Scientific Sessions (pdf 164 KB)
List of Participants (pdf 60 KB)
List of Acronyms (pdf 21 KB)
Abstract:
Report of Opening Session (pdf 51 KB)
Report of Governing Council Meeting (pdf 136 KB)
Report of the Finance and Administration Committee (pdf 48 KB)
Reports of Science Board and Committees:
Science Board (pdf 71 KB)
Biological Oceanography Committee (pdf 66 KB)
Working Group 14: Effective sampling of micronekton
Marine Birds and Mammals Advisory Panel
Fishery Science Committee (pdf 36 KB)
Working Group 16: Climate change, shifts to fish production, and fisheries management
Marine Environmental Quality Committee (pdf 39 KB)
Working Group 15: Ecology of Harmful Algal Blooms (HABs) in the North Pacific
Physical Oceanography and Climate Committee (pdf 49 KB)
North Pacific Data Buoy Advisory Panel
Working Group 17: Biogeochemical data integration and synthesis
Technical Committee on Data Exchange (pdf 29 KB)
Implementation Panel on the CCCC Program (pdf 43 KB)
BASS Task Team (pdf 30 KB)
Iron Fertilization Experiment Advisory Panel
MODEL Task Team (pdf 28 KB)
MONITOR Task Team (pdf 34 KB)
Summary of Continuous Plankton Recorder activities in 2002
REX Task Team (pdf 21 KB)
Documenting Scientific Sessions (pdf 140 KB)
List of Participants (pdf 59 KB)
List of Acronyms (pdf 21 KB)