32 results for Default penalties


Relevance:

10.00%

Publisher:

Abstract:

A radical cyclization based methodology has been applied to the formal total synthesis of (+/-)-enterolactone (1), the first lignan isolated from a human source. Bromoacetalization of the cinnamyl alcohols 7 and 13 using ethyl vinyl ether and NBS generated the bromoacetals 8 and 15. The 5-exo-trig radical cyclization of the bromoacetals 8 and 15 with in situ generated catalytic tri-n-butyltin hydride and AIBN furnished a 3 : 2 diastereomeric mixture of the cyclic acetals 9 and 16. Sonochemically accelerated Jones oxidation of the cyclic acetals 9 and 16 yielded the gamma-butyrolactones 10 and 12, completing the formal total synthesis of (+/-)-enterolactone. Alternatively, radical cyclization of the bromoacetate 17 furnished a 1 : 2 mixture of the lactone 10 and the reduced product 18.

Relevance:

10.00%

Publisher:

Abstract:

With the emergence of the Internet, the global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools, such as Gopher, WAIS and the WWW, for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored in computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once the information is published on the Web, a user can access it from any part of the world. A Web browser like Netscape or Internet Explorer is used as a common user interface for accessing information and databases, which greatly relieves a user from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making a CDS/ISIS database accessible on the Internet/Web, including 1) the WAIS_ISIS server, 2) the WWWISIS server and 3) the IQUERY server. In this tutorial, we explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS. This software is developed, maintained and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports searching, formatting and data entry operations over CDS/ISIS databases. WWWISIS is available for various operating systems. We have tested this software on Windows 95, Windows NT and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using the IISc main library's OPAC containing more than 80,000 records and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or one of its variants, this compatibility is not guaranteed; it is therefore safer to recreate the master and inverted files under the Unix environment using utilities provided by BIREME.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads using enhanced distributed channel access (EDCA). We build upon an existing fixed-point analysis and its performance insights. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from the saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
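
As a rough illustration of the saturation fixed point that the heuristic reuses, the sketch below iterates a Bianchi-style fixed point for n saturated nodes; the backoff parameters (cw_min, m) and the damping are illustrative assumptions, not the paper's AC-specific EDCA settings or exact equations.

```python
# Hedged sketch: Bianchi-style fixed point for the saturation attempt
# probability of n contending nodes. cw_min, m and the damping factor are
# illustrative assumptions, not the paper's EDCA (AC 1 / AC 3) settings.

def saturation_attempt_probability(n, cw_min=32, m=5, iters=2000):
    """Return (tau, p): per-slot attempt and conditional collision probabilities."""
    w = cw_min
    tau = 0.1                                     # initial guess
    p = 0.0
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # collision prob. seen by a node
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w + 1) + p * w * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new           # damped update for stability
    return tau, p

if __name__ == "__main__":
    for n in (2, 5, 10):
        tau, p = saturation_attempt_probability(n)
        print(f"n={n}: attempt prob ~ {tau:.4f}, collision prob ~ {p:.4f}")
```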

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency counting algorithms for speeding up this part. These, referred to as non-overlapping and non-interleaved frequency counts, are based on directly counting suitable subsets of the occurrences of an episode. Hence they differ from the frequency count of Mannila et al. [1], which counts the number of windows in which the episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations as well.
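
As a simplified illustration of occurrence-based counting, the sketch below counts non-overlapped occurrences of a serial episode (events appearing in a fixed order) with a single greedy left-to-right scan; it is only a toy version of the idea, not the authors' automaton-based algorithms.

```python
# Hedged sketch: counting non-overlapped occurrences of a *serial* episode
# (events must appear in order). Occurrences are counted only if their time
# spans do not overlap, in contrast to window-based counting.

def count_non_overlapped(episode, event_sequence):
    """episode: tuple of event types, e.g. ('A', 'B', 'C')
    event_sequence: iterable of (event_type, time) pairs sorted by time."""
    count = 0
    pos = 0  # index of the next episode event we are waiting for
    for event_type, _time in event_sequence:
        if event_type == episode[pos]:
            pos += 1
            if pos == len(episode):   # one complete occurrence found
                count += 1
                pos = 0               # start looking for a fresh, disjoint one
    return count

if __name__ == "__main__":
    seq = [('A', 1), ('B', 2), ('A', 3), ('C', 4), ('B', 5), ('C', 6)]
    print(count_non_overlapped(('A', 'B', 'C'), seq))   # -> 1
```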

Relevance:

10.00%

Publisher:

Abstract:

Discovering patterns in temporal data is an important task in data mining. A successful method for this was proposed by Mannila et al. [1] in 1997. In their framework, mining for temporal patterns in a database of sequences of events is done by discovering the so-called frequent episodes. These episodes characterize interesting collections of events occurring relatively close to each other in some partial order. However, in this framework (and in many others for finding patterns in event sequences), the ordering of events in an event sequence is the only temporal information allowed. There are many applications where the events are not instantaneous; they have time durations, and interesting episodes that we want to discover may need to contain information regarding these durations. In this paper we extend Mannila et al.'s framework to tackle such issues. In our generalized formulation, episodes are defined so that much more temporal information about events can be incorporated into the structure of an episode. This significantly enhances the expressive capability of the rules that can be discovered in the frequent episode framework. We also present algorithms for discovering such generalized frequent episodes.
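
The sketch below shows one hypothetical way an episode node could carry a permitted duration interval once events are no longer instantaneous; the class and field names are illustrative and are not taken from the paper.

```python
# Hedged sketch: attaching duration information to episode nodes so that an
# occurrence counts only if each event's duration falls in an allowed range.
# Names and structure are illustrative assumptions, not the paper's notation.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class TimedEvent:
    etype: str
    start: float
    end: float          # events are no longer instantaneous

    @property
    def duration(self) -> float:
        return self.end - self.start

@dataclass
class EpisodeNode:
    etype: str
    dur_range: Tuple[float, float]   # (min, max) allowed duration

def matches(node: EpisodeNode, ev: TimedEvent) -> bool:
    lo, hi = node.dur_range
    return ev.etype == node.etype and lo <= ev.duration <= hi

if __name__ == "__main__":
    node = EpisodeNode("ALARM", (2.0, 10.0))
    print(matches(node, TimedEvent("ALARM", 0.0, 5.0)))   # True
    print(matches(node, TimedEvent("ALARM", 0.0, 0.5)))   # False
```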

Relevance:

10.00%

Publisher:

Abstract:

The financial crisis set off by the default of Lehman Brothers in 2008, which led to disastrous consequences for the global economy, has focused attention on regulation and pricing issues related to credit derivatives. Credit risk refers to the potential losses that can arise due to changes in the credit quality of financial instruments. These changes could be due to changes in ratings, market price (spread) or default on contractual obligations. Credit derivatives are financial instruments designed to mitigate the adverse impact that may arise due to credit risks. However, they also allow investors to take up purely speculative positions. In this article we provide a succinct introduction to the notions of credit risk and the credit derivatives market and describe some of the important credit derivative products. There are two approaches to pricing credit derivatives, namely the structural and the reduced-form (intensity-based) models. A crucial aspect of the modelling that we touch upon briefly in this article is the problem of calibration of these models. We hope to convey through this article the challenges that are inherent in credit risk modelling, the elegant mathematics and concepts that underlie some of the models, and the importance of understanding the limitations of the models.
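
As a flavour of the reduced-form (intensity-based) approach mentioned here, the sketch below assumes a constant default intensity and continuously paid premiums, which collapses the par CDS spread to the classic "credit triangle"; it is a textbook simplification, not the article's models or calibration procedure.

```python
# Hedged sketch: constant-intensity reduced-form calculation. The hazard rate
# and recovery below are illustrative; real pricing calibrates intensities to
# market CDS quotes and uses discount curves.

import math

def survival_probability(lam: float, t: float) -> float:
    """P(no default by time t) under a constant hazard rate lam."""
    return math.exp(-lam * t)

def par_cds_spread(lam: float, recovery: float) -> float:
    """With continuous premiums and constant intensity, the par spread
    reduces to the credit triangle: s = lam * (1 - R)."""
    return lam * (1.0 - recovery)

if __name__ == "__main__":
    lam, R = 0.02, 0.4                       # 2% hazard rate, 40% recovery
    print(f"5y survival prob: {survival_probability(lam, 5):.4f}")
    print(f"par spread: {par_cds_spread(lam, R) * 1e4:.0f} bps")  # ~120 bps
```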

Relevance:

10.00%

Publisher:

Abstract:

Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve the performance of prefetchers. In addition to empirical evaluation, we use entropy, an information-theoretic metric, to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they help corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta correlation prefetchers. We observe that different prefetch schemes benefit to a different extent from extended histories and alternative trigger points, and the best performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all L2 accesses as the prefetcher's history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
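
A minimal sketch of a delta-correlation prefetcher is shown below: it can be trained on primary misses only or, as the paper advocates, on an extended history of all L2 accesses, simply by changing which addresses are fed to access(). The history length and prefetch degree are illustrative parameters, not values from the paper.

```python
# Hedged sketch: a simplified delta-correlation predictor. It records recent
# address deltas and, when the last two deltas reappear earlier in the
# history, replays the deltas that followed them as prefetch candidates.

class DeltaCorrelationPrefetcher:
    def __init__(self, history_len=16, degree=2):
        self.history_len = history_len
        self.degree = degree
        self.last_addr = None
        self.deltas = []                    # most recent delta last

    def access(self, addr):
        """Record one training access (a primary miss, or any L2 access in
        the extended-history scheme) and return prefetch addresses."""
        prefetches = []
        if self.last_addr is not None:
            self.deltas.append(addr - self.last_addr)
            self.deltas = self.deltas[-self.history_len:]
        self.last_addr = addr

        if len(self.deltas) >= 2:
            pair = (self.deltas[-2], self.deltas[-1])
            # search older history for the same delta pair
            for i in range(len(self.deltas) - 3, 0, -1):
                if (self.deltas[i - 1], self.deltas[i]) == pair:
                    next_addr = addr
                    for d in self.deltas[i + 1:i + 1 + self.degree]:
                        next_addr += d
                        prefetches.append(next_addr)
                    break
        return prefetches

if __name__ == "__main__":
    pf = DeltaCorrelationPrefetcher()
    for a in [0, 8, 24, 32, 48, 56, 72]:    # repeating delta pattern 8, 16
        print(a, pf.access(a))
```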

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we determine packet scheduling policies for efficient power management in Energy Harvesting Sensors (EHS), which have to transmit packets of high and low priorities over a fading channel. We assume that incoming packets are stored in a buffer and that the quality of service for a particular type of message is determined by the expected waiting time of packets of that type. The sensors are constrained to work with the energy that they garner from the environment. We derive transmit policies which minimize the sum of the expected waiting times of the two types of messages, weighted by penalties. First, we show that for schemes with a constant rate of transmission, under a decoupling approximation, a form of truncated channel inversion is optimal. Using this result, we derive optimal solutions that minimize the weighted sum of the waiting times in the different queues.
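
A minimal sketch of truncated channel inversion for a constant-rate scheme is given below; the target SNR, noise power and fading cutoff are illustrative assumptions, not the paper's notation or its optimal thresholds.

```python
# Hedged sketch: truncated channel inversion for a fixed transmission rate.
# The sensor inverts the channel (so the received SNR hits a target) only
# when the fading gain is above a cutoff; otherwise it stays silent and
# conserves harvested energy. All parameter values are illustrative.

def tx_power(channel_gain, target_snr=10.0, noise_power=1.0, cutoff_gain=0.1):
    """Return the transmit power for the current fading state."""
    if channel_gain < cutoff_gain:
        return 0.0                                    # truncation: skip deep fades
    return target_snr * noise_power / channel_gain    # invert the channel

if __name__ == "__main__":
    for g in (0.05, 0.2, 1.0, 4.0):
        print(f"gain={g}: power={tx_power(g):.2f}")
```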

Relevance:

10.00%

Publisher:

Abstract:

Mobile ad hoc networks (MANETs) are one of the successful wireless network paradigms, offering unrestricted mobility without depending on any underlying infrastructure. MANETs have become an exciting and important technology in recent years because of the rapid proliferation of a variety of wireless devices and the increased use of ad hoc networks in various applications. Like any other network, MANETs are prone to a variety of attacks, mainly on the routing side. Most of the proposed secure routing solutions based on cryptography and authentication methods have high overhead, which results in latency problems and resource crunches, especially in terms of energy. The successful working of these mechanisms also depends on secure key management involving a trusted third authority, which is generally difficult to implement in a MANET environment due to its volatile topology. Designing a secure routing algorithm for MANETs which incorporates the notion of trust without maintaining any trusted third entity has been an interesting research problem in recent years. This paper proposes a new trust model based on cognitive reasoning, which associates the notion of trust with all the member nodes of a MANET using a novel Behaviors-Observations-Beliefs (BOB) model. These trust values are used for detection and prevention of malicious and dishonest nodes while routing the data. The proposed trust model works with the DTM-DSR protocol, which involves computation of direct trust between any two nodes using cognitive knowledge. We take care of trust fading over time, rewards, and penalties while computing the trustworthiness of a node and also of a route. A simulator was developed for testing the proposed algorithm; the experimental results show that incorporating cognitive reasoning into the computation of trust for routing effectively detects intrusions in a MANET environment and generates more reliable routes for the secure routing of data.
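
The sketch below illustrates, generically, the kind of direct-trust bookkeeping described here: trust fades toward a neutral value over time and is adjusted by rewards and penalties for observed behaviour. The update rule and constants are assumptions for illustration, not the BOB model's equations.

```python
# Hedged sketch: generic direct-trust update with time fading, rewards and
# penalties. The half-life, reward/penalty sizes and the decay-toward-0.5
# rule are illustrative assumptions, not the paper's BOB equations.

import time

class TrustRecord:
    def __init__(self, initial=0.5, fade_half_life=300.0,
                 reward=0.05, penalty=0.20):
        self.value = initial                  # trust kept in [0, 1]
        self.fade_half_life = fade_half_life  # seconds for trust to fade halfway
        self.reward = reward
        self.penalty = penalty
        self.last_update = time.time()

    def _fade(self, now):
        """Decay trust toward the neutral value 0.5 as time passes."""
        dt = now - self.last_update
        decay = 0.5 ** (dt / self.fade_half_life)
        self.value = 0.5 + (self.value - 0.5) * decay
        self.last_update = now

    def observe(self, behaved_well, now=None):
        """Update trust after observing one forwarding/routing behaviour."""
        now = time.time() if now is None else now
        self._fade(now)
        delta = self.reward if behaved_well else -self.penalty
        self.value = min(1.0, max(0.0, self.value + delta))
        return self.value

if __name__ == "__main__":
    t = TrustRecord()
    print(t.observe(True))    # good behaviour nudges trust up
    print(t.observe(False))   # misbehaviour is penalised more heavily
```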

Relevance:

10.00%

Publisher:

Abstract:

A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include quadratic (l(2)), absolute (l(1)), Cauchy, and Geman-McClure. The regularization parameter in each of these cases was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other using quantitative metrics such as relative error and Pearson correlation. The reconstruction results indicate that, while the quadratic penalty may be able to provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as l(1), Cauchy, and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best. (C) 2013 Optical Society of America
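
For reference, the sketch below writes out textbook forms of the four penalty functions named above as functions of a coefficient/residual r; the scale parameter c is illustrative, and the actual reconstructions embed these penalties inside a regularized inverse problem with the regularization parameter chosen by generalized cross-validation.

```python
# Hedged sketch: textbook forms of the penalties named in the abstract,
# evaluated pointwise on a residual r. Scale parameter c is illustrative.

import numpy as np

def quadratic(r):                 # l2 penalty
    return r ** 2

def absolute(r):                  # l1, sparsity-promoting
    return np.abs(r)

def cauchy(r, c=1.0):             # heavy-tailed, sparsity-promoting
    return (c ** 2 / 2.0) * np.log1p((r / c) ** 2)

def geman_mcclure(r, c=1.0):      # bounded, strongly contrast/edge preserving
    return r ** 2 / (r ** 2 + c ** 2)

if __name__ == "__main__":
    r = np.linspace(-3, 3, 7)
    for name, f in [("l2", quadratic), ("l1", absolute),
                    ("cauchy", cauchy), ("geman-mcclure", geman_mcclure)]:
        print(name, np.round(f(r), 3))
```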

Relevance:

10.00%

Publisher:

Abstract:

Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
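
A toy sketch of the allocation idea is given below: the 2-D processor grid is split into disjoint rectangular regions whose widths are proportional to each domain's predicted cost. The proportional rule is a stand-in for the paper's performance-prediction-driven allocation and ignores topology-aware mapping.

```python
# Hedged sketch: proportional rectangular partitioning of a 2-D processor
# grid among nested domains. The allocation rule is illustrative only.

def partition_grid(rows, cols, predicted_costs):
    """Return a list of (row_range, col_range) rectangles, one per domain."""
    total = sum(predicted_costs)
    rects, col = [], 0
    for i, cost in enumerate(predicted_costs):
        if i == len(predicted_costs) - 1:
            width = cols - col                 # give the remainder to the last domain
        else:
            width = max(1, round(cols * cost / total))
        rects.append(((0, rows), (col, col + width)))
        col += width
    return rects

if __name__ == "__main__":
    # e.g. an 8x16 grid shared by three nests with relative costs 4:3:1
    for rect in partition_grid(8, 16, [4, 3, 1]):
        print(rect)
```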

Relevance:

10.00%

Publisher:

Abstract:

Software transactional memory (STM) is a promising programming paradigm for shared-memory multithreaded programs. While STM offers the promise of being less error-prone and more programmer-friendly compared to traditional lock-based synchronization, it also needs to be competitive in performance in order to be adopted in mainstream software. A major source of performance overhead in STM is transactional aborts. Conflict resolution and aborting a transaction typically happen at the transaction level, which has the advantage of being automatic and application-agnostic. However, it has a substantial disadvantage: the STM declares the entire transaction as conflicting, aborts it and re-executes it fully, instead of partially re-executing only those parts of the transaction which have been affected by the conflict. This "re-execute everything" approach has a significant adverse impact on STM performance. In order to mitigate the abort overheads, we propose a compiler-aided Selective Reconciliation STM (SR-STM) scheme, wherein certain transactional conflicts can be reconciled by performing partial re-execution of the transaction. Ours is a selective hybrid approach which uses compiler analysis to identify those data accesses which are legal and profitable candidates for reconciliation and applies partial re-execution only to these candidates, while other conflicting data accesses are handled by the default STM approach of abort and full re-execution. We describe the compiler analysis and code transformations required for supporting selective reconciliation. We find that SR-STM is effective in reducing the transactional abort overheads, improving the performance for a set of five STAMP benchmarks by 12.58% on average and up to 22.34%.

Relevance:

10.00%

Publisher:

Abstract:

In a typical enterprise WLAN, a station has a choice of multiple access points to associate with. The default association policy chooses a particular access point among many based on metrics such as Received Signal Strength (RSS) and "link quality". Such an approach can lead to unequal load sharing and diminished system performance. We consider the RAT (Rate And Throughput) policy [1], which leads to better system performance. The RAT policy has been implemented on a home-grown centralized WLAN controller, ADWISER [2], and we demonstrate that the RAT policy indeed provides better system performance.

Relevance:

10.00%

Publisher:

Abstract:

The concentration of greenhouse gases (GHG) in the atmosphere has been increasing rapidly during the last century due to ever-increasing anthropogenic activities, resulting in significant increases in the temperature of the Earth and causing global warming. Major sources of GHG are forests (due to human-induced land cover changes leading to deforestation), power generation (burning of fossil fuels), transportation (burning of fossil fuels), agriculture (livestock, farming, rice cultivation and burning of crop residues), water bodies (wetlands), industry and urban activities (building, construction, transport, solid and liquid waste). Aggregation of GHG (CO2 and non-CO2 gases), in terms of carbon dioxide equivalent (CO2e), indicates the GHG footprint. The GHG footprint is thus a measure of the impact of human activities on the environment in terms of the amount of greenhouse gases produced. This study focuses on accounting for the amounts of three important greenhouse gases, namely carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), and thereby developing the GHG footprint of the major cities in India. National GHG inventories have been used for quantification of sector-wise greenhouse gas emissions. Country-specific emission factors are used wherever they are available; default emission factors from the IPCC guidelines are used when there are no country-specific emission factors. The emission of each greenhouse gas is estimated by multiplying fuel consumption by the corresponding emission factor. The current study estimates the GHG footprint or GHG emissions (in terms of CO2 equivalent) for major Indian cities and explores the linkages with population and GDP. The GHG footprints (aggregated carbon dioxide equivalent emissions of GHGs) of Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad are found to be 38,633.2 Gg, 22,783.08 Gg, 14,812.10 Gg, 22,090.55 Gg, 19,796.5 Gg, 13,734.59 Gg and 9,124.45 Gg CO2 eq., respectively. The major contributing sectors are the transportation sector (contributing 32%, 17.4%, 13.3%, 19.5%, 43.5%, 56.86% and 25%), the domestic sector (contributing 30.26%, 37.2%, 42.78%, 39%, 21.6%, 17.05% and 27.9%) and the industrial sector (contributing 7.9%, 7.9%, 17.66%, 20.25%, 12.31%, 11.38% and 22.41%) of the total emissions in Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad, respectively. Chennai emits 4.79 t of CO2 equivalent emissions per capita, the highest among all the cities, followed by Kolkata, which emits 3.29 t of CO2 equivalent emissions per capita. Chennai also emits the highest CO2 equivalent emissions per unit GDP (2.55 t CO2 eq./Lakh Rs.), followed by Greater Bangalore, which emits 2.18 t CO2 eq./Lakh Rs. (C) 2015 Elsevier Ltd. All rights reserved.
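
The inventory arithmetic described above is, in essence, activity data multiplied by an emission factor, with non-CO2 gases aggregated into CO2 equivalent via global warming potentials; the sketch below shows this with made-up activity and factor values (the GWPs of 25 for CH4 and 298 for N2O follow IPCC AR4).

```python
# Hedged sketch: emission = activity (fuel consumed) x emission factor, then
# aggregation to CO2-equivalent via global warming potentials. The fuel
# quantity and emission factors below are illustrative, not the study's data.

GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}   # IPCC AR4 100-year GWPs

def sector_co2e(activity, emission_factors):
    """activity: fuel consumed (e.g. TJ); emission_factors: gas -> t per TJ."""
    return sum(activity * ef * GWP[gas] for gas, ef in emission_factors.items())

if __name__ == "__main__":
    # hypothetical transport-sector example
    diesel_tj = 1000.0
    factors = {"CO2": 74.1, "CH4": 0.0039, "N2O": 0.0039}   # t per TJ (illustrative)
    print(f"transport CO2e: {sector_co2e(diesel_tj, factors):,.1f} t")
```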

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose a new state transition based embedding (STBE) technique for audio watermarking with high fidelity. Furthermore, we propose a new correlation based encoding (CBE) scheme for the binary logo image in order to enhance the payload capacity. The result of CBE is also compared with standard run-length encoding (RLE) compression and Huffman schemes. Most watermarking algorithms are based on modulating a selected transform-domain feature of an audio segment in order to embed a given watermark bit. In the proposed STBE method, instead of modulating the feature of each and every segment to embed data, our aim is to retain the default value of this feature for most of the segments; thus, a high quality of the watermarked audio is maintained. Here, the difference between the mean values (Mdiff) of insignificant complex cepstrum transform (CCT) coefficients of down-sampled subsets is selected as a robust feature for embedding. The Mdiff values of the frames are changed only when certain conditions are met. Hence, almost 50% of the time, segments are not changed and STBE can still convey the watermark information to the receiver side. STBE also exhibits a partial restoration feature by which the watermarked audio can be partially restored after extraction of the watermark at the detector side. Psychoacoustic model analysis showed that the noise-masking ratio (NMR) of our system is less than -10 dB. As amplitude scaling in the time domain does not affect the selected insignificant CCT coefficients, strong invariance towards amplitude scaling attacks is also proved theoretically. Experimental results reveal that the proposed watermarking scheme maintains high audio quality and is simultaneously robust to general attacks like MP3 compression, amplitude scaling, additive noise, re-quantization, etc.