11 results for Dearly, Max (1874-1943)
at Indian Institute of Science - Bangalore - India
Abstract:
We have observed exchange spring behavior in a soft (Fe3O4)-hard (BaCa2Fe16O27) ferrite composite by tailoring the particle size of the individual phases and by suitable thermal treatment of the composite. The magnetization curve for the nanocomposite heated at 800 °C shows single-loop hysteresis, indicating the existence of the exchange spring phenomenon in the composite and an enhancement of 13% in (BH)_max compared to the parent hard ferrite (BaCa2Fe16O27). The Henkel plot provides proof of the presence of exchange interaction between the soft and hard grains as well as its dominance over the dipolar interaction in the nanocomposite.
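For context, the Henkel (δM) analysis referenced above is commonly evaluated as δM(H) = m_d(H) − [1 − 2 m_r(H)], where m_r and m_d are the normalized isothermal and DC-demagnetization remanence curves; positive δM is usually read as exchange-coupling dominance and negative δM as dipolar dominance. The sketch below is a minimal illustration of that standard computation; the array names and data are assumptions, not the authors' measurements.

```python
import numpy as np

def henkel_delta_m(m_r, m_d):
    """delta-M curve: deltaM(H) = m_d(H) - (1 - 2*m_r(H)).

    m_r : normalized isothermal remanent magnetization, m_r(H)/m_r(inf)
    m_d : normalized DC demagnetization remanence, m_d(H)/m_r(inf)
    Positive values suggest exchange coupling dominates; negative
    values suggest dipolar (magnetostatic) interactions dominate.
    """
    m_r = np.asarray(m_r, dtype=float)
    m_d = np.asarray(m_d, dtype=float)
    return m_d - (1.0 - 2.0 * m_r)

# Hypothetical remanence data on a common field grid (illustration only)
m_r = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
m_d = np.array([1.0, 0.7, 0.2, -0.5, -1.0])
print(henkel_delta_m(m_r, m_d))  # > 0 where exchange coupling dominates
```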
Abstract:
The max-coloring problem is to compute a legal coloring of the vertices of a graph G = (V, E) with a non-negative weight function w on V such that Σ_{i=1}^{k} max_{v ∈ C_i} w(v) is minimized, where C_1, ..., C_k are the color classes. Max-coloring general graphs is as hard as the classical vertex coloring problem, a special case in which vertices have unit weight. In fact, in some cases it can even be harder: for example, no polynomial-time algorithm is known for max-coloring trees. In this paper we consider the problem of max-coloring paths and its generalization, max-coloring a broad class of trees, and show it can be solved in time O(|V| + time for sorting the vertex weights). When vertex weights belong to R, we show a matching lower bound of Ω(|V| log |V|) in the algebraic computation tree model.
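As a concrete reading of the objective above, the sketch below computes the max-coloring cost of a given legal coloring: each color class contributes the weight of its heaviest vertex. It is only an illustrative helper, not the paper's near-linear-time algorithm; the names and example data are assumptions.

```python
from collections import defaultdict

def max_coloring_cost(coloring, weights):
    """Cost of a coloring: sum over color classes of the max vertex weight.

    coloring : dict mapping vertex -> color
    weights  : dict mapping vertex -> non-negative weight
    """
    heaviest = defaultdict(float)          # color -> max weight seen so far
    for v, c in coloring.items():
        heaviest[c] = max(heaviest[c], weights[v])
    return sum(heaviest.values())

# Path a-b-c-d with hypothetical weights; colors 0/1 alternate (a legal coloring)
weights  = {"a": 4.0, "b": 1.0, "c": 3.0, "d": 2.0}
coloring = {"a": 0, "b": 1, "c": 0, "d": 1}
print(max_coloring_cost(coloring, weights))  # 4.0 + 2.0 = 6.0
```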
Abstract:
In the distributed storage setting introduced by Dimakis et al., B units of data are stored across n nodes in the network in such a way that the data can be recovered by connecting to any k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading at most β units of data from each node. In this paper, we introduce a flexible framework in which the data can be recovered by connecting to any number of nodes as long as the total amount of data downloaded is at least B. Similarly, regeneration of a failed node is possible if the new node connects to the network using links whose individual capacity is bounded above by β_max and whose sum capacity equals or exceeds a predetermined parameter γ. In this flexible setting, we obtain the cut-set lower bound on the repair bandwidth along with a constructive proof for the existence of codes meeting this bound for all values of the parameters. An explicit code construction is provided which is optimal in certain parameter regimes.
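For background on the kind of bound mentioned above: in the classical (non-flexible) regenerating-code setting of Dimakis et al., the cut-set bound requires the file size B to satisfy B ≤ Σ_{i=0}^{k-1} min(α, (d − i)β), where α is the per-node storage and β the per-node repair download. The sketch below evaluates that classical bound only; it does not reproduce the flexible-setting bound derived in the paper.

```python
def classical_cutset_file_size(n, k, d, alpha, beta):
    """Classical cut-set bound of Dimakis et al.:
    maximum file size B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta),
    with alpha the storage per node and beta the per-node repair download.
    """
    assert k <= d < n, "regenerating codes require k <= d < n"
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Hypothetical parameters (n, k, d) = (5, 3, 4) with alpha = 2, beta = 1
print(classical_cutset_file_size(n=5, k=3, d=4, alpha=2, beta=1))
# min(2,4) + min(2,3) + min(2,2) = 6 units of data
```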
Abstract:
Today's feature-rich multimedia products require embedded system solutions with complex System-on-Chip (SoC) designs to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters. Hence the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level optimization problem. The outer level explores various memory architectures while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level the memory architecture exploration is done by picking memory modules directly from an ASIC memory library. This allows the memory architecture exploration to be performed in an integrated framework, where memory allocation, memory exploration and data layout work in a tightly coupled way to yield optimal design points with respect to area, power and performance. We experimented with our approach on 3 embedded applications; for each application it explores several thousand memory architectures, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
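Since the exploration reports sets of optimal design points over area, power and performance, a Pareto (non-dominated) filter is the usual way such points are selected from the evaluated architectures. The sketch below is a generic minimization-based dominance filter with hypothetical design points; it is an illustration of that selection step, not the paper's genetic algorithm.

```python
from typing import List, Tuple

Design = Tuple[float, float, float]  # (area, power, cycles) - all minimized

def dominates(a: Design, b: Design) -> bool:
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Design]) -> List[Design]:
    """Keep only the non-dominated design points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical evaluated memory architectures: (area mm^2, power mW, exec cycles)
candidates = [(1.2, 40.0, 9.0e6), (1.0, 45.0, 9.5e6),
              (1.5, 38.0, 8.8e6), (1.6, 50.0, 9.6e6)]
print(pareto_front(candidates))  # the last point is dominated by the first
```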
Abstract:
The objective of the paper is to estimate the Safe Shutdown Earthquake (SSE) and Operating/Design Basis Earthquake (OBE/DBE) for the Nuclear Power Plant (NPP) site located at Kalpakkam, Tamil Nadu, India. The NPP is located at 12.558° N, 80.175° E, and a 500 km circular area around the NPP site is considered as the `seismic study area' based on past regional earthquake damage distribution. The geology, seismicity and seismotectonics of the study area are studied, and a seismotectonic map is prepared showing the seismic sources and the past earthquakes. Earthquake data gathered from the literature are homogenized and declustered to form a complete earthquake catalogue for the seismic study area. The conventional maximum magnitude of each source is estimated considering the maximum observed magnitude (M_max(obs)) and/or the addition of 0.3 to 0.5 to M_max(obs). In this study the maximum earthquake magnitude has also been estimated by establishing the region's rupture character based on source length and the associated M_max(obs). A final source-specific M_max is selected from the three M_max values by following logical criteria. To estimate hazard at the NPP site, ten Ground-Motion Prediction Equations (GMPEs) valid for the study area are considered. These GMPEs are ranked based on Log-Likelihood (LLH) values, and the top five GMPEs are used to estimate the peak ground acceleration (PGA) for the site. The maximum PGA is obtained from three faults, which are identified as vulnerable sources and used to decide the magnitudes of the OBE and SSE. The average, normalized site-specific response spectrum is prepared considering the three vulnerable sources and is further used to establish the site-specific design spectrum at the NPP site.
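To make the maximum-magnitude logic above concrete, the sketch below combines the two conventional estimates (M_max(obs), and M_max(obs) plus an increment of 0.3 to 0.5) with a rupture-length scaling relation. The Wells and Coppersmith (1994) "all slip types" surface-rupture relation M = 5.08 + 1.16 log10(L) is used here purely as an assumed example; it is not necessarily the rupture-character relation adopted in the paper.

```python
import math

def mmax_candidates(m_max_obs: float, rupture_length_km: float,
                    increment: float = 0.5) -> dict:
    """Three candidate M_max estimates for a seismic source:
    1. the maximum observed magnitude,
    2. the observed maximum plus a conventional increment (0.3-0.5),
    3. a magnitude from a rupture-length scaling relation
       (Wells & Coppersmith 1994, all slip types, taken as an assumption).
    """
    m_scaling = 5.08 + 1.16 * math.log10(rupture_length_km)
    return {
        "observed": m_max_obs,
        "observed_plus_increment": m_max_obs + increment,
        "rupture_length_scaling": round(m_scaling, 2),
    }

# Hypothetical source: M_max(obs) = 5.6, mapped fault length 60 km
print(mmax_candidates(5.6, 60.0))
```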
Abstract:
A series of spectral analysis of surface waves (SASW) tests were conducted on a cement concrete pavement by dropping steel balls of four different diameters (D) varying between 25.4 and 76.2 mm. These tests were performed (1) using different combinations of source-to-nearest-receiver distance (S) and receiver spacing (X), and (2) for two different heights of fall (H), namely 0.25 and 0.50 m. The values of the maximum wavelength (λ_max) and minimum wavelength (λ_min) associated with the combined dispersion curve, corresponding to a particular combination of D and H, were noted to increase almost linearly with an increase in the magnitude of the input source energy (E). A continuous increase in the strength and duration of the signals was noted to occur with an increase in the magnitude of D. Based on statistical analysis, two regression equations have been proposed to determine λ_max and λ_min for different values of source energy. It is concluded that the SASW technique is capable of producing a nearly unique dispersion curve irrespective of (1) the diameters and heights of fall of the dropped masses used to produce the vibration, and (2) the spacing between different receivers. The results presented in this paper can be used as guidelines for selecting the input source energy based on the required exploration zone of the pavement. (C) 2014 American Society of Civil Engineers.
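The input source energy used above for a dropped steel ball follows from basic mechanics, E = m g H with m = ρ π D^3 / 6 for a solid sphere. The sketch below computes E across the diameter range and the two drop heights given in the abstract; the steel density and the intermediate diameters are illustrative assumptions, and the proposed regression equations for λ_max and λ_min are not reproduced here.

```python
import math

STEEL_DENSITY = 7850.0  # kg/m^3, assumed typical value for steel
G = 9.81                # m/s^2

def drop_energy(diameter_m: float, height_m: float) -> float:
    """Potential energy E = m*g*H of a solid steel ball dropped from height H."""
    mass = STEEL_DENSITY * math.pi * diameter_m ** 3 / 6.0
    return mass * G * height_m

# Four illustrative diameters within the 25.4-76.2 mm range (intermediate values assumed)
for d_mm in (25.4, 38.1, 50.8, 76.2):
    for h in (0.25, 0.50):
        e = drop_energy(d_mm / 1000.0, h)
        print(f"D = {d_mm:5.1f} mm, H = {h:.2f} m -> E = {e:.3f} J")
```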
Abstract:
We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^O(q) algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #α(G))^O(1), where #α(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^(l+o(l)) n^O(1) T_α, where T_α is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets, for example split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_α is polynomial in n), since q <= l for all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
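As a plain restatement of the problem being parameterized, the sketch below brute-forces a maximum q-colorable induced subgraph of a tiny graph by enumerating vertex subsets and proper q-colorings. It only illustrates the problem definition, not the FPT algorithms or kernelization lower bounds of the paper; the example graph and all names are assumptions.

```python
from itertools import combinations, product

def is_q_colorable(vertices, edges, q):
    """Check by exhaustion whether the induced subgraph on `vertices` is q-colorable."""
    vs = list(vertices)
    sub_edges = [(u, v) for u, v in edges if u in vertices and v in vertices]
    for colors in product(range(q), repeat=len(vs)):
        assign = dict(zip(vs, colors))
        if all(assign[u] != assign[v] for u, v in sub_edges):
            return True
    return False

def max_q_colorable_induced_subgraph(nodes, edges, q):
    """Largest vertex subset whose induced subgraph is q-colorable (exponential time)."""
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if is_q_colorable(set(subset), edges, q):
                return set(subset)
    return set()

# Tiny example: a 4-cycle with one chord, q = 2
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
print(max_q_colorable_induced_subgraph(nodes, edges, q=2))  # e.g. {'a', 'b', 'd'}
```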
Quick, Decentralized, Energy-Efficient One-Shot Max Function Computation Using Timer-Based Selection
Abstract:
In several wireless sensor networks, it is of interest to determine the maximum of the sensor readings and identify the sensor responsible for it. We propose a novel, decentralized, scalable, energy-efficient, timer-based, one-shot max function computation (TMC) algorithm. In it, the sensor nodes do not transmit their readings in a centrally pre-defined sequence. Instead, the nodes are grouped into clusters, and computation occurs over two contention stages. First, the nodes in each cluster contend with each other using the timer scheme to transmit their reading to their cluster-heads. Thereafter, the cluster-heads use the timer scheme to transmit the highest sensor reading in their cluster to the fusion node. One new challenge is that the use of the timer scheme leads to collisions, which can make the algorithm fail. We optimize the algorithm to minimize the average time required to determine the maximum subject to a constraint on the probability that it fails to find the maximum. TMC significantly lowers average function computation time, average number of transmissions, and average energy consumption compared to approaches proposed in the literature.
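The timer scheme referred to above maps each node's metric (here its sensor reading) to a timer through a monotonically decreasing function, so the node with the largest reading transmits first; a collision occurs when two timers expire too close together to be resolved. The sketch below is a minimal simulation of one contention stage under these assumptions; the inverse-linear mapping and the vulnerability window are illustrative choices, not the optimized parameters of the paper.

```python
import random

def timer_contention(readings, t_max=10.0, vulnerability_window=0.05):
    """One timer-based contention stage.

    Each node sets timer = t_max * (1 - reading/max_possible), so larger
    readings expire earlier. The stage fails if the two earliest timers
    are separated by less than the vulnerability window (a collision).
    Returns (winner_id, winner_reading), or None on collision.
    """
    max_possible = 1.0  # readings assumed normalized to [0, 1]
    timers = sorted((t_max * (1.0 - r / max_possible), i, r)
                    for i, r in enumerate(readings))
    if len(timers) > 1 and timers[1][0] - timers[0][0] < vulnerability_window:
        return None  # collision: the two best readings are nearly equal
    _, winner, reading = timers[0]
    return winner, reading

# Hypothetical normalized readings from one cluster of 8 nodes
readings = [round(random.random(), 3) for _ in range(8)]
print(readings, "->", timer_contention(readings))
```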
Abstract:
The colubrid snake Chrysopelea taprobanica Smith, 1943 was described from a holotype from Kanthali (= Kantalai) and paratypes from Kurunegala, both localities in Sri Lanka (formerly Ceylon) (Smith 1943). Since its description, the literature pertaining to the Sri Lankan snake fauna has considered this taxon to be endemic to the island (Taylor 1950, Deraniyagala 1955, de Silva 1980, de Silva 1990, Somaweera 2004, Somaweera 2006, de Silva 2009, Pyron et al. 2013). In addition, earlier efforts on the Indian peninsula (e.g. Das 1994, 1997, Das 2003, Whitaker & Captain 2004, Aengals et al. 2012) and global data compilations (e.g. Wallach et al. 2014, Uetz & Hošek 2015) did not identify any record from mainland India until Guptha et al. (2015) recorded a specimen (voucher BLT 076, housed at the Bio-Lab of Seshachalam Hills, Tirupathi, India) in the dry deciduous forest of Chamala, Seshachalam Biosphere Reserve, Andhra Pradesh, India, in November 2013. Guptha et al. (2015) further mentioned an individual previously photographed in 2000 at Rishi Valley, Andhra Pradesh, but with no voucher specimen collected. Guptha's record, assumed to be the first confirmed record of C. taprobanica in India, is noteworthy as it results in a large range extension, from northern Sri Lanka to eastern India, a Euclidean distance of over 400 km, as well as a change of status, i.e., the species is not endemic to Sri Lanka. However, at least three little-known previous records of this species from India have evaded most of the literature and were overlooked by researchers, including ourselves.