Abstract:
Electronic, magnetic, and structural properties of graphene flakes depend sensitively upon the type of edge atoms. We present a simple software tool for determining the type of edge atoms in a honeycomb lattice. The algorithm is based on nearest neighbor counting. Whether an edge atom is of armchair or zigzag type is decided by the unique pattern of its nearest neighbors. Particular attention is paid to the practical aspects of using the tool, as additional features such as extracting the edges from the lattice could help in analyzing images from transmission microscopy or other experimental probes. Ultimately, the tool in combination with density-functional theory or tight-binding methods can also be helpful in correlating the properties of graphene flakes with different armchair-to-zigzag ratios.
Program summary
Program title: edgecount
Catalogue identifier: AEIA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 66,685
No. of bytes in distributed program, including test data, etc.: 485,381
Distribution format: tar.gz
Programming language: FORTRAN 90/95
Computer: Most UNIX-based platforms
Operating system: Linux, Mac OS
Classification: 16.1, 7.8
Nature of problem: Detection and classification of edge atoms in a finite patch of honeycomb lattice.
Solution method: Build nearest neighbor (NN) list; assign types to edge atoms on the basis of their NN pattern.
Running time: Typically on the order of seconds for all examples.
(C) 2010 Elsevier B.V. All rights reserved.
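As an illustration of the nearest-neighbor-counting idea (a minimal Python sketch, not the distributed FORTRAN 90/95 code; the distance cutoff and the armchair/zigzag rule below are assumptions for an ideal, flat honeycomb lattice):

    import numpy as np

    def neighbor_list(coords, cutoff=1.6):
        # Indices of atoms within `cutoff` of each atom; with the C-C bond
        # length ~1.42 A, a 1.6 A cutoff captures only first neighbors.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return [np.where((d[i] > 0) & (d[i] < cutoff))[0] for i in range(len(coords))]

    def classify_atoms(coords):
        # 3-coordinated atoms are bulk; for a 2-coordinated edge atom, one
        # plausible NN-pattern rule: a neighbor that is itself 2-coordinated
        # signals an armchair pair, otherwise the atom sits on a zigzag edge.
        nn = neighbor_list(coords)
        cn = [len(n) for n in nn]
        labels = []
        for i, n in enumerate(nn):
            if cn[i] == 3:
                labels.append("bulk")
            elif cn[i] == 2:
                labels.append("armchair" if any(cn[j] == 2 for j in n) else "zigzag")
            else:
                labels.append("other")  # corners, dangling atoms, defects
        return labels

Counting the "armchair" and "zigzag" labels then gives the armchair-to-zigzag ratio mentioned above.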
Abstract:
The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Studies of instruction-level parallelism reported thus far have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data-parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple-instruction-issue mechanisms.
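For reference, Amdahl's Law in its standard form: if a fraction s of the execution is serial and the remaining 1 - s is spread across n units,

    \[ S(n) = \frac{1}{s + (1 - s)/n} \le \frac{1}{s}, \]

so the achievable speedup is capped by the serial portion no matter how large n grows.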
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low-weight cycle basis often translates to better performance and/or numerical stability. Although the problem can be solved exactly in polynomial time, we design approximation algorithms since the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k - 1)-approximation algorithms with expected running time O(km n^(1+2/k) + m n^((1+1/k)(omega-1))) and deterministic running time O(n^(3+2/k)), respectively. Here omega is the best exponent of matrix multiplication; it is presently known that omega < 2.376. Both algorithms are o(m^omega) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Theta(m^omega) bound. We also present a 2-approximation algorithm with expected running time O(m^omega sqrt(n log n)), a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
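The F_2 arithmetic underlying the cycle space can be shown in a short sketch (Python; this is only the independence test over {0,1} incidence vectors, not the paper's approximation algorithms):

    def cycle_vector(cycle_edges, edge_index):
        # {0,1} incidence vector of a cycle over F_2, packed as an int bitmask;
        # edge_index maps frozenset({u, v}) to a bit position.
        v = 0
        for e in cycle_edges:
            v ^= 1 << edge_index[frozenset(e)]
        return v

    def try_insert(vec, basis):
        # Greedy XOR (Gaussian) reduction against the basis kept so far;
        # a nonzero remainder means the cycle is independent, so keep it.
        for b in basis:
            vec = min(vec, vec ^ b)
        if vec:
            basis.append(vec)
            return True
        return False

Feeding candidate cycles in order of increasing weight and keeping the independent ones is the textbook greedy (Horton-style) way to build a low-weight basis.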
Abstract:
The worldwide research effort in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, owing to fundamental limits imposed by physics, even if the end is delayed by some years. The research community in this domain has largely become multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. It is therefore a common model to set up a central networked facility that can be shared by a large number of users across the research community. The Centres for Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a missionary program intended not only to spread awareness and provide training in nanoelectronics but also to provide easy access to the latest facilities at CEN at IISc and IITB for the wider nanoelectronics research community in India. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs, and by providing access to the CEN facilities. It is a unique program for expediting nanoelectronics research in the country, as funding for projects proposed by researchers from around India has prior financial approval from the government and requires only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities, describes the training programs conducted by INUP, and lists the various research activities currently under way in the program.
Abstract:
A performance prediction model generally applicable to volute-type centrifugal pumps has been extended to predict the dynamic characteristics of a pump during its normal starting and stopping periods. Experiments have been conducted on a volute pump with different valve openings to study the dynamic behaviour of the pump during normal start-up and stopping, when a small length of discharge pipeline is connected to the discharge flange of the pump. Such experiments have also been conducted with the test pump as part of a hydraulic system, an experimental rig, in which it pumped against three similar pumps, known as supply pumps, connected in series, with the supply pumps kept either idle or running. Instantaneous rotational speed, flowrate, and delivery and suction pressures of the pump were recorded, and in all the tested cases the change of pump behaviour during the transient period was observed to be quasi-steady, which validates the quasi-steady approach presented in this paper. The nature of the variation of parameters during the transients has been discussed. The model-predicted dynamic head-capacity curves agree well with the experimental data for almost all the tested cases.
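A minimal sketch of the quasi-steady idea (Python; it assumes the standard pump affinity scaling Q proportional to omega and H proportional to omega^2, and `steady_head` is a stand-in for the steady head-capacity curve, which the paper's model predicts rather than assumes):

    def quasi_steady_head(q_inst, omega_inst, omega_rated, steady_head):
        # Map the instantaneous flowrate to rated speed via the affinity
        # laws, read the steady H-Q curve there, and scale the head back.
        q_equiv = q_inst * omega_rated / omega_inst                     # Q scales with speed
        return steady_head(q_equiv) * (omega_inst / omega_rated) ** 2  # H scales with speed^2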
Abstract:
We present a frontier-based algorithm for searching for multiple goals in a fully unknown environment, with only information about the regions where the goals are most likely to be located. Our algorithm chooses an "active goal" from the "active goal list" generated by running a Traveling Salesman Problem (TSP) routine with the given centroid locations of the goal regions. We use the concept of "goal switching", which helps not only in reaching more goals in a given time, but also prevents unnecessary search around goals that are not accessible (surrounded by walls). The simulation study shows that our algorithm outperforms Multi-Heuristic LRTA* (MELRTA*), a significant representative of multiple-goal search approaches in unknown environments, especially in environments with wall-like obstacles.
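The abstract does not specify the TSP routine; a cheap stand-in such as a greedy nearest-neighbor tour over the goal-region centroids already yields an ordered active goal list (Python sketch):

    import math

    def active_goal_list(start, centroids):
        # Greedy nearest-neighbor tour: repeatedly visit the closest
        # remaining goal-region centroid from the current position.
        order, current, remaining = [], start, list(centroids)
        while remaining:
            nxt = min(remaining, key=lambda c: math.dist(current, c))
            order.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return order

The head of this list would serve as the current active goal, with goal switching triggered as frontier exploration reveals inaccessible regions.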
Abstract:
The authors present the simulation of the tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) when forced by observed global sea surface temperature. The authors have examined the monthly mean surface winds and precipitation simulated by the model, which was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and the South Pacific convergence zone regions. The amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOFs) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variation. While the statistical properties of the high-frequency modes, such as their amplitude and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds, representing the low-frequency El Nino-Southern Oscillation fluctuations, compare quite well with observations. However, the center of the westerly anomalies associated with the warm episodes is simulated about 15 degrees west of the observed location. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event. The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of the interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulating the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
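A 5-month running mean of a monthly series is a standard centered filter; a sketch of how the low-frequency signal is isolated (Python/NumPy; endpoint handling is a choice, here left as NaN):

    import numpy as np

    def running_mean(x, window=5):
        # Centered moving average; positions where the window sticks out
        # past the series ends are returned as NaN.
        x = np.asarray(x, dtype=float)
        out = np.full_like(x, np.nan)
        half = window // 2
        out[half:len(x) - half] = np.convolve(x, np.ones(window) / window, mode="valid")
        return out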
Abstract:
The rhesus monkey Macaca mulatta and Hanuman langur Presbytis entellus are distributed all over the State of Himachal Pradesh, India. Although both species inhabit forested areas, only rhesus monkeys seem also to have become urbanized. There are about 200,000 rhesus monkeys and 120,000 Hanuman langurs. A three-year survey at Shimla showed an increasing trend in their populations. Potential threats to the survival of these primates differ among the 12 districts. The two species differ in feeding and habitat preferences. People's feelings, perceptions and attitudes toward them point to an incipient man-monkey conflict and erosion of conservation ethics. A comprehensive management plan for these primates should be formulated, and should involve local people. Copyright (C) 1996 Elsevier Science Limited
Abstract:
A Wireless Sensor Network (WSN) powered by harvested energy is limited in its operation by the instantaneous power available. Since energy availability can differ across nodes in the network, network setup and collaboration is a non-trivial task. At the same time, in the event of excess energy, exciting node-collaboration possibilities exist that are often not feasible with battery-driven sensor networks. Operations such as sensing, computation, storage and communication are required to achieve the common goal of any sensor network. In this paper, we design and implement a smart application that uses a Decision Engine and morphs itself into an energy-matched application. The results are based on measurements using IRIS motes running on solar energy. We have done away with batteries; instead, we used low-leakage supercapacitors to store harvested energy. The Decision Engine utilizes two pieces of data to provide its recommendations. The first is a history-based energy prediction model that assists the engine with information about incoming energy. The second input is the energy cost database for operations. The energy-driven Decision Engine calculates the energy budgets and recommends the best possible set of operations. Under excess-energy conditions, the Decision Engine promiscuously sniffs the neighborhood, looking for all possible data from neighbors. This data includes each neighbor's energy level and sensor data. Equipped with this data, nodes establish detailed data correlation and thus enhance collaboration, such as filling in data gaps on behalf of nodes hibernating under low-energy conditions. The results are encouraging: the node and network lifetimes of the sensor nodes running the smart application are found to be significantly higher than with the base application.
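A toy version of the Decision Engine's two inputs and its recommendation step (Python; the operation names, costs, and the exponential-smoothing predictor are hypothetical stand-ins, not values from the paper):

    # Hypothetical per-operation energy costs (millijoules).
    OP_COST_MJ = {"sense": 0.5, "compute": 1.2, "store": 0.8, "transmit": 2.5}

    def predict_energy(history, alpha=0.5):
        # History-based prediction: exponentially weighted moving average
        # over past harvested-energy samples.
        est = history[0]
        for sample in history[1:]:
            est = alpha * sample + (1 - alpha) * est
        return est

    def recommend(budget_mj, priority=("sense", "transmit", "compute", "store")):
        # Greedily keep the highest-priority operations that fit the budget.
        plan = []
        for op in priority:
            if OP_COST_MJ[op] <= budget_mj:
                plan.append(op)
                budget_mj -= OP_COST_MJ[op]
        return plan

Under a predicted surplus, the plan could additionally include the neighborhood-sniffing and gap-filling collaboration described above.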
Abstract:
With the emergence of the Internet, the global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools like Gopher, WAIS, and the WWW for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored in computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once information is published on the Web, a user can access it from any part of the world. A Web browser like Netscape or Internet Explorer is used as a common user interface for accessing information/databases. This greatly relieves a user from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web, among them 1) the WAIS_ISIS server, 2) the WWWISIS server, and 3) the IQUERY server. In this tutorial, we explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS. This software is developed, maintained and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports functions for searching, formatting and data entry operations over CDS/ISIS databases. WWWISIS is available for various operating systems. We have tested this software on Windows 95, Windows NT, and Red Hat Linux release 5.2 (Apollo) with kernel 2.0.36 on an i686. The testing was carried out using IISc's main library's OPAC, containing more than 80,000 records, and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or its variants, this compatibility is not guaranteed. It is therefore safest to recreate the master and inverted files under the Unix environment, using utilities provided by BIREME.
Abstract:
A method for the preparation of acicular hydrogoethite (α-FeOOH·xH2O, 0.1 < x < 0.22) particles of 0.3-1 mm length has been optimized by air oxidation of Fe(II) hydroxide gel precipitated from aqueous (NH4)2Fe(SO4)2 solutions containing 0.005-0.02 atom% of cationic Pt, Pd or Rh additives as morphology-controlling agents. Hydrogoethite particles evolve from the amorphous ferrous hydroxide gel by heterogeneous nucleation and growth. Preferential adsorption of additives on certain crystallographic planes, thereby retarding growth in the perpendicular direction, allows the particles to acquire acicular shapes with high aspect ratios of 8-15. Synthetic hydrogoethite showed a mass loss of about 14% at ~280 °C, revealing the presence of strongly coordinated water of hydration in the interior of the goethite crystallites. As evident from IR spectra, the excess H2O molecules (0.1-0.22 per formula unit) are located in the strands of channels formed between the double ribbons of FeO6 octahedra running parallel to the c-axis. Hydrogoethite particles constituted of multiple crystallites are formed with Pt as additive, whereas single-crystallite particles are obtained with Pd (or Rh). For both dehydroxylation and H2 reduction, a lower reaction temperature (~220 °C) was observed for the former (Pt-treated) compared to the latter (Pd or Rh; ~260 °C). Acicular magnetite (Fe3O4) was prepared either by reducing hydrogoethite (magnetite route) or by dehydroxylating hydrogoethite to hematite and then reducing it to magnetite (hematite-magnetite route). According to TEM studies, preferential dehydroxylation of hydrogoethite along <010> leads to microporous hematite. Maghemite (γ-Fe2O3-δ, 0 <
Abstract:
Superoxide dismutase has been discovered within the periplasm of several Gram-negative pathogens. We studied the Cu,Zn-SOD enzyme in Escherichia coli isolated from clinical samples (stool samples) collected from patients suffering from diarrhea. Antibiogram studies of the isolates were carried out to determine the sensitive and resistant strains. The metal co-factor present in the enzyme was confirmed by running samples in native gels and inhibiting with 2 mM potassium cyanide. A 519 bp sodC gene was amplified from resistant and sensitive strains of Escherichia coli. Cloning and sequencing of the sodC gene indicated variation in the protein and amino acid sequences of sensitive and resistant isolates. The presence of sodC in highly resistant Escherichia coli isolates from diarrheal patients indicates that sodC may play a role in enhancing pathogenicity by protecting cells from exogenous sources of superoxide, such as the oxidative burst of phagocytes. The presence of SodC could be one of the factors underlying bacterial virulence.
Abstract:
Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. A sensor network operating system is a kind of embedded operating system, but unlike a typical embedded operating system, a sensor network operating system may not be real-time, and is constrained by memory and energy. Most sensor network operating systems are based on the event-driven approach, which is efficient in terms of time and space and does not require a separate stack for each execution context. With this model, however, it is difficult to implement long-running tasks, such as cryptographic operations. A thread-based computation requires a separate stack for each execution context, and is less efficient in terms of time and space. In this paper, we propose a thread-based execution model that uses only a fixed number of stacks, with the number of stacks at each priority level fixed. This minimizes the stack requirement of the multi-threading environment and at the same time provides ease of programming. We give an implementation of this model in Contiki OS by completely separating the thread implementation from the protothread implementation. We have tested our OS by implementing a clock synchronization protocol using it.
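A conceptual sketch of the fixed-stacks model (Python threads standing in for the Contiki C implementation; the per-level stack counts are illustrative): each priority level owns a fixed pool of worker contexts, and runnable tasks wait for a free stack at their level instead of each carrying its own.

    import queue
    import threading

    STACKS_PER_LEVEL = {0: 2, 1: 1}   # hypothetical: two stacks at level 0, one at level 1

    class FixedStackScheduler:
        def __init__(self):
            self.queues = {lvl: queue.Queue() for lvl in STACKS_PER_LEVEL}
            for lvl, n in STACKS_PER_LEVEL.items():
                for _ in range(n):   # one long-lived worker (stack) per slot
                    threading.Thread(target=self._run, args=(lvl,), daemon=True).start()

        def _run(self, lvl):
            while True:
                task = self.queues[lvl].get()   # block until a task needs this stack
                task()

        def submit(self, task, lvl=0):
            self.queues[lvl].put(task)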
Abstract:
In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images, represented as 2-dimensional matrices of single-precision floating-point values, using O(n^4) operations involving dot-products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best-performing versions on the Power7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
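The kernel in question is ordinary 2-D cross-correlation; a naive version showing where the O(n^4) dot-products and additions come from (Python/NumPy sketch; the paper's actual data layout and normalization are not specified here):

    import numpy as np

    def xcorr2d(image, ref):
        # For every displacement, accumulate the dot product of `ref`
        # with the overlapping window of `image`: O(n^4) work for n x n inputs.
        H, W = image.shape
        h, w = ref.shape
        out = np.zeros((H - h + 1, W - w + 1), dtype=np.float32)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = float(np.dot(image[i:i + h, j:j + w].ravel(), ref.ravel()))
        return out

Each output element is an independent reduction, which is exactly the structure that task parallelism plus SSE/VSX short-vector dot-products (or CUDA thread blocks) exploit.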
Abstract:
As computational Grids are increasingly used for executing long-running multi-phase parallel applications, it is important to develop efficient rescheduling frameworks that adapt application execution in response to resource and application dynamics. In this paper, three strategies or algorithms are developed for deciding when and where to reschedule parallel applications that execute on multi-cluster Grids. The algorithms derive rescheduling plans that consist of potential points in application execution at which to reschedule, along with schedules of resources for application execution between two consecutive rescheduling points. Using a large number of simulations, it is shown that the rescheduling plans developed by the algorithms can lead to large decreases in application execution times when compared to executions without rescheduling on dynamic Grid resources. The rescheduling plans generated by the algorithms are also shown to be competitive when compared to the near-optimal plans generated by brute-force methods. Of the algorithms, the genetic algorithm yielded the most efficient rescheduling plans, with 9-12% smaller average execution times than the other algorithms.
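A toy genetic-algorithm loop over rescheduling plans (Python; the chromosome encoding, the candidate resource sets, and the runtime estimator are hypothetical stand-ins for the paper's formulation):

    import random

    RESOURCE_SETS = [0, 1, 2, 3]   # hypothetical ids of candidate resource schedules

    def evolve(population, estimate_runtime, generations=50, mut_rate=0.1):
        # A plan is a list of (rescheduling_point, resource_set_id) genes;
        # fitness is the predicted application execution time (lower is better).
        for _ in range(generations):
            population.sort(key=estimate_runtime)          # shorter runtime = fitter
            survivors = population[:len(population) // 2]
            children = []
            while len(survivors) + len(children) < len(population):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(a))
                child = a[:cut] + b[cut:]                  # one-point crossover
                if random.random() < mut_rate:             # mutate one gene's resources
                    i = random.randrange(len(child))
                    child[i] = (child[i][0], random.choice(RESOURCE_SETS))
                children.append(child)
            population = survivors + children
        return min(population, key=estimate_runtime)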