811 results for Result oriented management
Abstract:
Many large mammals, such as elephants, rhinos and tigers, often come into conflict with people by destroying agricultural crops and even killing people, thus undermining conservation efforts. The males of these polygynous species have a greater variance in reproductive success than females, leading to selection pressures favouring a ‘high risk-high gain’ strategy for promoting reproductive success. This brings them into greater conflict with people. For instance, adult male elephants are far more prone than members of female-led family herds to raid agricultural crops and to kill people. In polygynous species, the removal of a certain proportion of ‘surplus’ adult males is not likely to affect the fertility and growth rate of the population. Hence, this could be a management tool that effectively reduces animal-human conflict while maintaining the viability of the population. Selective removal of males would, however, result in a skewed sex ratio. This would reduce the ‘effective population size’ (as opposed to the total population or census number), increase the rate of genetic drift and, in small populations, lead to inbreeding depression. Plans for managing destructive mammals through the culling of males will therefore have to ensure that an appropriate minimum effective population size is maintained.
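The quantitative link between a culling-skewed sex ratio and effective population size is not given in the abstract, but the standard Wright relation (a well-known population-genetics result, added here for reference) makes the trade-off concrete:

```latex
% Effective population size with N_m breeding males and N_f breeding females
N_e = \frac{4 N_m N_f}{N_m + N_f}
% e.g. N_m = 10, N_f = 90 gives N_e = 36, far below the census number of 100.
```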
Abstract:
The aim of this thesis is to assess the fishery of Baltic cod, herring and sprat by simulation over a 50-year period. We construct a bioeconomic multispecies model for these species and include species interactions, because the cod and sprat stocks in particular have significant effects on each other. We model the development of the population dynamics, catches and profits of the fishery under current fishing mortalities as well as under optimal, profit-maximizing fishing mortalities. Thus, we see how the fishery would develop with current mortalities and how it should be developed in order to yield maximal profits. The cod stock in particular has been quite low recently, and optimizing fishing mortality could allow it to recover. In addition, we assess what would happen to the fisheries of these species if more favourable environmental conditions for cod recruitment were to dominate in the Baltic Sea. The results may yield new information for fisheries management. According to the results, the fisheries of Baltic cod, herring and sprat are not at their most profitable level: the fishing mortality of each species should be lower in order to maximize profits. With optimized fishing mortalities, the net present value over the simulation period would be almost three times higher. The lower fishing mortality of cod would result in a recovery of the cod stock. If environmental conditions in the Baltic Sea improved, the cod stock would recover even without a decrease in fishing mortality. The increased cod stock would then restrict the herring and sprat stocks considerably, and harvesting these species would no longer be as profitable.
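As a rough illustration of the kind of bioeconomic comparison the thesis describes, the sketch below simulates a single stock with logistic growth under two fixed fishing mortalities and compares discounted profits. The model structure and all parameter values are assumptions for illustration only, not the thesis's multispecies model or Baltic estimates.

```python
# Minimal single-species sketch (not the thesis model): logistic stock growth
# with a constant fishing mortality F, discounted profits summed over 50 years.
# All parameter values below are illustrative assumptions, not Baltic estimates.

def npv_of_fishery(F, years=50, r=0.5, K=1.0e6, B0=2.0e5,
                   price=1.0, cost_per_F=5.0e4, discount=0.05):
    """Return the net present value of profits for a fixed fishing mortality F."""
    biomass, npv = B0, 0.0
    for t in range(years):
        catch = F * biomass                      # harvest proportional to stock
        profit = price * catch - cost_per_F * F  # revenue minus effort cost
        npv += profit / (1.0 + discount) ** t    # discount to present value
        growth = r * biomass * (1.0 - biomass / K)
        biomass = max(biomass + growth - catch, 0.0)
    return npv

# Lower F can yield a higher NPV because the stock is allowed to rebuild.
print(npv_of_fishery(F=0.6))   # higher, "current-like" mortality (illustrative)
print(npv_of_fishery(F=0.25))  # reduced mortality (illustrative)
```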
Abstract:
We provide a comparative performance evaluation of packet queuing and link admission strategies for low-speed wide area network links (e.g. 9600 bps, 64 kbps) that interconnect relatively high-speed, connectionless local area networks (e.g. 10 Mbps). In particular, we are concerned with the problem of providing differential quality of service to interLAN remote terminal and file transfer sessions, and throughput fairness between interLAN file transfer sessions. We use analytical and simulation models to study a variety of strategies. Our work also serves to address the performance comparison of connectionless vs. connection-oriented interconnection of CLNS LANs. When provision of priority at the physical transmission level is not feasible, we show, for low-speed WAN links (e.g. 9600 bps), the superiority of connection-oriented interconnection of connectionless LANs, with segregation of traffic streams with different QoS requirements into different window flow controlled connections. Such an implementation can easily be obtained by transporting IP packets over an X.25 WAN. For 64 kbps WAN links, there is a drop in file transfer throughputs owing to connection overheads, but the other advantages are retained. The same solution also helps to provide throughput fairness between interLAN file transfer sessions. We also corroborate some of our modelling results with results from an experimental test-bed.
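A back-of-the-envelope relation (not taken from the paper) helps explain why window flow controlled connections behave differently at 9600 bps and 64 kbps: per-connection throughput is bounded both by the link rate and by the window size divided by the round-trip time.

```python
# Back-of-the-envelope sketch (not from the paper): throughput of one window
# flow-controlled connection over a slow WAN link is limited by whichever is
# smaller, the link rate or window_size / round-trip time.

def window_limited_throughput_bps(link_bps, window_pkts, pkt_bytes, rtt_s):
    """Upper bound on throughput for one window flow-controlled connection."""
    window_bits = window_pkts * pkt_bytes * 8
    return min(link_bps, window_bits / rtt_s)

# Illustrative numbers only: the 9600 bps link is link-limited, while the
# 64 kbps link with a small window and a long RTT becomes window-limited.
print(window_limited_throughput_bps(9600,  window_pkts=2, pkt_bytes=512, rtt_s=0.5))
print(window_limited_throughput_bps(64000, window_pkts=2, pkt_bytes=512, rtt_s=0.5))
```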
Abstract:
Weak molecular interactions, such as those in pyridine-iodine, benzene-iodine and benzene-chloroform systems oriented in thermotropic liquid crystals, have been studied through the changes in the order parameters resulting from complex formation. The results indicate the formation of at least two types of charge transfer complexes in pyridine-iodine solutions. The pi-complexes in benzene-chloroform and benzene-iodine mixtures have also been detected. No detectable changes in the inter-proton distances in these systems were observed.
Abstract:
Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer a majority of the packets, whereas the contribution of several small flows to the total number of packets transferred is small. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP), which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
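The abstract does not spell out how Saturating Priority works internally; the sketch below is one plausible reading, in which each cached digest carries a small saturating hit counter and the lowest-count entry is evicted, so the digests of the few large flows resist eviction by the many small flows. An LRU cache is included for comparison. Class and parameter names are hypothetical.

```python
# Hedged sketch only: one plausible interpretation of a saturating-priority
# digest cache, alongside a plain LRU cache for comparison.

from collections import OrderedDict

class SaturatingPriorityCache:
    def __init__(self, capacity, max_count=3):
        self.capacity, self.max_count = capacity, max_count
        self.counts = {}                 # digest -> saturating hit counter

    def access(self, digest):
        """Return True on a hit, False on a miss (inserting the digest)."""
        if digest in self.counts:
            self.counts[digest] = min(self.counts[digest] + 1, self.max_count)
            return True
        if len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)   # lowest priority
            del self.counts[victim]
        self.counts[digest] = 0
        return False

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.entries = capacity, OrderedDict()

    def access(self, digest):
        if digest in self.entries:
            self.entries.move_to_end(digest)                 # mark most recent
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)                 # evict oldest
        self.entries[digest] = None
        return False
```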
Abstract:
NMR spectra of molecules oriented in a liquid-crystalline matrix provide information on the structure and orientation of the molecules. Thermotropic liquid crystals used as an orienting medium result in spectra of spins that are generally strongly coupled. The number of allowed transitions increases rapidly with the number of interacting spins, and the number of single quantum transitions required for analysis is highly redundant. In the present study, we have demonstrated that it is possible to separate the subspectra of a homonuclear dipolar coupled spin system on the basis of the spin states of the coupled heteronuclei by multiple quantum (MQ)-single quantum (SQ) correlation experiments. This significantly reduces the number of redundant transitions, thereby simplifying the analysis of the complex spectrum. The methodology has been demonstrated on doubly 13C-labeled acetonitrile aligned in the liquid-crystal matrix and has been applied to analyze the complex spectrum of an oriented six-spin system.
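For context on how quickly such spectra grow in complexity (a standard counting argument, not stated in the abstract), the number of possible single-quantum transitions for N coupled spin-1/2 nuclei is:

```latex
% Number of \Delta M = \pm 1 transitions for N coupled spin-1/2 nuclei
N_{\mathrm{SQ}} = \sum_{k=0}^{N-1} \binom{N}{k}\binom{N}{k+1} = \binom{2N}{N-1}
% For a strongly coupled six-spin system this allows up to \binom{12}{5} = 792 lines.
```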
Abstract:
Ad hoc networks are being used in applications ranging from disaster recovery to distributed collaborative entertainment, and they have become one of the most attractive solutions for rapidly interconnecting large numbers of mobile personal devices. The user community of mobile personal devices is demanding a variety of value-added multimedia entertainment services. Peer groups are increasingly popular, and one or more members of a peer group often need to send data to some or all of the other members. This increasing demand for group-oriented value-added services calls for an efficient multicast service over ad hoc networks. Access control mechanisms need to be deployed to guarantee that unauthorized users cannot access the multicast content. In this paper, we present a topology-aware key management and distribution scheme for secure overlay multicast over MANETs that addresses node-mobility-related issues in multicast key management. We use an overlay approach for key distribution, and our objective is to keep the communication overhead of key management and distribution low. We also incorporate reliability into the key distribution scheme using explicit acknowledgments. Through simulations, we show that the proposed key management scheme has low communication overhead for rekeying and improves the reliability of key distribution.
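The protocol details are not given in the abstract; as a minimal sketch of rekeying with explicit acknowledgments (the reliability mechanism mentioned above), the hypothetical key server below retransmits a fresh group key to any overlay member that has not acknowledged it. All names and the transport interface are assumptions, not the paper's scheme.

```python
# Hypothetical sketch: a key server pushes a new group key to overlay members
# and retries any member that has not returned an explicit acknowledgment.

import os

class RekeyServer:
    def __init__(self, members):
        self.members = set(members)      # overlay member identifiers
        self.group_key = os.urandom(16)

    def rekey(self, transport, max_rounds=3):
        """Distribute a fresh group key; retry members that do not ACK."""
        self.group_key = os.urandom(16)
        pending = set(self.members)
        for _ in range(max_rounds):
            if not pending:
                break
            acked = {m for m in pending
                     if transport.send_key(m, self.group_key)}  # True == ACK
            pending -= acked
        return pending                   # members that never acknowledged

class _LossyTransport:                   # stand-in transport for illustration
    def send_key(self, member, key):
        return member != "node-7"        # pretend node-7 never ACKs

server = RekeyServer(["node-1", "node-4", "node-7"])
print(server.rekey(_LossyTransport()))   # {'node-7'} is left unacknowledged
```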
Abstract:
Aquatic ecosystems perform numerous valuable environmental functions. They recycle nutrients, purify water, recharge ground water, augment and maintain stream flow, and provide habitat for a wide variety of flora and fauna and recreation for people. A rapid population increase accompanied by unplanned developmental works has led to the pollution of surface waters due to residential, agricultural, commercial and industrial wastes/effluents and decline in the number of water bodies. Increased demands for drainage of wetlands have been accommodated by channelisation, resulting in further loss of stream habitat, which has led to aquatic organisms becoming extinct or imperiled in increasing numbers and to the impairment of many beneficial uses of water, including drinking, swimming and fishing. Various anthropogenic activities have altered the physical, chemical and biological processes within aquatic ecosystems. An integrated and accelerated effort toward environmental restoration and preservation is needed to stop further degradation of these fragile ecosystems. Failure to restore these ecosystems will result in sharply increased environmental costs later, in the extinction of species or ecosystem types, and in permanent ecological damage.
Abstract:
Precision, sophistication and economic factors in many areas of scientific research demand very large amounts of compute power, so advanced research in the area of high-performance computing has become inevitable. The basic principle of sharing and collaborative work by geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing and Internet computing, and has today metamorphosed into the term grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a pragmatic reference to a set of well-engineered patterns that the practising developer can apply to crafting his or her own specific applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional. All these grids differ in some functionality or other, though the basic principle on which they are built is the same. Despite this, there are no standard requirements listed for building a grid. The grid being a very complex system, it is mandatory to have a standard Software Architecture Specification (SAS), and we attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. This paper proposes the usage of patterns at all levels (analysis, design and architectural) of grid development.
Abstract:
In this paper we present the design of "e-SURAKSHAK," a novel cyber-physical health care management system of Wireless Embedded Internet Devices (WEIDs) that sense vital health parameters. The system is capable of sensing body temperature, heart rate and oxygen saturation level, and also allows noninvasive blood pressure (NIBP) measurement. End-to-end Internet connectivity is provided by a 6LoWPAN-based wireless network that uses the 802.15.4 radio. A service oriented architecture (SOA) [1] is implemented to extract meaningful information and present it in an easy-to-understand form to the end-user, instead of the raw data made available by the sensors. A central electronic database and health care management software have been developed. Vital health parameters are measured and stored periodically in the database. Further, support for real-time measurement of health parameters is provided through a web-based GUI. The system has been implemented completely and demonstrated with multiple users and multiple WEIDs.
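As an illustrative sketch of the periodic storage and retrieval step described above (the schema, field names and units are assumptions, not taken from the paper), vital-sign readings can be written to a central database and the latest reading per user served to the GUI:

```python
# Illustrative sketch only: periodic storage of vital parameters reported by
# WEIDs in a central database, plus retrieval of the latest reading per user.

import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE vitals (
    user_id TEXT, ts REAL, temperature_c REAL,
    heart_rate_bpm REAL, spo2_pct REAL, nibp_mmhg TEXT)""")

def store_reading(user_id, temperature_c, heart_rate_bpm, spo2_pct, nibp_mmhg):
    db.execute("INSERT INTO vitals VALUES (?,?,?,?,?,?)",
               (user_id, time.time(), temperature_c,
                heart_rate_bpm, spo2_pct, nibp_mmhg))

def latest_reading(user_id):
    return db.execute("SELECT * FROM vitals WHERE user_id=? "
                      "ORDER BY ts DESC LIMIT 1", (user_id,)).fetchone()

store_reading("patient-1", 36.8, 72, 98, "120/80")
print(latest_reading("patient-1"))
```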
Abstract:
Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share an address space with the host CPU or the other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, which we call the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM can perform standard set operations such as union, intersection, and difference, as well as find subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
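To make the bounding-box bookkeeping concrete, the sketch below (hypothetical helpers, not the BBMM implementation) represents a hyperrectangular region as per-dimension (lo, hi) index ranges and implements the intersection, containment and bounding-union primitives the abstract refers to.

```python
# Hypothetical helpers for hyperrectangular regions (bounding boxes), each box
# given as a tuple of per-dimension (lo, hi) index ranges.

def intersect(a, b):
    """Intersection of two bounding boxes, or None if they are disjoint."""
    box = tuple((max(al, bl), min(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))
    return box if all(lo <= hi for lo, hi in box) else None

def contains(outer, inner):
    """True if 'outer' covers 'inner' in every dimension (superset test)."""
    return all(ol <= il and ih <= oh
               for (ol, oh), (il, ih) in zip(outer, inner))

def bounding_union(a, b):
    """Smallest single box covering both boxes (may over-approximate)."""
    return tuple((min(al, bl), max(ah, bh)) for (al, ah), (bl, bh) in zip(a, b))

# Two tiles' data footprints on a 2-D array (illustrative index ranges):
tile_a = ((0, 511), (0, 255))
tile_b = ((256, 767), (128, 383))
print(intersect(tile_a, tile_b))       # overlapping region that can be reused
print(contains(tile_a, tile_b))        # False: tile_b is not already resident
print(bounding_union(tile_a, tile_b))  # single allocation covering both tiles
```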
Abstract:
In this work, it is demonstrated that the in situ growth of oriented nanometric aggregates of partially inverted zinc ferrite can potentially pave the way to alter and tune the magnetocrystalline anisotropy that, in turn, dictates the ferromagnetic resonance frequency (f(FMR)), by inducing strain due to aggregation. Furthermore, the influence of interparticle interaction on the magnetic properties of the aggregates is investigated. Monodispersed zinc ferrite nanoparticles (<5 nm) with various degrees of aggregation were prepared through decomposition of metal-organic compounds of zinc(II) and iron(III) in an alcoholic solution under controlled microwave irradiation, below 200 degrees C. The nanocrystallites were found to possess a high degree of inversion (>0.5). With increasing order of aggregation in the samples, the saturation magnetization (at 5 K) is found to decrease from 38 emu/g to 24 emu/g, while the coercivity increases gradually by up to 100% (525 Oe to 1040 Oe). The anisotropy-mediated shift of f(FMR) has also been measured and discussed. In essence, the results show an easy way to control the magnetic characteristics of nanocrystalline zinc ferrite, with a significant degree of inversion, at GHz frequencies. (C) 2015 AIP Publishing LLC.
Abstract:
The feasibility of using protein A to immobilize antibody on a silicon surface for a biosensor with imaging ellipsometry is presented in this study. The amount of human IgG bound to anti-IgG immobilized by protein A on the silicon surface was much greater than that bound to anti-IgG immobilized by physical adsorption. The result indicated that protein A could be used to immobilize antibody molecules in a highly oriented manner and maintain the antibody's functional molecular configuration on the silicon surface. High reproducibility of the amount of immobilized antibody and a homogeneous antibody adsorption layer on surfaces could be obtained by this immobilization method. Imaging ellipsometry has been proven to be a fast and reliable detection method, sensitive enough to detect small changes at the molecular-monolayer level. The combination of imaging ellipsometry and surface modification with protein A has the potential to be further developed into an efficient immunoassay protein chip.
Abstract:
A brief analysis is presented of how heat transfer takes place in porous materials of various types. The emphasis is on materials able to withstand extremes of temperature, gas pressure, irradiation, etc., i.e. metals and ceramics, rather than polymers. A primary aim is commonly to maximize either the thermal resistance (i.e. provide insulation) or the rate of thermal equilibration between the material and a fluid passing through it (i.e. to facilitate heat exchange). The main structural characteristics concern porosity (void content), anisotropy, pore connectivity and scale. The effect of scale is complex, since the permeability decreases as the structure is refined, but the interfacial area for fluid-solid heat exchange is, thereby, raised. The durability of the pore structure may also be an issue, with a possible disadvantage of finer scale structures being poor microstructural stability under service conditions. Finally, good mechanical properties may be required, since the development of thermal gradients, high fluid fluxes, etc. can generate substantial levels of stress. There are, thus, some complex interplays between service conditions, pore architecture/scale, fluid permeation characteristics, convective heat flow, thermal conduction and radiative heat transfer. Such interplays are illustrated with reference to three examples: (i) a thermal barrier coating in a gas turbine engine; (ii) a Space Shuttle tile; and (iii) a Stirling engine heat exchanger. Highly porous, permeable materials are often made by bonding fibres together into a network structure and much of the analysis presented here is oriented towards such materials. © 2005 The Royal Society.
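The scale trade-off noted above (finer structures lower the permeability but raise the fluid-solid interfacial area) can be made explicit with two standard relations for a packed bed of particles of diameter D and porosity ε; these are textbook scalings added here for reference, not results from the paper.

```latex
% Carman-Kozeny permeability and specific interfacial area for spheres of
% diameter D at porosity \varepsilon: refining the structure (smaller D)
% lowers k as D^2 but raises S_v as 1/D.
k \approx \frac{\varepsilon^{3} D^{2}}{180\,(1-\varepsilon)^{2}},
\qquad
S_v = \frac{6\,(1-\varepsilon)}{D}
```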
Abstract:
This research aims to develop a conceptual framework for enquiring into the dynamic growth process of University Spin-outs (hereafter referred to as USOs) in China, attempting to understand the configuration of capabilities necessary for dynamic growth. Based on the extant literature and empirical cases, this study addresses the question of how USOs in China build and configure innovative capabilities to cope with dynamic growth. The paper aims to contribute to the existing literature by providing a theoretical discussion of the USOs' dynamic entrepreneurial process, investigating the interconnections between innovation problem-solving and the required configuration of innovative capabilities across four growth phases. Further, it pays particular attention to the impact of integrative capabilities, in terms of knowledge integration, alliances, venture finance and venture governance, on the USOs' entrepreneurial innovation process. To date, studies that have investigated the dynamic development process of USOs in China and have recognized the heterogeneity of USOs in terms of the capabilities required for rapid growth remain sparse. Addressing this research gap will be of great interest to entrepreneurs, policy makers, and venture investors. ©2009 IEEE.