981 results for resource competition
Abstract:
BACKGROUND: The Framingham Heart Study (FHS), founded in 1948 to examine the epidemiology of cardiovascular disease, is among the most comprehensively characterized multi-generational studies in the world. Many collected phenotypes have substantial genetic contributors, yet most genetic determinants remain to be identified. Using single nucleotide polymorphisms (SNPs) from a 100K genome-wide scan, we examine the associations of common polymorphisms with phenotypic variation in this community-based cohort and provide a full-disclosure, web-based resource of results for future replication studies. METHODS: Adult participants (n = 1345) of the largest 310 pedigrees in the FHS, many biologically related, were genotyped with the 100K Affymetrix GeneChip. These genotypes were used to assess their contribution to 987 phenotypes collected in FHS over 56 years of follow-up, including: cardiovascular risk factors and biomarkers; subclinical and clinical cardiovascular disease; cancer and longevity traits; and traits in pulmonary, sleep, neurology, renal, and bone domains. We conducted genome-wide variance components linkage and population-based and family-based association tests. RESULTS: The participants were white of European descent and from the FHS Original and Offspring Cohorts (examination 1 Offspring mean age 32 ± 9 years, 54% women). This overview summarizes the methods, selected findings, and limitations of the results presented in the accompanying series of 17 manuscripts. The presented association results are based on 70,897 autosomal SNPs meeting the following criteria: minor allele frequency ≥ 10%, genotype call rate ≥ 80%, Hardy-Weinberg equilibrium p-value ≥ 0.001, and Mendelian consistency. Linkage analyses are based on 11,200 SNPs and short-tandem repeats. Results of phenotype-genotype linkages and associations for all autosomal SNPs are posted on the NCBI dbGaP website at http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?id=phs000007. CONCLUSION: We have created a full-disclosure resource of results, posted on the dbGaP website, from a genome-wide association study in the FHS. Because we used three analytical approaches to examine the association and linkage of 987 phenotypes with thousands of SNPs, our results must be considered hypothesis-generating and need to be replicated. Results from the FHS 100K project, with NCBI web posting, provide a resource for investigators to identify high-priority findings for replication.
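To make the stated SNP inclusion criteria concrete, here is a minimal sketch of the corresponding quality-control filter in Python, assuming per-SNP summaries of minor allele frequency, call rate, Hardy-Weinberg p-value, and Mendelian consistency; the field names and example records are illustrative and do not reflect the FHS/dbGaP data layout.

# Illustrative QC filter mirroring the stated criteria:
# MAF >= 10%, call rate >= 80%, HWE p-value >= 0.001, Mendelian consistency.
# Field names and example records are assumptions, not the FHS/dbGaP schema.
from dataclasses import dataclass

@dataclass
class SnpSummary:
    snp_id: str
    maf: float        # minor allele frequency
    call_rate: float  # fraction of samples successfully genotyped
    hwe_p: float      # Hardy-Weinberg equilibrium test p-value
    mendel_ok: bool   # passes Mendelian-consistency checks

def passes_qc(s: SnpSummary) -> bool:
    return (s.maf >= 0.10 and
            s.call_rate >= 0.80 and
            s.hwe_p >= 0.001 and
            s.mendel_ok)

snps = [
    SnpSummary("rs0001", maf=0.23, call_rate=0.97, hwe_p=0.45, mendel_ok=True),
    SnpSummary("rs0002", maf=0.04, call_rate=0.99, hwe_p=0.60, mendel_ok=True),  # fails MAF
    SnpSummary("rs0003", maf=0.31, call_rate=0.75, hwe_p=0.20, mendel_ok=True),  # fails call rate
]
print([s.snp_id for s in snps if passes_qc(s)])  # ['rs0001']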
Abstract:
Resource Allocation Problems (RAPs) are concerned with the optimal allocation of resources to tasks. Problems in fields such as search theory, statistics, finance, economics, logistics, and sensor and wireless networks fit this formulation. In the literature, several centralized/synchronous algorithms have been proposed, including the recently proposed auction algorithm, RAP Auction. Here we present an asynchronous implementation of RAP Auction for distributed RAPs.
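For orientation, the sketch below shows a generic synchronous auction for one-to-one task-to-resource assignment in the general spirit of auction algorithms; it is not the RAP Auction algorithm itself, and the bidding rule, epsilon parameter, and example values are illustrative assumptions.

# Generic auction for one-to-one task-to-resource assignment (illustrative,
# not the RAP Auction of the abstract). Each unassigned task bids for its
# best resource at current prices; prices rise until no task is displaced.
def auction_assignment(value, eps=0.01):
    """value[i][j] = benefit of assigning resource j to task i (square matrix)."""
    n = len(value)
    prices = [0.0] * n
    owner = [None] * n          # owner[j] = task currently holding resource j
    assignment = [None] * n     # assignment[i] = resource held by task i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        gains = [value[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=lambda j: gains[j])
        second = max(g for j, g in enumerate(gains) if j != best) if n > 1 else 0.0
        prices[best] += gains[best] - second + eps   # bid raises the price
        if owner[best] is not None:                  # displace the previous holder
            assignment[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assignment[i] = best
    return assignment, prices

print(auction_assignment([[10, 2], [8, 7]]))  # tasks 0 and 1 end up with resources 0 and 1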
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols to allow resource needs and availability information to be collected and disseminated so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
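The Resource Registry, Task Registry, and resource management interface described above might be captured at the abstract-class level the object-oriented framework calls for roughly as follows; the method names and signatures are assumptions made for illustration, not the project's actual API.

# Sketch of the framework's abstract interfaces (the registry names come from
# the description above; methods and signatures are assumed).
from abc import ABC, abstractmethod

class ResourceRegistry(ABC):
    @abstractmethod
    def advertise(self, resource_id: str, capabilities: dict) -> None:
        """Publish the availability and capabilities of a local resource."""

    @abstractmethod
    def query(self, requirements: dict) -> list:
        """Return ids of resources whose capabilities satisfy the requirements."""

class TaskRegistry(ABC):
    @abstractmethod
    def submit(self, task_id: str, needs: dict, deadline_s: float) -> None:
        """Register a task, its resource needs, and its real-time deadline."""

class ResourceManager(ABC):
    """One concrete manager per model in the object-oriented framework."""
    @abstractmethod
    def schedule(self, tasks: TaskRegistry, resources: ResourceRegistry) -> dict:
        """Map task ids to resource ids under real-time and reliability constraints."""

A concrete ResourceManager subclass can then embody one point in the family of models (for example, a conservative real-time scheduler versus a best-effort one) while clients program only against the abstract interfaces.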
Abstract:
Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely-connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
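A minimal sketch of the informed cache management idea, under assumed data structures: when storage is exhausted, a node retains the cached samples whose locations are most likely to be queried according to a known query-target distribution. The values below are invented for illustration and do not come from the paper.

# Informed cache eviction: keep the samples whose locations have the highest
# query probability. A simplification of the APR cache policy described above.
import heapq

def evict_to_capacity(cache, query_prob, capacity):
    """cache: {location: (timestamp, reading)}; query_prob: {location: probability}."""
    if len(cache) <= capacity:
        return cache
    kept = heapq.nlargest(capacity, cache.items(),
                          key=lambda kv: query_prob.get(kv[0], 0.0))
    return dict(kept)

cache = {"cellA": (10, 21.5), "cellB": (12, 19.0), "cellC": (11, 22.3)}
query_prob = {"cellA": 0.6, "cellB": 0.1, "cellC": 0.3}
print(list(evict_to_capacity(cache, query_prob, capacity=2)))  # ['cellA', 'cellC']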
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals, we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances, we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system, which builds cost and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
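As a toy illustration of multi-dimensional sized types (a Python encoding, not the STEP type system itself), the sketch below annotates an image type with lower and upper resolution bounds and accepts a composition only when the producer's output bounds fit within the consumer's input restrictions; the operations and bounds are assumptions.

# Toy sized types: an image type carries lower/upper resolution bounds, and
# composing two operations type-checks only if the first operation's output
# bounds fit inside the second operation's accepted input bounds.
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageTy:
    min_w: int
    max_w: int
    min_h: int
    max_h: int

    def fits_within(self, other: "ImageTy") -> bool:
        """Every image of this type is acceptable where `other` is expected."""
        return (other.min_w <= self.min_w and self.max_w <= other.max_w and
                other.min_h <= self.min_h and self.max_h <= other.max_h)

@dataclass(frozen=True)
class ImageOp:
    name: str
    accepts: ImageTy   # input resolution restrictions
    produces: ImageTy  # output resolution bounds

def check_compose(f: ImageOp, g: ImageOp) -> bool:
    """Is g(f(x)) well-typed, i.e. do f's output bounds satisfy g's input bounds?"""
    return f.produces.fits_within(g.accepts)

camera = ImageOp("camera", ImageTy(0, 0, 0, 0), ImageTy(640, 640, 480, 480))
detect = ImageOp("face_detect", ImageTy(320, 1920, 240, 1080), ImageTy(1, 1920, 1, 1080))
print(check_compose(camera, detect))  # True: 640x480 lies within [320,1920]x[240,1080]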
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owners of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs), and Message Delivery Applications (MDAs). In the context of FMAs, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. Then, this thesis recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
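The sketch below illustrates best-response dynamics for one simplified collocation-game variant, in which colocated players split a machine's fixed cost in proportion to their demands and a player moves whenever another feasible machine offers a strictly lower cost share; the cost-sharing rule and example values are assumptions, not the exact games analyzed in the paper.

# Best-response dynamics for a simplified collocation game. Each machine has
# a fixed capacity and a fixed cost; colocated players split that cost in
# proportion to their demands. Termination of the loop without any move
# corresponds to a Nash equilibrium of this particular variant.
def best_response_dynamics(demands, capacity, machine_cost, max_rounds=100):
    n = len(demands)
    placement = list(range(n))            # start each player on its own machine
    for _ in range(max_rounds):
        moved = False
        for i in range(n):
            def share(m):                 # player i's cost share if it uses machine m
                load = demands[i] + sum(demands[j] for j in range(n)
                                        if placement[j] == m and j != i)
                return float("inf") if load > capacity else machine_cost * demands[i] / load
            best = min(range(n), key=share)
            if share(best) < share(placement[i]) - 1e-12:
                placement[i] = best
                moved = True
        if not moved:                     # no profitable deviation remains
            return placement
    return placement

print(best_response_dynamics(demands=[0.4, 0.3, 0.5], capacity=1.0, machine_cost=10.0))
# [2, 1, 2]: players 0 and 2 collocate, player 1 stays alone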
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially-hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services are able to handle.
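A stripped-down illustration of such an embedding search follows: candidate hosts for each query node are pruned by resource requirements first, and a backtracking pass over the reduced candidate sets then checks link constraints. The data structures and the most-constrained-first ordering are assumptions; NETEMBED's actual representation and pruning are considerably richer.

# Toy network embedding: prune node candidates by CPU, then backtrack while
# checking bandwidth on links between already-mapped neighbours.
def embed(query_nodes, query_links, host_nodes, host_links):
    """query_nodes: {q: cpu_needed}; host_nodes: {h: cpu_available}
    query_links: {(q1, q2): bw_needed}; host_links: {(h1, h2): bw_available}"""
    candidates = {q: [h for h, cpu in host_nodes.items() if cpu >= need]
                  for q, need in query_nodes.items()}            # node-level pruning

    def link_ok(h1, h2, bw):
        return host_links.get((h1, h2), host_links.get((h2, h1), 0)) >= bw

    order = sorted(candidates, key=lambda q: len(candidates[q]))  # most constrained first
    mapping = {}

    def backtrack(k):
        if k == len(order):
            return dict(mapping)
        q = order[k]
        for h in candidates[q]:
            if h in mapping.values():
                continue
            ok = all(link_ok(h, mapping[other], bw)
                     for (a, b), bw in query_links.items()
                     for other in [b if a == q else a if b == q else None]
                     if other is not None and other in mapping)
            if ok:
                mapping[q] = h
                found = backtrack(k + 1)
                if found is not None:
                    return found
                del mapping[q]
        return None

    return backtrack(0)

print(embed({"a": 2, "b": 1}, {("a", "b"): 5},
            {"h1": 4, "h2": 1, "h3": 2}, {("h1", "h3"): 10, ("h1", "h2"): 3}))
# {'a': 'h1', 'b': 'h3'}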
Abstract:
The concept of attention has been used in many senses, often without clarifying how or why attention works as it does. Attention, like consciousness, is often described in a disembodied way. The present article summarizes neural models and supportive data showing how attention is linked to processes of learning, expectation, competition, and consciousness. A key theme is that attention modulates cortical self-organization and stability. Perceptual and cognitive neocortex is organized into six main cell layers, with characteristic sub-laminae. Attention is part of a unified design of bottom-up, horizontal, and top-down interactions among identified cells in laminar cortical circuits. Neural models clarify how attention may be allocated during processes of visual perception, learning, and search; auditory streaming and speech perception; movement target selection during sensory-motor control; mental imagery and fantasy; and hallucination during mental disorders, among other processes.
Abstract:
The wave energy industry is progressing towards an advanced stage of development, with consideration being given to the selection of suitable sites for the first commercial installations. An informed and accurate characterisation of the wave energy resource is an essential aspect of this process. Ireland is exposed to an energetic wave climate; however, many features of this resource are not well understood. This thesis assesses and characterises the wave energy resource that has been measured and modelled at the Atlantic Marine Energy Test Site, a facility for conducting sea trials of floating wave energy converters that is being developed near Belmullet, on the west coast of Ireland. This characterisation process is undertaken through the analysis of metocean datasets that have previously been unavailable for exposed Irish sites. A number of commonly made assumptions in the calculation of wave power are contested, and the uncertainties resulting from their application are demonstrated. The relationship between commonly used wave period parameters is studied, and its importance in the calculation of wave power quantified, while it is also shown that a disconnect exists between the sea states which occur most frequently at the site and those that contribute most to the incident wave energy. Additionally, observations of the extreme wave conditions that have occurred at the site and estimates of future storms that devices will need to withstand are presented. The implications of these results for the design and operation of wave energy converters are discussed. The foremost contribution of this thesis is the development of an enhanced understanding of the fundamental nature of the wave energy resource at the Atlantic Marine Energy Test Site. The results presented here also have a wider relevance, and can be considered typical of other, similarly exposed, locations on Ireland’s west coast.
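For orientation, the commonly made assumptions the thesis contests concern expressions of the following kind, the standard deep-water approximation for wave power per metre of wave crest (stated here as background; it is not necessarily the exact formulation adopted in the thesis):

P = \frac{\rho g^{2}}{64\pi} H_{m0}^{2} T_e \approx 0.49 \, H_{m0}^{2} \, T_e \ \mathrm{kW/m}

where \rho is the seawater density (about 1025 kg/m^3), g is the gravitational acceleration, H_{m0} is the significant wave height in metres, and T_e is the energy period in seconds; the sensitivity of T_e to the choice of measured wave period parameter is one of the uncertainties the thesis quantifies.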
Abstract:
Understanding the role of marine mammals in specific ecosystems and their interactions with fisheries involves, inter alia, an understanding of their diet and dietary requirements. In this thesis, the foraging ecology of seven marine mammal species that regularly occur in Irish waters was investigated by reconstructing diet using hard parts from digestive tracts and scats. Of the species examined, two (striped and Atlantic white-sided dolphin) can be considered offshore species or species inhabiting neritic waters, while four others usually inhabit more coastal areas (white-beaked dolphin, harbour porpoise, harbour seal and grey seal); the last species studied was the bottlenose dolphin, whose population structure is more complex, with coastal and offshore populations. A total of 13,028 prey items from at least 81 different species (62 fish species, 14 cephalopods, four crustaceans, and a tunicate) were identified. Twenty-eight percent of the fish species were identified using bones other than otoliths, highlighting the importance of using all identifiable structures to reconstruct diet. Individually, each species of marine mammal presented a high diversity of prey taxa, but the locally abundant Trisopterus spp. were found to be the most important prey item for all species, indicating that Trisopterus spp. is probably a key species in understanding the role of these predators in Irish waters. In the coastal marine mammals, other Gadiformes species (haddock, pollack, saithe, whiting) also contributed substantially to the diet; in contrast, in pelagic or less coastal marine mammals, prey consisted largely of planktivorous fish, such as Atlantic mackerel, horse mackerel, blue whiting, and mesopelagic prey. Striped dolphins and Atlantic white-sided dolphins are offshore small cetaceans foraging in neritic waters. Differences between the diet of striped dolphins collected in drift nets targeting tuna and stranded on Irish coasts showed a complex foraging behaviour; the diet information shows that although this dolphin forages mainly in oceanic waters it may occasionally forage on the continental shelf, feeding on available prey. The Atlantic white-sided dolphin diet showed that this species prefers to feed over the continental edge, where planktivorous fish are abundant. Some resource partitioning was found in bottlenose dolphins in Irish waters, consistent with previous genetic and stable isotope analysis studies. Bottlenose dolphins in Irish waters appear to be generalist feeders, consuming more than 30 prey species; however, most of the diet comprised a few locally abundant species, especially gadoid fish, including the haddock/pollack/saithe group and Trisopterus spp., but the contributions of Atlantic hake, conger eels, and the pelagic planktivorous horse mackerel were also important. Stomach content information suggests that three different feeding behaviours might occur in bottlenose dolphin populations in Irish waters: firstly, a coastal behaviour, with animals feeding on prey that mainly inhabit areas close to the coast; secondly, an offshore behaviour where dolphins feed on offshore species such as squid or mesopelagic fish; and a third, more complex behaviour that involves movements over the continental shelf and close to the shelf edge. The other three coastal marine mammal species (harbour porpoise, harbour seal and grey seal) were found to be feeding on similar prey, and competition for food resources among these sympatric species might occur.
Both species of seals were found to have a high overlap (more than 80%) in their diet composition, but while grey seals feed on large fish (>110 mm), harbour seals feed mostly on smaller fish (<110 mm), suggesting some spatial segregation in foraging. Harbour porpoises and grey seals are potentially competing for the same food resource, but some differences in prey species were found, and some habitat partitioning might occur. Direct interaction (bycatch) between dolphins and fisheries was detected in all species. Most of the prey found in the stomach contents from both stranded and bycaught dolphins were smaller than those targeted by commercial fisheries. In fact, the total annual food consumption of the species studied was found to be very small (225,160 tonnes) in comparison to fishery landings for the same area (~2 million tonnes). However, marine mammal species might be indirectly interacting with fisheries, removing forage fish. Incorporating the dietary information obtained from the four coastal species, an ECOPATH food web model was established for the Irish Sea, based on data from 2004. Five trophic levels were found, with bottlenose dolphins and grey and harbour seals occurring at the highest trophic level. A comparison with a previous model based on 1973 data suggests that while the overall Irish Sea ecosystem appears to be “maturing”, some indices indicate that the 2004 fishery was less efficient and was targeting fish at higher trophic levels than in 1973, which is reflected in the mean trophic level of the catch. Depletion or substantial decrease of some of the Irish Sea fish stocks has resulted in a significant decline in landings in this area. The integration of diet information in mass-balance models to construct ecosystem food-webs will help to understand the trophic role of these apex predators within the ecosystem.
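Dietary overlap figures such as the more-than-80% reported for the two seal species are commonly computed with Schoener's index; whether this particular index was used in the thesis is an assumption here, and the prey proportions below are invented purely to show the calculation.

# Schoener's dietary overlap index between two predators, from the proportion
# each prey taxon contributes to each diet. Values are illustrative only.
def schoener_overlap(p, q):
    """p, q: {prey taxon: proportion of diet}; each should sum to 1."""
    taxa = set(p) | set(q)
    return 1.0 - 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in taxa)

grey_seal    = {"Trisopterus": 0.45, "haddock": 0.25, "whiting": 0.20, "other": 0.10}
harbour_seal = {"Trisopterus": 0.50, "haddock": 0.20, "whiting": 0.15, "other": 0.15}
print(round(schoener_overlap(grey_seal, harbour_seal), 2))  # 0.9, i.e. 90% overlap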
Abstract:
This paper explores the “resource curse” problem as a counter-example to creative performance and innovation by examining reliance on capital and physical resources, showing that the gap between expectations and ex-post actual performance became clearer under conditions of economic turmoil. The analysis employs logistic regressions with dichotomous response and predictor variables, showing significant results. Several findings that have use for economic and business practice follow. First, in a transition period, a typical characteristic of successful firms was their reliance on either capital resources or physical asset endowments, whereas the innovation factor was not significant. Second, poor-performing enterprises exhibited evidence of over-reliance on both capital and physical assets. Third, firms that relied on both types of resources tended to downplay creative performance. Fourth, reliance on capital/physical resources and adoption of “creative discipline/innovations” tend to be mutually exclusive. In fact, some evidence suggests that firms face a more acute problem caused by the law of diminishing returns in troubled times. The Vietnamese corporate sector’s addiction to resources may contribute to economic deterioration, through a downward spiral of lower efficiency leading to consumption of more resources. The “innovation factor” has not been tapped as a source of economic growth. The absence of innovation and creativity has made the notion of “resource curse” identical to “destructive creation” implemented by ex-ante resource-rich firms, and worsened the problem of resource misallocation in transition turmoil.
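The reported analysis has the shape of a logistic regression in which both the response (for example, performing well during the downturn) and the predictors (reliance on capital resources, reliance on physical assets, adoption of innovations) are dichotomous. A minimal sketch follows; the variable names, coefficients, and simulated data are invented for illustration and are not the paper's data.

# Logistic regression with a dichotomous response and dichotomous predictors.
# All data here are simulated; only the structure of the model is the point.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
capital  = rng.integers(0, 2, n)    # 1 = relies on capital resources
physical = rng.integers(0, 2, n)    # 1 = relies on physical assets
innovate = rng.integers(0, 2, n)    # 1 = adopted innovations
logit_p = -0.5 + 0.9 * capital + 0.7 * physical - 0.1 * innovate
performed_well = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(np.column_stack([capital, physical, innovate]))
result = sm.Logit(performed_well, X).fit(disp=0)
print(result.params)   # coefficients for the constant, capital, physical, innovate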
Abstract:
William Primrose (1903-1982) and Lionel Tertis (1876-1975) made the viola a grand instrument for public performances of solo and chamber music throughout their long and active lives characterized by a common passion for the viola. I, too, have been deeply inspired by their passion for the viola. I chose, therefore, for my doctoral performance project to feature works for viola from the required repertoire of the William Primrose and Lionel Tertis competitions of 2001 and 2003, respectively. For purposes of the performances, I divided selections from the combined repertoire for the William Primrose and Lionel Tertis competitions into three recitals. The first recital included Sonata, Opus 120, No.2 in E-flat Major (1894) by Johannes Brahms; Sonata, Opus 147 (1975) by Dmitri Shostakovich; and Sonata (1919) by Rebecca Clarke. These pieces represent standard components of the general repertoire for both the Primrose and Tertis competitions. The second recital was comprised of two works dedicated by their composers to Primrose: Lachrymae, Opus 48 (1950) by Benjamin Britten; and Concerto (1945) by Bela Bartok. The third recital included three pieces dedicated by their composers to Tertis: Sonata (1922) by Arnold Bax; Sonata in C Minor (1905) by York Bowen; and Sonata (1952) by Arthur Bliss. The goal of my preparation for these recitals was to emphasize a variety of techniques and, also, the unique timbre of the viola. For example, the works I selected emphasized high-position technique, which was not much used before the nineteenth century, and featured the lowest string (the C-string), which provides a beautifully somber and austere sonority characteristic of the viola. For these reasons, the selected works provided not only attractive and interesting pieces to study and perform but were also of educational merit.
Abstract:
In many important high-technology markets, including software development, data processing, communications, aeronautics, and defense, suppliers learn through experience how to provide better service at lower cost. This paper examines how a buyer designs dynamic competition among rival suppliers to exploit learning economies while minimizing the costs of becoming locked in to one producer. Strategies for controlling dynamic competition include the handicapping of more efficient suppliers in procurement competitions, the protection and allocation of intellectual property, and the sharing of information among rival suppliers. (JEL C73, D44, L10).