820 results for Local management
Abstract:
Tedd, L.A. (2007). Library management systems in the UK: 1960s–1980s. Library History, 23(4), 301–316. Originally published (as above) by Maney Publishing.
Abstract:
Barry, L., Tedd, L.A. (2008). Local studies collections online: an investigation in Irish public libraries. Program: electronic library and information systems, 42(2), 163–186.
Abstract:
Dennis, P., Aspinall, R. J., Gordon, I. J. (2002). Spatial distribution of upland beetles in relation to landform, vegetation and grazing management. Basic and Applied Ecology, 3(2), 183–193. Sponsorship: SEERAD RAE2008
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. 
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. 
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
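The speculative prefetching idea described in the middleware section above can be illustrated with a minimal sketch. All names here (`PrefetchPredictor`, `record`, `predict_next`) are hypothetical illustrations, not the project's actual API: a first-order model counts observed transitions between requested documents and predicts the most likely next request, which a cooperating server or proxy could then prefetch or cache.

```python
from collections import defaultdict

class PrefetchPredictor:
    """First-order model of document access patterns: counts observed
    transitions between requested documents and predicts the most likely
    next request, which a server could push or a proxy could prefetch."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last = None

    def record(self, doc):
        # Update transition counts from the previously requested document.
        if self.last is not None:
            self.transitions[self.last][doc] += 1
        self.last = doc

    def predict_next(self, doc):
        # Return the most frequently observed successor, or None.
        followers = self.transitions.get(doc)
        if not followers:
            return None
        return max(followers, key=followers.get)

predictor = PrefetchPredictor()
for doc in ["index.html", "a.html", "index.html", "a.html", "index.html", "b.html"]:
    predictor.record(doc)
```

In a real deployment the transition counts would be derived via the multiserver cooperation the abstract mentions, rather than from a single local request stream.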
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owner of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs), and Message Delivery Applications (MDAs). In the context of FMA, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. Then, this thesis recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. 
The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
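The idea of exploiting slack in a spatio-temporal schedule can be sketched as a simple feasibility test. Everything below is a hypothetical illustration (the thesis's actual algorithms are more sophisticated): a mobile node accepts a message-delivery detour only if the extra travel time fits within the slack before its next scheduled waypoint deadline.

```python
import math

def travel_time(a, b, speed=1.0):
    # Euclidean travel time between two 2-D points at constant speed.
    return math.dist(a, b) / speed

def accept_detour(pos, now, detour_stop, next_stop, deadline, speed=1.0):
    """Greedy slack test: a node at `pos` that must reach `next_stop`
    by `deadline` accepts a delivery detour through `detour_stop` only
    if the added travel time fits within its schedule slack."""
    direct = travel_time(pos, next_stop, speed)
    with_detour = (travel_time(pos, detour_stop, speed)
                   + travel_time(detour_stop, next_stop, speed))
    slack = deadline - now - direct
    return (with_detour - direct) <= slack
```

For example, a node at the origin heading to (2, 0) with deadline 4.0 can afford a detour through (0, 1), but with deadline 3.0 it cannot.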
Abstract:
The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially-conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. -control mode), each AS dynamically adjusts its own path preferences—increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
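The feedback loop sketched in the abstract (observe local route flaps, then demote flapping paths) can be illustrated in miniature. This is a toy model under assumed names (`ApmsRouter`, `observe_flap`, `best_path`) and assumed parameters, not the paper's actual mechanism: once a path's flap count crosses a threshold, the AS enters conflict-control mode and lowers that path's local preference, so an observably stable path wins route selection.

```python
from collections import defaultdict

class ApmsRouter:
    """Toy model of APMS-style adaptive preference adjustment: an AS
    watches its own route flaps and, past a threshold, demotes the
    preference of flapping paths relative to stable ones."""

    def __init__(self, prefs, flap_threshold=3, penalty=20):
        self.prefs = dict(prefs)       # candidate path -> local preference
        self.flaps = defaultdict(int)  # candidate path -> observed flap count
        self.flap_threshold = flap_threshold
        self.penalty = penalty

    def observe_flap(self, path):
        # A withdrawal/re-advertisement of `path` counts as one flap;
        # past the threshold, each further flap demotes the path.
        self.flaps[path] += 1
        if self.flaps[path] >= self.flap_threshold:
            self.prefs[path] -= self.penalty

    def best_path(self):
        # Select the currently most-preferred candidate path.
        return max(self.prefs, key=self.prefs.get)

router = ApmsRouter({"via_AS2": 100, "via_AS3": 90})
for _ in range(3):
    router.observe_flap("via_AS2")
```

A full implementation would also apply the paper's mechanism for discounting flaps caused by topology changes before counting them against a path.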
Abstract:
The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing which allows each AS to independently define a set of local policies on which routes it accepts and advertises from/to other networks, as well as on which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts, each AS dynamically adjusts its own path preferences---increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the sub-stability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
Abstract:
Open environments involve distributed entities interacting with each other in an open manner. Many distributed entities are unknown to each other but need to collaborate and share resources in a secure fashion. Usually resource owners alone decide who is trusted to access their resources. Since resource owners in open environments do not have a complete picture of all trusted entities, trust management frameworks are used to ensure that only authorized entities will access requested resources. Every trust management system has limitations, and the limitations can be exploited by malicious entities. One vulnerability is due to the lack of globally unique interpretation for permission specifications. This limitation means that a malicious entity which receives a permission in one domain may misuse the permission in another domain via some deceptive but apparently authorized route; this malicious behaviour is called subterfuge. This thesis develops a secure approach, Subterfuge Safe Trust Management (SSTM), that prevents subterfuge by malicious entities. SSTM employs the Subterfuge Safe Authorization Language (SSAL) which uses the idea of a local permission with a globally unique interpretation (localPermission) to resolve the misinterpretation of permissions. We model and implement SSAL with an ontology-based approach, SSALO, which provides a generic representation for knowledge related to the SSAL-based security policy. SSALO enables integration of heterogeneous security policies which is useful for secure cooperation among principals in open environments where each principal may have a different security policy with different implementation. The other advantage of an ontology-based approach is the Open World Assumption, whereby reasoning over an existing security policy is easily extended to include further security policies that might be discovered in an open distributed environment. 
We add two extra SSAL rules to support dynamic coalition formation and secure cooperation among coalitions. Secure federation of cloud computing platforms and secure federation of XMPP servers are presented as case studies of SSTM. The results show that SSTM provides robust accountability for the use of permissions in federation. It is also shown that SSAL is a suitable policy language to express the subterfuge-safe policy statements due to its well-defined semantics, ease of use, and integrability.
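The localPermission idea (a permission with a globally unique interpretation) can be sketched very simply. The names below (`LocalPermission`, `authorize`) and the scenario are hypothetical illustrations of the concept, not SSAL syntax: binding each permission to its issuing domain means a same-named permission obtained in another domain does not match, which is exactly what blocks subterfuge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalPermission:
    """A permission bound to the domain that issued it, giving it a
    globally unique interpretation (sketch of the localPermission idea)."""
    issuer: str   # issuing domain, e.g. "bank.example"
    action: str   # the permitted action, e.g. "withdraw"

def authorize(granted, request):
    # A request is honoured only if the identical issuer-qualified
    # permission was granted; a same-named action issued by another
    # domain does not match, preventing cross-domain misuse.
    return request in granted

granted = {LocalPermission("bank.example", "withdraw")}
legit = LocalPermission("bank.example", "withdraw")
subterfuge = LocalPermission("evil.example", "withdraw")
```

Here the deceptive request fails because its issuer differs, even though the action name is identical.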
Abstract:
The past few years have witnessed an exponential increase in studies trying to identify molecular markers in patients with breast tumours that might predict for the success or failure of hormonal therapy or chemotherapy. HER2, a tyrosine kinase membrane receptor of the epidermal growth factor receptor family, has been the most widely studied marker in this respect. This paper attempts to critically review to what extent HER2 may improve 'treatment individualisation' for the breast cancer patient. Copyright (C) 2000.
Abstract:
Community-based management and the establishment of marine reserves have been advocated worldwide as means to overcome overexploitation of fisheries. Yet, researchers and managers are divided regarding the effectiveness of these measures. The "tragedy of the commons" model is often accepted as a universal paradigm, which assumes that unless managed by the State or privatized, common-pool resources are inevitably overexploited due to conflicts between the self-interest of individuals and the goals of a group as a whole. Under this paradigm, the emergence and maintenance of effective community-based efforts that include cooperative risky decisions, such as the establishment of marine reserves, could not occur. In this paper, we question these assumptions and show that outcomes of commons dilemmas can be complex and scale-dependent. We studied the evolution and effectiveness of a community-based management effort to establish, monitor, and enforce a marine reserve network in the Gulf of California, Mexico. Our findings build on social and ecological research before (1997-2001), during (2002) and after (2003-2004) the establishment of marine reserves, which included participant observation in >100 fishing trips and meetings, interviews, as well as fishery-dependent and fishery-independent monitoring. We found that locally crafted and enforced harvesting rules led to a rapid increase in resource abundance. Nevertheless, news about this increase spread quickly at a regional scale, resulting in poaching from outsiders and a subsequent rapid cascading effect on fishing resources and locally-designed rule compliance. We show that cooperation for management of common-pool fisheries, in which marine reserves form a core component of the system, can emerge, evolve rapidly, and be effective at a local scale even in recently organized fisheries. Stakeholder participation in monitoring, where there is a rapid feedback of the system's response, can play a key role in reinforcing cooperation. 
However, without cross-scale linkages with higher levels of governance, increase of local fishery stocks may attract outsiders who, if not restricted, will overharvest and threaten local governance. Fishers and fishing communities require incentives to maintain their management efforts. Rewarding local effective management with formal cross-scale governance recognition and support can generate these incentives.
Abstract:
Gemstone Team GREEN JUSTICE
Abstract:
A distinctive subset of metastatic breast cancer (MBC) is oligometastatic disease, which is characterized by single or few detectable metastatic lesions. The existing treatment guidelines for patients with localized MBC include surgery, radiotherapy, and regional chemotherapy. The European School of Oncology-Metastatic Breast Cancer Task Force addressed the management of these patients in its first consensus recommendations published in 2007. The Task Force endorsed the possibility of a more aggressive and multidisciplinary approach for patients with oligometastatic disease, stressing also the need for clinical trials in this patient population. At the sixth European Breast Cancer Conference, held in Berlin in March 2008, the second public session on MBC guidelines addressed the controversial issue of whether MBC can be cured. In this commentary, we summarize the discussion and related recommendations regarding the available therapeutic options that are possibly associated with cure in these patients. In particular, data on local (surgery and radiotherapy) and chemotherapy options are discussed. Large retrospective series show an association between surgical removal of the primary tumor or of lung metastases and improved long-term outcome in patients with oligometastatic disease. In the absence of data from prospective randomized studies, removal of the primary tumor or isolated metastatic lesions may be an attractive therapeutic strategy in this subset of patients, offering rapid disease control and potential for survival benefit. Some improvement in outcome may also be achieved with optimization of systemic therapies, possibly in combination with optimal local treatment. © 2010. Published by Oxford University Press.
Abstract:
A cross-domain workflow application may be constructed using a standard reference model such as the one by the Workflow Management Coalition (WfMC) [7], but the requirements for this type of application are inherently different from one organization to another. The existing models and systems built around them meet some but not all the requirements from all the organizations involved in a collaborative process. Furthermore, the requirements change over time. This makes the applications difficult to develop and distribute. Service Oriented Architecture (SOA) based approaches such as BPEL (Business Process Execution Language) intend to provide a solution but fail to address the problems sufficiently, especially in situations where the expectations and level of skills of the users (e.g. the participants of the processes) in different organisations are likely to be different. In this paper, we discuss a design pattern that provides a novel approach towards a solution. In the solution, business users can design the applications at a high level of abstraction: the use cases and user interactions; the designs are documented and used, together with the data and events captured later that represent the user interactions with the systems, to feed an intermediate component local to the users, the IFM (InterFace Mapper), which bridges the gaps between the users and the systems. We discuss the main issues faced in the design and prototyping. The approach alleviates the need for re-programming with the APIs to any back-end service, thus easing the development and distribution of the applications.
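The bridging role of an IFM-style component can be sketched as an indirection table. The names below (`InterFaceMapper`, `handle`, `order_service`) are hypothetical stand-ins, not the paper's design: high-level user actions are resolved through a local mapping to backend operations, so a backend change requires only a mapping update rather than reprogramming against new APIs.

```python
class InterFaceMapper:
    """Sketch of an IFM-style component: high-level use-case actions are
    resolved through a local mapping table to backend operations, keeping
    user-facing designs decoupled from backend service APIs."""

    def __init__(self, mapping, services):
        self.mapping = mapping    # use-case action -> (service name, operation)
        self.services = services  # service name -> callable backend stub

    def handle(self, action, payload):
        # Resolve the user's action and dispatch it to the mapped backend.
        service_name, operation = self.mapping[action]
        return self.services[service_name](operation, payload)

# Hypothetical backend stub standing in for a real workflow service.
def order_service(operation, payload):
    return f"{operation}:{payload['id']}"

ifm = InterFaceMapper(
    mapping={"submit_order": ("orders", "create")},
    services={"orders": order_service},
)
```

Swapping in a different `orders` backend, or remapping `submit_order` to another operation, changes only the two dictionaries passed to the mapper.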
Abstract:
The theory of New Public Management (NPM) suggests that one of the features of advanced liberal rule is the tendency to define social, economic and political issues as problems to be solved through management. This paper argues that the restructuring of Higher Education (HE) in many Western countries since the 1980s has involved a shift from an emphasis on administration and policy to one of its efficient management. Utilising Foucault’s concept of governmentality rather than the liberal discourse of management as a politically neutral technology, managerialism can be seen as a newly emergent and increasingly rationalised disciplinary regime of governmentalising practices in advanced liberalism. As such, the contemporary University as an institution governed by NPM can be demonstrated to have emerged not as the direct outcome of democratic policies that have rationalised its activities (so that the emancipatory aims of personal development, an educated workforce and of true research can be fully realised), nor can it be understood as the instrument through which individuals or self-realising classes are defeated through the calculations of the state acting on behalf of economic interests; rather, it can be seen as the contingent and intractable outcome of the complex power/knowledge relations of advanced liberalism. I analyse the interlocking of the ‘tutor-subject’ and ‘student-subject’ as a local enacting of policy discourse informed by the NPM of HE that reshapes subjectivity and retunes the relationship between tutor and student. I put forward suggestions for how resistance to these new modes of disciplinary subjectification can be enacted.
Abstract:
Regime shifts are abrupt changes between contrasting, persistent states of any complex system. The potential for their prediction in the ocean and possible management depends upon the characteristics of the regime shifts: their drivers (from anthropogenic to natural), scale (from the local to the basin) and potential for management action (from adaptation to mitigation). We present a conceptual framework that will enhance our ability to detect, predict and manage regime shifts in the ocean, illustrating our approach with three well-documented examples: the North Pacific, the North Sea and Caribbean coral reefs. We conclude that the ability to adapt to, or manage, regime shifts depends upon their uniqueness, our understanding of their causes and linkages among ecosystem components and our observational capabilities.