867 results for Emergent and Distributed Systems (IJPEDS)
Abstract:
Aquatic agricultural systems (AAS) are diverse production and livelihood systems where families cultivate a range of crops, raise livestock, farm or catch fish, gather fruits and other tree crops, and harness natural resources such as timber, reeds, and wildlife. Aquatic agricultural systems occur along freshwater floodplains, coastal deltas, and inshore marine waters, and are characterized by dependence on seasonal changes in productivity, driven by seasonal variation in rainfall, river flow, and/or coastal and marine processes. Despite this natural productivity, the farming, fishing, and herding communities who live in these systems are among the poorest and most vulnerable in their countries and regions. This report provides an overview of the scale and scope of development challenges in coastal aquatic agricultural systems, their significance for poor and vulnerable communities, and the opportunities for partnership and investment that support efforts of these communities to secure resilient livelihoods in the face of multiple risks.
Abstract:
In studying hydrosphere, atmosphere, and biosphere interactions, it is useful to focus on specific subsystem processes and energy exchanges (forcing). Since subsystem scales range over ten orders of magnitude, it may be difficult to focus research on scales that will yield useful results in terms of establishing causal and predictive connections between more easily and less easily observed subsystems. In an effort to find pertinent scales, we have begun empirical investigations into relationships between atmospheric, oceanic, and biological systems having spatial scales exceeding 10^3 kilometers and temporal scales of six months or more.
Abstract:
Sociomateriality has been attracting growing attention in the Organization Studies and Information Systems literatures since 2007, with more than 140 journal articles now referring to the concept. Over 80 percent of these articles have been published since January 2011 and almost all cite the work of Orlikowski (2007, 2010; Orlikowski and Scott 2008) as the source of the concept. Only a few, however, address all of the notions that Orlikowski suggests are entailed in sociomateriality, namely materiality, inseparability, relationality, performativity, and practices, with many employing the concept quite selectively. The contribution of sociomateriality to these literatures is, therefore, still unclear. Drawing on evidence from an ongoing study of the adoption of a computer-based clinical information system in a hospital critical care unit, this paper explores whether the notions, individually and collectively, offer a distinctive and coherent account of the relationship between the social and the material that may be useful in Information Systems research. It is argued that if sociomateriality is to be more than simply a label for research employing a number of loosely related existing theoretical approaches, then studies employing the concept need to pay greater attention to the notions entailed in it and to differences in their interpretation.
Abstract:
In this paper, two models of coalition formation and income distribution in FSCS (fuzzy supply chain systems) are proposed based on fuzzy set theory and fuzzy cooperative game theory. The recursive equations for fuzzy dynamic coalition choice are constructed in terms of the sup-t composition of fuzzy relations, where t is a triangular norm. The existence of the fuzzy relations in FSCS is also proved. In addition, approaches are given for ascertaining the fuzzy coalition through the recursive choice equations and for distributing the fuzzy income in FSCS via fuzzy Shapley values. These models are discussed in two parts: the fuzzy dynamic coalition choice of different units in FSCS, and the fuzzy income distribution model among different participants in the same coalition. Furthermore, numerical examples are given to illustrate these models, and the results show that the models are feasible and valid in FSCS.
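For orientation only (the paper's own fuzzy coalition and fuzzy Shapley constructions are not reproduced in the abstract), the sketch below shows the two standard ingredients the abstract builds on: the sup-t composition of fuzzy relations with t = min, and the classical crisp Shapley value that the fuzzy Shapley value generalizes. All function names are illustrative.

    import numpy as np
    from itertools import combinations
    from math import factorial

    def sup_t_compose(R, S, t=np.minimum):
        """Sup-t composition of fuzzy relations R (m x n) and S (n x p).

        (R o S)[i, k] = max_j t(R[i, j], S[j, k]); with t = min this is the
        familiar max-min composition.
        """
        m, p = R.shape[0], S.shape[1]
        out = np.zeros((m, p))
        for i in range(m):
            for k in range(p):
                out[i, k] = np.max(t(R[i, :], S[:, k]))
        return out

    def shapley(players, v):
        """Classical (crisp) Shapley value for a characteristic function v,
        given as a dict mapping frozensets of players to coalition payoffs."""
        n = len(players)
        phi = {p: 0.0 for p in players}
        for p in players:
            others = [q for q in players if q != p]
            for r in range(len(others) + 1):
                for S in combinations(others, r):
                    S = frozenset(S)
                    w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                    phi[p] += w * (v[S | {p}] - v[S])
        return phi

For two participants with v({}) = 0, v({A}) = 1, v({B}) = 2, v({A,B}) = 4, this yields phi(A) = 1.5 and phi(B) = 2.5; the paper's fuzzy version replaces these crisp coalitions and payoffs with fuzzy ones.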
Abstract:
We derive a class of inequalities for detecting entanglement in mixed SU(2) and SU(1,1) systems based on the Schrödinger-Robertson indeterminacy relations in conjunction with the partial transposition. These inequalities are in general stronger than those based on the usual Heisenberg uncertainty relations for detecting entanglement. Furthermore, based on the complete reduction from SU(2) and SU(1,1) systems to bosonic systems, we derive some entanglement conditions for two-mode systems. We also use the partial reduction to obtain some inequalities in the mixed SU(2) (or SU(1,1)) and bosonic systems.
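For reference, the Schrödinger-Robertson indeterminacy relation invoked here is the standard strengthening of the Heisenberg bound that retains the symmetrized covariance term,

    \sigma_A^2 \, \sigma_B^2 \;\ge\; \Big| \tfrac{1}{2}\langle \{A,B\} \rangle - \langle A \rangle \langle B \rangle \Big|^2 + \Big| \tfrac{1}{2i}\langle [A,B] \rangle \Big|^2 ,

and dropping the first term on the right recovers the usual Heisenberg-Robertson relation, which is why entanglement conditions built on the stronger bound are generally tighter.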
Abstract:
Change in thermal conditions can substantially affect crop growth, cropping systems, agricultural production and land use. In the present study, we used annual accumulated temperature > 10 degrees C (AAT10) as an indicator to investigate the spatio-temporal changes in thermal conditions across China from the late 1980s to 2000, at a spatial resolution of 1 x 1 km. We also investigated the effects of these spatio-temporal changes on cultivated land use and cropping systems. We found that AAT10 has increased on a national scale since the late 1980s. In particular, 3.16 x 10^5 km^2 of land moved from the spring wheat zone (AAT10: 1600 to 3400 degrees C) to the winter wheat zone (AAT10: 3400 to 4500 degrees C). Changes in thermal conditions had large influences on cultivated land area and cropping systems. The area of cultivated land has increased in regions with increasing AAT10, and the cropping rotation index has increased since the late 1980s. Single cropping was replaced by 3 crops in 2 years in many regions, and areas of winter wheat cultivation shifted northward in some areas, such as the eastern Inner Mongolia Autonomous Region and western Liaoning and Jilin Provinces.
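As a rough illustration of the indicator (an assumption about its exact construction; the study's own procedure, e.g. requiring daily means to be stably above the threshold, may differ), AAT10 can be computed by summing daily mean temperatures over the days on which the daily mean exceeds 10 degrees C:

    def aat10(daily_mean_temps_c, threshold=10.0):
        """Annual accumulated temperature above `threshold` degrees C.

        Sums the daily mean temperature over every day whose mean exceeds the
        threshold; a cell whose AAT10 rises from ~3300 to ~3500 would move from
        the spring wheat zone (1600-3400) into the winter wheat zone (3400-4500)
        used in the study.
        """
        return sum(t for t in daily_mean_temps_c if t > threshold)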
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
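Hamal's actual protocol is not reproduced in the abstract; the sketch below only illustrates the two properties named there, guaranteed delivery and idempotence over a network that may silently discard packets, using per-message sequence numbers, retransmission until acknowledgement, and duplicate suppression at the receiver. All names are illustrative.

    import random

    class Sender:
        def __init__(self):
            self.next_seq = 0
            self.unacked = {}            # seq -> payload, retransmitted until acked

        def send(self, payload):
            self.unacked[self.next_seq] = payload
            self.next_seq += 1

        def ack(self, seq):
            self.unacked.pop(seq, None)

    class Receiver:
        def __init__(self):
            self.received = {}           # seq -> payload, applied at most once

        def receive(self, seq, payload):
            if seq not in self.received: # duplicates from retransmission are ignored
                self.received[seq] = payload
            return seq                   # always (re)acknowledge

    def run(messages, loss=0.3, rng=random.Random(0)):
        s, r = Sender(), Receiver()
        for m in messages:
            s.send(m)
        while s.unacked:                 # keep retransmitting until everything is acked
            for seq, payload in list(s.unacked.items()):
                if rng.random() < loss:  # the network may discard the message
                    continue
                ack = r.receive(seq, payload)
                if rng.random() >= loss: # the acknowledgement may also be discarded
                    s.ack(ack)
        return [r.received[k] for k in sorted(r.received)]

    assert run(list("hamal")) == list("hamal")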
Abstract:
Strategic frameworks seeking to explain how an organisation may generate superior performance are numerous. Earlier approaches centred on the competitive position of an organisation within its industry, with subsequent attention focused on an organisation's core competences. More recently, research has concentrated on knowledge and organisational learning. By reference to a study of airline-developed computer reservation systems (CRSs), this article explores the strategic importance of information in creating knowledge to generate superior performance. By examining developments in the use, management and control of information derived from CRSs, evidence is presented to explain how CRS-owning airlines have circumvented regulatory controls and, increasingly, competition to sustain competitive advantage through the development of their information and knowledge systems. This research demonstrates the need for organisations to develop 'knowledge facilitators' that foster the creation of new knowledge. Equally, managers must develop 'knowledge inhibitors' that help to sustain competitive advantage by limiting the ability of competitors to create knowledge themselves.
Abstract:
Bradshaw, K. & Urquhart, C. (2005). Theory and practice in strategic planning for health information systems. In: D. Wainwright (Ed.), UK Academy for Information Systems 10th conference 2005, 22-24 March 2005 (CD-ROM). Newcastle upon Tyne: Northumbria University.
Abstract:
Huelse, M., Wischmann, S., Manoonpong, P., Twickel, A.v., Pasemann, F.: Dynamical Systems in the Sensorimotor Loop: On the Interrelation Between Internal and External Mechanisms of Evolved Robot Behavior. In: M. Lungarella, F. Iida, J. Bongard, R. Pfeifer (Eds.) 50 Years of Artificial Intelligence, LNCS 4850, Springer, 186 - 195, 2007.
Abstract:
We consider the general problem of synchronizing the data on two devices using a minimum amount of communication, a core infrastructural requirement for a large variety of distributed systems. Our approach considers the interactive synchronization of prioritized data, where, for example, certain information is more time-sensitive than other information. We propose and analyze a new scheme for efficient priority-based synchronization, which promises benefits over conventional synchronization.
Abstract:
The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed memory computers. Most systems provide a shared-memory that is coherent in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared-memory in which the coherence constraints are substantially relaxed. In the course of the development of the model I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior. Thus, one can trade-off the ease of programming with coherent memory for improved performance with non-coherent memory. As an example, I present a program to solve a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
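The Mermera program itself is not shown in the abstract; as a rough illustration of the kind of computation described, the sketch below runs an asynchronous Jacobi-style iteration in which worker threads update their own entries of x using whatever values of the other entries they happen to observe, with no barrier between sweeps (the shared array stands in for a non-coherent shared memory; names and parameters are illustrative).

    import threading
    import numpy as np

    def async_jacobi(A, b, iters=2000, workers=4):
        """Asynchronous Jacobi-style solver for a diagonally dominant system Ax = b.

        Each worker repeatedly refreshes its own block of x from whatever values
        of the other entries are currently visible, with no synchronization
        between sweeps.
        """
        n = len(b)
        x = np.zeros(n)                      # shared, unsynchronized state
        blocks = np.array_split(range(n), workers)

        def worker(rows):
            for _ in range(iters):
                for i in rows:
                    # read the current (possibly stale) values of the other entries
                    s = A[i, :] @ x - A[i, i] * x[i]
                    x[i] = (b[i] - s) / A[i, i]

        threads = [threading.Thread(target=worker, args=(blk,)) for blk in blocks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return x

    # Example: a small strictly diagonally dominant system converges to A^-1 b
    # even though the workers never synchronize their sweeps.
    n = 50
    rng = np.random.default_rng(0)
    A = rng.random((n, n)) + n * np.eye(n)
    b = rng.random(n)
    x = async_jacobi(A, b)
    assert np.allclose(A @ x, b, atol=1e-6)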
Abstract:
Communication and synchronization stand as the dual bottlenecks in the performance of parallel systems, and especially those that attempt to alleviate the programming burden by incurring overhead in these two domains. We formulate the notions of communicable memory and lazy barriers to help achieve efficient communication and synchronization. These concepts are developed in the context of BSPk, a toolkit library for programming networks of workstations (and other distributed memory architectures in general) based on the Bulk Synchronous Parallel (BSP) model. BSPk emphasizes efficiency in communication by minimizing local memory-to-memory copying, and in barrier synchronization by not forcing a process to wait unless it needs remote data. Both the message passing (MP) and distributed shared memory (DSM) programming styles are supported in BSPk. MP helps processes efficiently exchange short-lived unnamed data values, when the identity of either the sender or receiver is known to the other party. By contrast, DSM supports communication between processes that may be mutually anonymous, so long as they can agree on variable names in which to store shared temporary or long-lived data.
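BSPk's actual interface is not given in the abstract; the sketch below only illustrates the lazy-barrier idea as stated there: rather than blocking every process at the end of a superstep, a process blocks only at the point where it actually reads a remote value that has not yet arrived. Names are illustrative.

    import threading

    class LazyCell:
        """One remotely written value; a reader waits only if it actually needs it."""
        def __init__(self):
            self._ready = threading.Event()
            self._value = None

        def put(self, value):            # invoked when the remote write is delivered
            self._value = value
            self._ready.set()

        def get(self):                   # the only place a reader can block
            self._ready.wait()
            return self._value

    cell = LazyCell()
    threading.Timer(0.1, cell.put, args=(42,)).start()  # remote write arrives later
    # ... local computation proceeds without any barrier wait ...
    print(cell.get())  # blocks here, and only here, until the value is available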
Abstract:
We examine the question of whether to employ the first-come-first-served (FCFS) discipline or the processor-sharing (PS) discipline at the hosts in a distributed server system. We are interested in the case in which service times are drawn from a heavy-tailed distribution, and so have very high variability. Traditional wisdom when task sizes are highly variable would prefer the PS discipline, because it allows small tasks to avoid being delayed behind large tasks in a queue. However, we show that system performance can actually be significantly better under FCFS queueing, if each task is assigned to a host based on the task's size. By task assignment, we mean an algorithm that inspects incoming tasks and assigns them to hosts for service. The particular task assignment policy we propose is called SITA-E: Size Interval Task Assignment with Equal Load. Surprisingly, under SITA-E, FCFS queueing typically outperforms the PS discipline by a factor of about two, as measured by mean waiting time and mean slowdown (waiting time of task divided by its service time). We compare the FCFS/SITA-E policy to the processor-sharing case analytically; in addition we compare it to a number of other policies in simulation. We show that the benefits of SITA-E are present even in small-scale distributed systems (four or more hosts). Furthermore, SITA-E is a static policy that does not incorporate feedback knowledge of the state of the hosts, which allows for a simple and scalable implementation.
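A minimal sketch of the SITA-E idea as the abstract describes it: size cutoffs are chosen so that each host carries an equal share of the expected load, and every incoming task is routed purely by its size, with no feedback about host state. The cutoffs below are computed from an empirical sample of task sizes rather than from the analytic heavy-tailed distribution used in the paper; names and the Pareto parameters are illustrative.

    import bisect
    import numpy as np

    def sita_e_cutoffs(sample_sizes, hosts):
        """Size-interval cutoffs that split total work evenly across hosts.

        Sorts a sample of task sizes and places a cutoff where the cumulative
        work crosses each k/hosts fraction of the total, so every interval
        carries (roughly) the same expected load.
        """
        sizes = np.sort(np.asarray(sample_sizes, dtype=float))
        work = np.cumsum(sizes)
        targets = [work[-1] * k / hosts for k in range(1, hosts)]
        return [sizes[np.searchsorted(work, t)] for t in targets]

    def assign(task_size, cutoffs):
        """Route a task to a host index using only its size (static, no feedback)."""
        return bisect.bisect_left(cutoffs, task_size)

    # Example with heavy-tailed (Pareto) task sizes and 4 FCFS hosts.
    rng = np.random.default_rng(0)
    sample = rng.pareto(1.5, 100_000) + 1.0
    cuts = sita_e_cutoffs(sample, hosts=4)
    load = np.zeros(4)
    for size in rng.pareto(1.5, 100_000) + 1.0:
        load[assign(size, cuts)] += size
    # Each host now sees a narrow band of task sizes but (up to heavy-tailed
    # sampling noise) a comparable total load.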