977 results for Information complexity
Abstract:
Justification Logic studies epistemic and provability phenomena by introducing justifications/proofs into the language in the form of justification terms. Pure justification logics serve as counterparts of traditional modal epistemic logics, and hybrid logics combine epistemic modalities with justification terms. The computational complexity of pure justification logics is typically lower than that of the corresponding modal logics. Moreover, the so-called reflected fragments, which still contain complete information about the respective justification logics, are known to be in NP for a wide range of justification logics, pure and hybrid alike. This paper shows that, under reasonable additional restrictions, these reflected fragments are NP-complete, thereby proving a matching lower bound. The proof method is then extended to provide a uniform proof that the corresponding full pure justification logics are $\Pi^p_2$-hard, reproving and generalizing an earlier result by Milnikel.
Abstract:
Successful software systems cope with complexity by organizing classes into packages. However, a particular organization may be neither straightforward nor obvious for a given developer. As a consequence, classes can be misplaced, leading to duplicated code and ripple effects in which minor changes affect multiple packages. We claim that contextual information is the key to rearchitecting a system. Exploiting contextual information, we propose a technique to detect misplaced classes by analyzing how client packages access the classes of a given provider package. We define locality as a measure of the degree to which classes reused by common clients appear in the same package. We then use locality to guide a simulated annealing algorithm to obtain optimal placements of classes in packages. The result is the identification of classes that are candidates for relocation. We apply the technique to three applications and validate the usefulness of our approach via developer interviews.
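A rough sketch of what such a locality-guided simulated annealing search could look like follows; the locality definition, the data layout, and all names below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def locality(placement, usages):
    """Illustrative locality score: for each client package, the fraction of the
    classes it accesses that sit together in the most common provider package."""
    score = 0.0
    for client, accessed in usages.items():
        packages = [placement[c] for c in accessed]
        score += max(packages.count(p) for p in set(packages)) / len(packages)
    return score / len(usages)

def anneal(placement, usages, packages, steps=10_000, t0=1.0, cooling=0.999):
    """Simulated annealing: move one class at a time, accepting worse placements
    with a temperature-dependent probability to escape local optima."""
    current, best = dict(placement), dict(placement)
    t = t0
    for _ in range(steps):
        candidate = dict(current)
        candidate[random.choice(list(candidate))] = random.choice(packages)
        delta = locality(candidate, usages) - locality(current, usages)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if locality(current, usages) > locality(best, usages):
                best = dict(current)
        t *= cooling
    return best  # classes placed differently than in the input are relocation candidates
```

Comparing the returned placement with the original one identifies the classes that would be flagged as relocation candidates in this toy setting.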
Abstract:
Along with the growing complexity of logistics chains, the demand for transparency of information has increased. The use of intelligent RFID technology offers the possibility to optimize and control all capacities in use, since it enables the identification and tracking of goods along the entire supply chain. Every single product can be located at any given time, and a multitude of current and historical data can be transferred. The interaction of the flow of material and the flow of information between the various process steps can be optimized by using RFID technology, since it guarantees that all required data are available at the right time and at the right place. The local accessibility and convertibility of data allow a flexible, decentralised control of logistic systems. Additional advantages of RFID components are that they are individually writable and that they can be identified over considerable distances, even if there is no line of sight between tag and reader. The use of RFID transponders opens up new potential regarding process security, reduction of logistics costs, and availability of products. These advantages depend on the reliability of the identification processes. The undisputed potential made accessible by the use of RFID elements can only be beneficial when the decentralised information attached to goods and loading equipment can be reliably retrieved at the required points. The communication between tag and reader can be influenced by different materials, such as metal, that can disturb or complicate the radio contact. The reliability of this communication is the subject of various tests and experiments that analyse the effects of different filling materials as well as different alignments of tags on the loading equipment.
Abstract:
The development of path-dependent processes is basically attributed to positive feedback, in the form of increasing returns, as the main driving force of such processes. Furthermore, path dependence can be affected by context factors, such as different degrees of complexity. Up to now, it has been unclear whether and how different settings of complexity impact path-dependent processes and the probability of lock-in. In this paper we investigate the relationship between environmental complexity and path dependence by means of an experimental study. By focusing on the mode of information load and decision quality in chronological sequences, the study explores the impact of complexity on decision-making processes. The results contribute to both the development of path-dependence theory and a better understanding of decision-making behavior under conditions of positive feedback. Since previous path research has mostly applied qualitative case-study research and (to a lesser extent) simulations, this paper makes a further contribution by establishing an experimental approach for research on path dependence.
Abstract:
Simulation techniques are almost indispensable in the analysis of complex systems. Material and related information flow processes in logistics often possess such complexity. Further problems arise as the processes change over time and pose a Big Data problem as well. To cope with these issues, adaptive simulations are used more and more frequently. This paper presents a few relevant advanced simulation models and introduces a novel model structure, which unifies the modelling of geometrical relations and time processes. This way, the process structure and its geometric relations can be handled in a well understandable and transparent way. The capabilities and applicability of the model are also presented via a demonstrative example.
Abstract:
Species adapted to cold-climatic mountain environments are expected to face a high risk of range contractions, if not local extinctions under climate change. Yet, the populations of many endothermic species may not be primarily affected by physiological constraints, but indirectly by climate-induced changes of habitat characteristics. In mountain forests, where vertebrate species largely depend on vegetation composition and structure, deteriorating habitat suitability may thus be mitigated or even compensated by habitat management aiming at compositional and structural enhancement. We tested this possibility using four cold-adapted bird species with complementary habitat requirements as model organisms. Based on species data and environmental information collected in 300 1-km² grid cells distributed across four mountain ranges in central Europe, we investigated (1) how species’ occurrence is explained by climate, landscape, and vegetation, (2) to what extent climate change and climate-induced vegetation changes will affect habitat suitability, and (3) whether these changes could be compensated by adaptive habitat management. Species presence was modelled as a function of climate, landscape and vegetation variables under current climate; moreover, vegetation-climate relationships were assessed. The models were extrapolated to the climatic conditions of 2050, assuming the moderate IPCC scenario A1B, and changes in species’ occurrence probability were quantified. Finally, we assessed the maximum increase in occurrence probability that could be achieved by modifying one or multiple vegetation variables under altered climate conditions. Climate variables contributed significantly to explaining species occurrence, and expected climatic changes, as well as climate-induced vegetation trends, decreased the occurrence probability of all four species, particularly at the low-altitudinal margins of their distribution. These effects could be partly compensated by modifying single vegetation factors, but full compensation would only be achieved if several factors were changed in concert. The results illustrate the possibilities and limitations of adaptive species conservation management under climate change.
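As an illustration of the kind of workflow described (occurrence modelled from climate and vegetation predictors, extrapolated to 2050, then re-optimised over a vegetation variable), here is a minimal sketch; the use of logistic regression, the predictor layout, and all numbers are assumptions for illustration only, not the study's models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row per grid cell: climate, landscape and vegetation predictors
# (column 0 = a climate variable, column 4 = a vegetation variable; illustrative only).
X_current = rng.normal(size=(300, 5))
presence = rng.integers(0, 2, size=300)        # observed presence/absence of a species

model = LogisticRegression().fit(X_current, presence)

# Extrapolate to projected 2050 conditions (here: a warmer climate predictor).
X_2050 = X_current.copy()
X_2050[:, 0] += 1.0

p_now = model.predict_proba(X_current)[:, 1]
p_2050 = model.predict_proba(X_2050)[:, 1]
print("mean change in occurrence probability:", (p_2050 - p_now).mean())

# Adaptive management: for each cell, find the vegetation value that maximises
# occurrence probability under the 2050 climate and measure the possible gain.
best = p_2050.copy()
for value in np.linspace(-2, 2, 21):
    X_try = X_2050.copy()
    X_try[:, 4] = value
    best = np.maximum(best, model.predict_proba(X_try)[:, 1])
print("mean compensable gain:", (best - p_2050).mean())
```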
Abstract:
This book attempts to synthesize research that contributes to a better understanding of how to reach sustainable business value through information systems (IS) outsourcing. Important topics in this realm are how IS outsourcing can contribute to innovation, how it can be dynamically governed, how to cope with its increasing complexity through multi-vendor arrangements, how service quality standards can be met, how corporate social responsibility can be upheld, and how to cope with increasing demands of internationalization and new sourcing models, such as crowdsourcing and platform-based cooperation. These issues are viewed from either the client or vendor perspective, or both. The book should be of interest to all academics and students in the fields of Information Systems, Management, and Organization as well as corporate executives and professionals who seek a more profound analysis and understanding of the underlying factors and mechanisms of outsourcing.
Abstract:
Most commercial project management software packages include planning methods to devise schedules for resource-constrained projects. Since the planning methods that are implemented are proprietary information of the software vendors, the question arises how the software packages differ in quality with respect to their resource-allocation capabilities. We experimentally evaluate the resource-allocation capabilities of eight recent software packages by using 1,560 instances with 30, 60, and 120 activities from the well-known PSPLIB library. In some of the analyzed packages, the user may influence the resource allocation by means of multi-level priority rules, whereas in other packages, only a few options can be chosen. We study the impact of various complexity parameters and priority rules on the project duration obtained by the software packages. The results indicate that the resource-allocation capabilities of these packages differ significantly. In general, the relative gap between the packages gets larger with increasing resource scarcity and with an increasing number of activities. Moreover, the selection of the priority rule has a considerable impact on the project duration. Surprisingly, when selecting a priority rule in the packages where this is possible, both the mean and the variance of the project duration are in general worse than for the packages which do not offer the selection of a priority rule.
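The abstract refers to priority-rule-based resource allocation; the following minimal sketch of a serial schedule generation scheme with a single priority rule illustrates the general idea. The data layout, the tiny instance, and the chosen rule are assumptions for illustration, not the packages' actual implementations:

```python
def serial_sgs(durations, demands, capacity, predecessors, priority):
    """Serial schedule generation scheme: repeatedly pick the eligible activity
    with the best priority value and start it at the earliest time that
    respects precedence and the renewable resource capacity."""
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)          # resource usage per time slot
    start = {}
    unscheduled = set(durations)
    while unscheduled:
        eligible = [a for a in unscheduled if all(p in start for p in predecessors[a])]
        act = min(eligible, key=priority)                 # e.g. shortest duration first
        earliest = max((start[p] + durations[p] for p in predecessors[act]), default=0)
        t = earliest
        while any(usage[u] + demands[act] > capacity
                  for u in range(t, t + durations[act])):
            t += 1                                        # delay until the resource fits
        for u in range(t, t + durations[act]):
            usage[u] += demands[act]
        start[act] = t
        unscheduled.remove(act)
    return start

# Illustrative 4-activity instance with one renewable resource of capacity 2.
durations = {"A": 2, "B": 3, "C": 2, "D": 1}
demands = {"A": 1, "B": 2, "C": 1, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
schedule = serial_sgs(durations, demands, 2, predecessors, priority=lambda a: durations[a])
makespan = max(schedule[a] + durations[a] for a in schedule)
```

Swapping the `priority` function (latest finish time, minimum slack, and so on) is the kind of choice the evaluated packages expose to the user, and it changes the resulting makespan.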
Abstract:
Many studies have obtained reliable individual differences in speed of information processing (SIP) as measured by elementary cognitive tasks (ECTs). ECTs usually employ response times (RT) as a measure of SIP, but different ECTs target different cognitive processes (e.g., simple or choice reaction, inhibition). Here we used modified versions of the Hick and the Eriksen Flanker tasks to examine whether these tasks assess dissociable or common aspects of SIP. In both tasks, task complexity was systematically varied across three levels. RT data were collected from 135 participants. Applying fixed-links modeling, RT variance increasing with task complexity was separated from RT variance unchanging across conditions. For each task, these aspects of variance were represented by two independent latent variables. The two latent variables representing RT variance that did not vary with complexity in the two tasks were virtually identical (r = .83). The latent variables representing increasing complexity in the two tasks were also highly correlated (r = .72) but clearly dissociable. Thus, RT measures contain both task-unspecific, person-related aspects of SIP as well as task-specific aspects reflecting the cognitive processes manipulated by the respective task. Separating these aspects of SIP facilitates the interpretation of individual differences in RT.
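To make the fixed-links idea concrete, the following toy simulation generates reaction times from a task-unspecific component with constant loadings and a second component whose loadings grow with task complexity; it illustrates the model structure only and is not the authors' analysis, and all numeric values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 135                                    # participants, as in the study

# Two person-level components: one task-unspecific (constant across conditions)
# and one whose contribution grows with task complexity.
constant = rng.normal(0.30, 0.05, n)       # baseline RT in seconds (illustrative)
increasing = rng.normal(0.08, 0.02, n)     # extra RT per complexity step (illustrative)

# Fixed-links idea: loadings on the constant factor are fixed to 1 for every
# condition; loadings on the increasing factor are fixed to grow (here 1, 2, 3).
loadings = np.array([1, 2, 3])
noise = rng.normal(0, 0.02, (n, 3))
rt = constant[:, None] + increasing[:, None] * loadings + noise

# Variance across participants rises with complexity because the second
# component contributes more in the more complex conditions.
print(np.var(rt, axis=0))
```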
Abstract:
The logic PJ is a probabilistic logic defined by adding (noniterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, which is $\Pi^p_2$-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model hides the complexity and variability of these systems and enables abstraction from non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at run time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
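A minimal sketch of the agent idea described above (an agent periodically probes a resource, updates a simple information-model entry, and reports detected changes); all class and attribute names are illustrative assumptions, not the article's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class ManagedResource:
    """Simplified information-model entry for one runtime resource."""
    name: str
    attributes: dict = field(default_factory=dict)

class MonitoringAgent:
    """Probes a resource, updates its model entry, and reports detected changes."""
    def __init__(self, resource: ManagedResource, probe):
        self.resource = resource
        self.probe = probe                      # callable returning current attribute values

    def poll(self) -> dict:
        current = self.probe()
        changes = {k: v for k, v in current.items()
                   if self.resource.attributes.get(k) != v}
        self.resource.attributes.update(current)
        return changes                          # e.g. forwarded to a management layer

# Illustrative use: monitor a (hypothetical) application server instance.
server = ManagedResource("appserver-node-1")
agent = MonitoringAgent(server, probe=lambda: {"heap_used_mb": 512, "threads": 43})
print(agent.poll())                             # first poll reports every attribute as changed
```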
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, aimed at performing as much work as possible at compile time. The approach is based on the knowledge of certain properties regarding the run-time instantiations of program variables (sharing and freeness) for which compile-time technology is available, with new approaches being currently proposed. Thus, the paper does not deal with the analysis itself, but rather with how the analysis results can be used to parallelize programs.
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, presumably because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, attempting to perform as much work as possible at compile time. The approach is based on the knowledge of certain properties about run-time instantiations of program variables (sharing and freeness) for which compile-time technology is available, with new approaches being currently proposed.
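As a toy illustration of the distinction behind such run-time checks, the sketch below contrasts a strict-independence test (shared variables must be ground) with a relaxed test in the spirit of non-strict independence (shared free variables are tolerated if the left goal is known not to bind them). The representation is deliberately simplified and is not the paper's actual formulation:

```python
def strictly_independent(goal1_vars, goal2_vars, ground_vars):
    """Strict independence (SI): any variable shared by the goals must be ground."""
    shared = goal1_vars & goal2_vars
    return all(v in ground_vars for v in shared)

def non_strictly_independent(goal1_vars, goal2_vars, ground_vars, bound_by_goal1):
    """Relaxed, NSI-style check: shared free variables are tolerated
    as long as the left goal is known not to bind them."""
    shared = goal1_vars & goal2_vars
    return all(v in ground_vars or v not in bound_by_goal1 for v in shared)

# X is shared and free, but the left goal never binds it: SI fails, the NSI-style check passes.
g1, g2 = {"X", "Y"}, {"X", "Z"}
print(strictly_independent(g1, g2, ground_vars=set()))                     # False
print(non_strictly_independent(g1, g2, set(), bound_by_goal1={"Y"}))       # True
```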