893 results for Simplicity
Abstract:
“In the midst of order, there is chaos; but in the midst of chaos, there is order”, John Gribbin wrote in his book Deep Simplicity (p. 76). In this dialectical spirit, we discuss the generative tension between complexity and simplicity in the theory and practice of management and organization. Complexity theory suggests that the relationship between complex environments and complex organizations advanced by Ashby's well-known law of requisite variety may be reconsidered: only a simple organization provides enough space for individual agency to match environmental turbulence in the form of complex organizational responses. We suggest that complex organizing may, paradoxically, be facilitated by a simple infrastructure, and that the theory of organizations may be viewed as resulting from the interplay between simplicity and complexity.
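Since the abstract invokes Ashby's law without stating it, a standard textbook formulation (not quoted from the paper) may help: only variety in the regulator can absorb variety in the environment.

```latex
% Ashby's law of requisite variety (standard textbook form, not from the paper):
% a regulator R can keep outcomes within bounds only if its variety
% at least matches the variety of the disturbances D it must absorb.
V(R) \ge V(D)
% Equivalently, in entropy terms, the residual uncertainty about outcomes
% is bounded below by the disturbance entropy minus the regulator entropy:
H(\mathrm{outcome}) \ge H(D) - H(R)
```

On this reading, the abstract's claim is that a simple infrastructure leaves individual agents free to generate the requisite response variety, rather than the organization having to encode that variety structurally.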
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of one variable from the values of others, as in linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules stating that a certain phenomenon never happens, or that certain conditions necessitate others. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of accuracy and complexity levels. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that deciding whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that attains a given value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
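To make the hardness claim concrete, here is a minimal sketch (my own illustration, not the authors' construction): exhaustively searching for the smallest set of regressors attaining a target R² examines C(p, k) subsets at each size k, which grows exponentially in p.

```python
# Hypothetical sketch: brute-force search for the smallest set of regressors
# reaching a target R^2. The exponential number of subsets examined
# illustrates why the problem is computationally hard in general.
import itertools
import numpy as np

def best_subset_r2(X, y, target_r2):
    """Return the smallest subset of columns of X whose OLS fit reaches target_r2."""
    n, p = X.shape
    tss = np.sum((y - y.mean()) ** 2)                    # total sum of squares
    for k in range(1, p + 1):
        for cols in itertools.combinations(range(p), k):  # C(p, k) subsets
            Xk = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
            rss = np.sum((y - Xk @ beta) ** 2)            # residual sum of squares
            if 1 - rss / tss >= target_r2:
                return cols
    return None

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 2] - 0.5 * X[:, 7] + rng.normal(scale=0.3, size=200)
print(best_subset_r2(X, y, target_r2=0.9))  # expected: (2, 7)
```

The brute force above is exponential in the number of variables; the abstract's formal contribution is that the underlying decision problem is computationally hard, so no general shortcut should be expected.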
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objectives, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting against overfitting, and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinion that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
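As a generic illustration of the underfitting/overfitting trade-off described here (not the authors' workflow; the data, model family, and scoring are my own choices), one can compare occurrence models of increasing flexibility on simulated presence/absence data:

```python
# Hedged sketch (generic illustration, not the paper's method): compare
# occurrence-environment models of increasing flexibility on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(300, 1))             # one environmental gradient
true_p = 1 / (1 + np.exp(-(1.5 * x - x ** 2)))    # unimodal niche response
y = rng.binomial(1, true_p).ravel()               # presence/absence records

for degree in (1, 2, 8):                          # underfit / matched / overfit
    X = PolynomialFeatures(degree).fit_transform(x)
    model = LogisticRegression(C=1e6, max_iter=5000)   # ~unpenalized fit
    score = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
    print(degree, round(score, 3))
# Expectation: degree 2 (matching the true response shape) scores best;
# degree 1 cannot express the unimodal response, degree 8 spends
# parameters fitting noise.
```

Cross-validated scores of this kind are one pragmatic way of letting the data and the study objective constrain model complexity, in the spirit of the abstract's recommendation.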
Abstract:
Monograph entitled 'The debate on language acquisitions: constructivism versus innatism'. Abstract based on that of the publication.
Abstract:
What is the relationship between magnitude judgments relying on directly available characteristics versus probabilistic cues? Question frame was manipulated in a comparative judgment task previously assumed to involve inference across a probabilistic mental model (e.g., “which city is largest” – the “larger” question – versus “which city is smallest” – the “smaller” question). Participants identified either the largest or smallest city (Experiments 1a, 2) or the richest or poorest person (Experiment 1b) in a three-alternative forced-choice (3-AFC) task (Experiment 1) or a 2-AFC task (Experiment 2). Response times revealed an interaction between question frame and the number of options recognized. When asked the smaller question, response times were shorter when none of the options were recognized; the opposite pattern held for the larger question, where response times were shorter when all options were recognized. These task-stimulus congruity results in judgment under uncertainty are consistent with, and predicted by, theories of magnitude comparison that make use of deductive inferences from declarative knowledge.
Abstract:
Grassroots innovations (GIs) are promising examples of the deliberate transformation of socio-technical systems towards resilience and sustainability. However, evidence is needed on the factors that limit or enable their success. This paper studies how GIs use narratives to empower innovation in the face of incumbent socio-technical regimes. Institutional documents were comparatively analyzed to assess how narratives influence the structure, forms of action, and external interactions of two Italian grassroots networks, Bilanci di Giustizia and Transition Network Italy. The paper finds an internal consistency between narratives and strategies for each of the two networks. It also highlights core similarities, but also significant differences, in the ethical basis of the two narratives and in their organizations and strategies. Such differences determine different forms of innovation empowerment and expose the niche to different potentials for transforming incumbent regimes, or to the risk of being co-opted by them.
Abstract:
Complex networks can be understood as graphs whose connectivity properties deviate from those of regular or near-regular graphs, which are understood as being "simple". While a great deal of the attention so far dedicated to complex networks has been duly driven by the "complex" nature of these structures, in this work we address the identification of their simplicity. The basic idea is to seek subgraphs whose nodes exhibit similar measurements. This approach paves the way for complementing the characterization of networks, yielding results which suggest that protein-protein interaction networks, and to a lesser extent also the Internet, may be getting simpler over time.
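A minimal sketch of the stated basic idea (the choice of node measurements, the similarity tolerance, and the grouping rule are my assumptions, not the paper's method):

```python
# Hedged sketch: find subgraphs whose nodes have similar local measurements.
# Measurement choices (degree, clustering) and the tolerance are mine.
import networkx as nx
import numpy as np

def simple_subgraphs(G, tol=0.1):
    """Group nodes with similar (degree, clustering) vectors and return the
    connected induced subgraphs of each group."""
    feats = {n: np.array([G.degree(n), nx.clustering(G, n)]) for n in G}
    groups = []
    for n, f in feats.items():
        for g in groups:                      # greedy grouping by similarity
            if np.linalg.norm(f - feats[g[0]]) <= tol * (1 + np.linalg.norm(f)):
                g.append(n)
                break
        else:
            groups.append([n])
    subs = []
    for g in groups:
        H = G.subgraph(g)                     # induced subgraph of one group
        subs.extend(H.subgraph(c) for c in nx.connected_components(H) if len(c) > 2)
    return subs

G = nx.barabasi_albert_graph(200, 2, seed=0)
for H in simple_subgraphs(G):
    print(len(H), "nodes with near-identical degree/clustering")
```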
Abstract:
If quantum interference patterns in the hearts of polycyclic aromatic hydrocarbons (PAHs) could be isolated and manipulated, then a significant step towards realizing the potential of single-molecule electronics would be achieved. Here we demonstrate experimentally and theoretically that a simple, parameter-free, analytic theory of interference patterns evaluated at the mid-point of the HOMO-LUMO gap (referred to as M-functions) correctly predicts conductance ratios of molecules with pyrene, naphthalene, anthracene, anthanthrene or azulene hearts. M-functions provide new design strategies for identifying molecules with phase-coherent logic functions and enhancing the sensitivity of molecular-scale interferometers.
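A numerical sketch of the mid-gap evaluation described here, assuming a nearest-neighbour tight-binding (Hückel) model whose HOMO-LUMO gap is centred at E = 0; the site labelling and the squared-ratio form of the conductance ratio are my assumptions, not the paper's exact formulation:

```python
# Hedged sketch (assumptions, not the paper's exact theory): evaluate the
# mid-gap Green's function of a tight-binding model of naphthalene and
# form conductance ratios from its matrix elements.
import numpy as np

# Naphthalene carbon skeleton: two fused hexagons, sites 0-9, with the
# fusion bond between sites 4 and 9 (site labels are my own convention).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 9), (9, 0),
         (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]
N = 10
H = np.zeros((N, N))
for i, j in edges:
    H[i, j] = H[j, i] = -1.0          # hopping t = 1 (energies in units of t)

E_mid = 0.0                            # mid-gap energy for this bipartite lattice
G = np.linalg.inv(E_mid * np.eye(N) - H)

def ratio(i, j, l, m):
    """Weak-coupling picture: conductance between electrodes at sites (i, j)
    scales as |G_ij|^2, so conductance ratios reduce to squared ratios of
    mid-gap Green's function elements."""
    return (G[i, j] / G[l, m]) ** 2

print(ratio(1, 6, 2, 7))               # example pair of connection-site choices
```

The point of the parameter-free, analytic M-function theory the abstract describes is that ratios of this kind can be predicted directly from the molecular connectivity, without fitting any material-specific parameters.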