17 results for Task complexity

in Helda - Digital Repository of the University of Helsinki


Relevance:

30.00%

Publisher:

Abstract:

Reuse of existing carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse on the levels of both architectural design and concrete implementation. Unfortunately, due to frameworks' complexity, which typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task-orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies. Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of framework reuse interface specification becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested through implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify a reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.
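
The abstract does not show what a generated task list looks like in practice. As a purely hypothetical illustration of the general idea, deriving an ordered list of programming tasks from a dependency graph over pattern roles, the sketch below topologically sorts a toy role-dependency graph; the role names, the task wording and the flat dictionary representation are invented for the example and are not JavaFrames' actual pattern model.

    from graphlib import TopologicalSorter

    # Toy dependency graph over pattern roles: each role maps to the roles it
    # depends on (invented example, not JavaFrames' actual representation).
    role_dependencies = {
        "SubclassFrameworkBase": [],
        "OverrideHookMethod": ["SubclassFrameworkBase"],
        "RegisterSubclassWithFactory": ["SubclassFrameworkBase"],
        "BindListenerToHook": ["OverrideHookMethod"],
    }

    # Any dependency-respecting order yields one possible task list for the developer.
    for role in TopologicalSorter(role_dependencies).static_order():
        print(f"Task: provide an implementation for role '{role}'")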

Relevance:

20.00%

Publisher:

Abstract:

The point of departure in this dissertation was the practical safety problem of unanticipated, unfamiliar events and unexpected changes in the environment, the demanding situations which the operators should take care of in complex socio-technical systems. The aim of this thesis was to increase the understanding of demanding situations and of the resources for coping with these situations by presenting a new construct, a conceptual model called Expert Identity (ExId), as a way to open up new solutions to the problem of demanding situations, and by testing the model in empirical studies on operator work. The premises of the Core-Task Analysis (CTA) framework were adopted as a starting point: core-task oriented working practices promote system efficiency (including safety, productivity and well-being targets) and should therefore be supported. The negative effects of stress were summarised, and the possible countermeasures related to the operators' personal resources, such as experience, expertise, sense of control, and conceptions of work and self, were considered. ExId was proposed as a way to bring emotional-energetic depth into the work analysis and to supplement CTA-based practical methods to discover development challenges and to contribute to the development of complex socio-technical systems. The potential of ExId to promote understanding of operator work was demonstrated in the context of six empirical studies on operator work. Each of these studies had its own practical objectives within its correspondingly broad focus. The concluding research questions were: 1) Are the assumptions made in ExId on the basis of the different theories and previous studies supported by the empirical findings? 2) Does the ExId construct promote understanding of operator work in empirical studies? 3) What are the strengths and weaknesses of the ExId construct? The layers and assumptions of the development of expert identity gained support from the empirical findings. The new conceptual model worked as a part of an analysis of different kinds of data, as a part of different methods used for different purposes, in different work contexts. The results showed that the operators had problems in taking care of the core task resulting from the discrepancy between the demands and resources (either personal or external). The changes of work, the difficulties in reaching the real content of work in the organisation, and the limits of the practical means of support had complicated the problem and limited the possibilities of the development actions within the case organisations. Personal resources seemed to be sensitive to the changes; adaptation was taking place, but not deeply or quickly enough. Furthermore, the results showed several characteristics of the studied contexts that complicated the operators' possibilities to grow into or with the demands and to develop practices, expertise and expert identity matching the core task. These were: discontinuation of the work demands; discrepancy between the conceptions of work held in other parts of the organisation, the visions, and the reality faced by the operators; and an emphasis on individual efforts and situational solutions. The potential of ExId to open up new paths to solving the problem of demanding situations and its ability to enable studies on practices in the field were considered in the discussion. The results were interpreted as promising enough to encourage further studies on ExId.
In particular, this dissertation aims to contribute to supporting workers in recognising changing demands and their own possibilities for growing with them, with the goal of supporting human performance in complex socio-technical systems, both in designing the systems and in solving existing problems.

Relevance:

20.00%

Publisher:

Abstract:

Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory. Nevertheless, in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is called a retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of the STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, like temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. In another study it was demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access to them is too slow.
These findings imply a change in thinking about the design of interfaces. Several novel design principles are presented, based on the idea of supporting the deployment of LTWM in the main task.

Relevance:

20.00%

Publisher:

Abstract:

Failures in industrial organizations dealing with hazardous technologies can have widespread consequences for the safety of the workers and the general population. Psychology can have a major role in contributing to the safe and reliable operation of these technologies. Most current models of safety management in complex sociotechnical systems such as nuclear power plant maintenance are either non-contextual or based on an overly-rational image of an organization. Thus, they fail to grasp either the actual requirements of the work or the socially-constructed nature of the work in question. The general aim of the present study is to develop and test a methodology for contextual assessment of organizational culture in complex sociotechnical systems. This is done by demonstrating the findings that the application of the emerging methodology produces in the domain of maintenance of a nuclear power plant (NPP). The concepts of organizational culture and organizational core task (OCT) are operationalized and tested in the case studies. We argue that when the complexity of the work, technology and social environment is increased, the significance of the most implicit features of organizational culture as a means of coordinating the work and achieving safety and effectiveness of the activities also increases. For this reason a cultural perspective could provide additional insight into the problem of safety management. The present study aims to determine: (1) the elements of the organizational culture in complex sociotechnical systems; (2) the demands the maintenance task sets for the organizational culture; (3) how the current organizational culture at the case organizations supports the perception and fulfilment of the demands of the maintenance work; (4) the similarities and differences between the maintenance cultures at the case organizations; and (5) the necessary assessment of the organizational culture in complex sociotechnical systems. Three in-depth case studies were carried out at the maintenance units of three Nordic NPPs. The case studies employed an iterative and multimethod research strategy. The following methods were used: interviews, CULTURE-survey, seminars, document analysis and group work. Both cultural analysis and task modelling were carried out. The results indicate that organizational culture in complex sociotechnical systems can be characterised according to three qualitatively different elements: structure, internal integration and conceptions. All three of these elements of culture, as well as their interrelations, have to be considered in organizational assessments, or important aspects of the organizational dynamics will be overlooked. On the basis of OCT modelling, the maintenance core task was defined as balancing between three critical demands: anticipating the condition of the plant and conducting preventive maintenance accordingly, reacting to unexpected technical faults, and monitoring and reflecting on the effects of maintenance actions and the condition of the plant. The results indicate that safety was highly valued at all three plants, and in that sense they all had strong safety cultures. In other respects the cultural features were quite different, and thus the culturally-accepted means of maintaining high safety also differed. The handicraft nature of maintenance work was emphasised as a source of identity at the NPPs.
Overall, the importance of safety was taken for granted, but the cultural norms concerning the appropriate means to guarantee it were seldom reflected upon. A sense of control, personal responsibility and organizational changes emerged as challenging issues at all the plants. The study shows that in complex sociotechnical systems it is both necessary and possible to analyse the safety and effectiveness of the organizational culture. Safety in complex sociotechnical systems cannot be understood or managed without understanding the demands of the organizational core task and managing the dynamics between the three elements of the organizational culture.

Relevance:

20.00%

Publisher:

Abstract:

Metabolism is the cellular subsystem responsible for generation of energy from nutrients and production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it with real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software ReMatch intended for reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
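
The abstract does not spell out how gaplessness is formalized. As a rough, hedged illustration of the underlying idea (a reaction is considered usable only once every one of its substrates can itself be produced), the sketch below propagates producibility from a set of seed nutrients to a fixpoint over a toy reaction network; the reaction and metabolite names are invented for the example, and the thesis's actual formulation is an optimization problem rather than this simple reachability check.

    def producible(reactions, seeds):
        """Forward-propagate producibility: a reaction fires only when all of its
        substrates are available, and its products then become available too.
        Returns the producible metabolites and the usable ('gapless') reactions."""
        available = set(seeds)
        usable = set()
        changed = True
        while changed:
            changed = False
            for name, (substrates, products) in reactions.items():
                if name not in usable and substrates <= available:
                    usable.add(name)
                    available |= products
                    changed = True
        return available, usable

    # Toy network (invented): R2 has a gap, because metabolite D is never produced.
    reactions = {
        "R1": ({"glucose"}, {"A"}),
        "R2": ({"A", "D"}, {"B"}),
        "R3": ({"A"}, {"C"}),
    }
    print(producible(reactions, seeds={"glucose"}))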

Relevance:

20.00%

Publisher:

Abstract:

Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis and with the first two model classes we also discuss how to compute exact rational number solutions.
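
The abstract does not describe the algorithms themselves. As a hedged illustration of the kind of computation involved, the sketch below evaluates the NML normalizing sum (the parametric complexity) of a multinomial model with K categories and sample size n using the published linear-time recurrence of Kontkanen and Myllymäki; the function name and the use of plain floating point (rather than the exact rational arithmetic mentioned in the abstract) are assumptions made for the example.

    import math

    def multinomial_nml_normalizer(K: int, n: int) -> float:
        """Normalizing sum C(K, n) of the NML distribution for a multinomial
        model with K categories and n observations, in plain floating point."""
        if n == 0 or K == 1:
            return 1.0
        # Base case K = 2: sum the maximized likelihoods over all binomial splits.
        c_prev = 1.0                                     # C(1, n)
        c_curr = sum(math.comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
                     for h in range(n + 1))              # C(2, n)
        # Linear-time recurrence: C(K, n) = C(K-1, n) + n / (K-2) * C(K-2, n).
        for k in range(3, K + 1):
            c_prev, c_curr = c_curr, c_curr + n / (k - 2) * c_prev
        return c_curr

    # The NML code length is -log(maximized likelihood) + log(C(K, n));
    # only the model-complexity term is shown here.
    print(math.log(multinomial_nml_normalizer(K=4, n=100)))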

Relevance:

20.00%

Publisher:

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra.

The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
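
To make the Boolean setting concrete, the following minimal sketch (a generic illustration, not one of the thesis's algorithms) computes the Boolean product of two binary factor matrices and counts how many entries of a given matrix the product fails to reproduce; the toy matrices are invented.

    import numpy as np

    def boolean_product(U, V):
        """Boolean matrix product: entry (i, j) is 1 iff U[i, k] = V[k, j] = 1 for some k."""
        return (U.astype(int) @ V.astype(int) > 0).astype(int)

    def reconstruction_error(A, U, V):
        """Number of entries where the Boolean product of U and V disagrees with A."""
        return int(np.sum(boolean_product(U, V) != A))

    # Toy 4x4 binary matrix and a Boolean rank-2 factorization (invented data).
    A = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 0, 1, 1],
                  [1, 1, 1, 1]])
    U = np.array([[1, 0], [1, 1], [0, 1], [1, 1]])
    V = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
    print(boolean_product(U, V))
    print("errors:", reconstruction_error(A, U, V))   # 0: this factorization is exact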

Relevance:

20.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine if the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
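
As a generic, hedged sketch of the kind of confidence-based delegation and post-training trade-off control the abstract alludes to (not the thesis's actual framework), the code below lets a cheap classifier handle an input only when its confidence clears a threshold and otherwise delegates to a more expensive one; the two-stage structure, the placeholder models and all names are assumptions for illustration. Raising the threshold buys accuracy at the price of more expensive calls, which is the trade-off knob.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    Prediction = Tuple[str, float]   # (label, confidence in [0, 1])

    @dataclass
    class DelegatingClassifier:
        """Two-stage cascade: answer with the cheap stage when it is confident
        enough, otherwise delegate to the expensive stage."""
        cheap: Callable[[dict], Prediction]
        expensive: Callable[[dict], Prediction]
        threshold: float = 0.8       # higher threshold -> more delegation -> more effort

        def classify(self, x: dict) -> Tuple[str, bool]:
            label, confidence = self.cheap(x)
            if confidence >= self.threshold:
                return label, False            # handled cheaply
            return self.expensive(x)[0], True  # delegated, extra effort spent

    # Placeholder stand-ins for real feature-based classifiers.
    def cheap_model(x: dict) -> Prediction:
        return ("object", 0.9) if x["edge_density"] > 0.5 else ("background", 0.55)

    def expensive_model(x: dict) -> Prediction:
        return ("object", 0.99) if x["detailed_score"] > 0.4 else ("background", 0.97)

    cascade = DelegatingClassifier(cheap_model, expensive_model, threshold=0.8)
    inputs = [{"edge_density": 0.7, "detailed_score": 0.9},
              {"edge_density": 0.3, "detailed_score": 0.2}]
    results = [cascade.classify(x) for x in inputs]
    print(results, "expensive calls:", sum(delegated for _, delegated in results))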

Relevance:

20.00%

Publisher:

Abstract:

"The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life. For this task it has a rudimentary nervous system. When it finds its spot and takes root, it doesn't need its brain any more, so it eats it. It's rather like getting tenure." (Daniel C. Dennett, Consciousness Explained, 1991)

The little sea squirt needs its brain for a task that is very simple and short. When the task is completed, the sea squirt starts a new life in a vegetative state, after having a nourishing meal. The little brain is more tightly structured than our massive primate brains. The number of neurons is exact; no leeway in neural proliferation is tolerated. Each neuroblast migrates exactly to the correct position, and only a certain number of connections with the right companions is allowed. In comparison, growth of a mammalian brain is a merry mess. The reason is obvious: the squirt brain needs to perform only a few predictable functions before becoming waste. The more mobile and complex mammals engage their brains in tasks requiring quick adaptation and plasticity in a constantly changing environment. Although the regulation of nervous system development varies between species, many regulatory elements remain the same. For example, all multicellular animals possess a collection of proteoglycans (PGs): proteins with attached, complex sugar chains called glycosaminoglycans (GAGs). In development, PGs participate in the organization of the animal body, such as in the construction of parts of the nervous system. The PGs capture water with their GAG chains, forming a biochemically active gel at the surface of the cell, and in the extracellular matrix (ECM). In the nervous system, this gel traps inside it different molecules: growth factors and ECM-associated proteins. They regulate the proliferation of neural stem cells (NSCs), guide the migration of neurons, and coordinate the formation of neuronal connections. In this work I have followed the role of two molecules contributing to the complexity of mammalian brain development. N-syndecan is a transmembrane heparan sulfate proteoglycan (HSPG) with cell signaling functions. Heparin-binding growth-associated molecule (HB-GAM) is an ECM-associated protein with high expression in the perinatal nervous system, and high affinity to HS and heparin. N-syndecan is a receptor for several growth factors and for HB-GAM. HB-GAM induces specific signaling via N-syndecan, activating c-Src, calcium/calmodulin-dependent serine protein kinase (CASK) and cortactin. By studying the gene knockouts of HB-GAM and N-syndecan in mice, I have found that HB-GAM and N-syndecan are involved as a receptor-ligand pair in neural migration and differentiation. HB-GAM competes with the growth factors fibroblast growth factor (FGF)-2 and heparin-binding epidermal growth factor (HB-EGF) in HS-binding, causing NSCs to stop proliferation and to differentiate, and affects HB-EGF-induced EGF receptor (EGFR) signaling in neural cells during migration. N-syndecan signaling affects the motility of young neurons, by boosting EGFR-mediated cell migration. In addition, these two receptors form a complex at the surface of the neurons, probably creating a motility-regulating structure.

Relevance:

20.00%

Publisher:

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(nlog(n)), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
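
The abstract attributes the loglinear running time to dimensionality reduction and efficient data structures without describing the exact comparison scheme. As a loose, hedged illustration of how tree-based neighbour search keeps candidate linking near O(n log n), the sketch below maps every observation set to a hypothetical low-dimensional feature vector and proposes candidate pairs within a fixed radius using a k-d tree; the three-dimensional placeholder features and the radius are invented, and each surviving pair would still have to be verified by full statistical orbital inversion.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Hypothetical reduced "addresses": one low-dimensional point per observation
    # set (placeholder values; a real reduction would be derived from the orbits).
    n_sets = 10_000
    addresses = rng.uniform(size=(n_sets, 3))

    # Building the tree costs O(n log n); a fixed-radius pair query stays close to
    # loglinear as long as each query returns only a handful of candidates.
    tree = cKDTree(addresses)
    candidate_pairs = tree.query_pairs(r=0.01)

    print(len(candidate_pairs), "candidate linkages to verify by orbital inversion")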

Relevance:

20.00%

Publisher:

Abstract:

"The functional organization of auditory cortex (AC) is still poorly understood. Previous studies suggest segregation of auditory processing streams for spatial and nonspatial information located in the posterior and anterior AC, respectively (Rauschecker and Tian, 2000; Arnott et al., 2004; Lomber and Malhotra, 2008). Furthermore, previous studies have shown that active listening tasks strongly modulate AC activations (Petkov et al., 2004; Fritz et al., 2005; Polley et al., 2006). However, the task dependence of AC activations has not been systematically investigated. In the present study, we applied high-resolution functional magnetic resonance imaging of the AC and adjacent areas to compare activations during pitch discrimination and n-back pitch memory tasks that were varied parametrically in difficulty. We found that anterior AC activations were increased during discrimination but not during memory tasks, while activations in the inferior parietal lobule posterior to the AC were enhanced during memory tasks but not during discrimination. We also found that wide areas of the anterior AC and anterior insula were strongly deactivated during the pitch memory tasks. While these results are consistent with the proposition that the anterior and posterior AC belong to functionally separate auditory processing streams, our results show that this division is present also between tasks using spatially invariant sounds. Together, our results indicate that activations of human AC are strongly dependent on the characteristics of the behavioral task."

Relevance:

20.00%

Publisher:

Abstract:

We have presented an overview of the FSIG approach and related FSIG grammars to issues of very low complexity and parsing strategy. We ended up seriously optimistic that most FSIG grammars could be decomposed in a reasonable way and then processed efficiently.

Relevance:

20.00%

Publisher:

Abstract:

A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
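
The round complexities quoted above refine the classic Cole-Vishkin colour-reduction technique for directed cycles. As background only (not the thesis's improved algorithm or its exact constants), the hedged sketch below simulates synchronous rounds in which every node compares its colour with its predecessor's, locates the lowest bit position where the two differ, and adopts that position together with its own bit value as its new colour; one such round shrinks a k-colouring to O(log k) colours, and iterating it reaches at most 6 colours, after which further techniques are needed to get down to 3.

    def colour_reduction_round(colours):
        """One synchronous Cole-Vishkin round on a directed cycle.

        colours[i] is the colour of node i, whose predecessor is node i-1 (mod n).
        Given a proper colouring, the result is again a proper colouring and uses
        colours from {0, ..., 2b-1} when the old colours fit in b bits."""
        n = len(colours)
        new_colours = []
        for i in range(n):
            own, pred = colours[i], colours[(i - 1) % n]
            diff = own ^ pred
            bit = (diff & -diff).bit_length() - 1       # lowest differing bit index
            new_colours.append(2 * bit + ((own >> bit) & 1))
        return new_colours

    # A proper colouring of an 8-node directed cycle with large colours (e.g. IDs).
    colours = [5, 93, 14, 70, 2, 41, 8, 66]
    rounds = 0
    while max(colours) > 5:            # iterate until at most 6 colour values remain
        colours = colour_reduction_round(colours)
        rounds += 1
    print(colours, "after", rounds, "rounds")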

Relevance:

20.00%

Publisher:

Abstract:

In this dissertation I study language complexity from a typological perspective. Since the structuralist era, it has been assumed that local complexity differences in languages are balanced out in cross-linguistic comparisons and that complexity is not affected by the geopolitical or sociocultural aspects of the speech community. However, these assumptions have seldom been studied systematically from a typological point of view. My objective is to define complexity so that it is possible to compare it across languages and to approach its variation with the methods of quantitative typology. My main empirical research questions are: i) does language complexity vary in any systematic way in local domains, and ii) can language complexity be affected by the geographical or social environment? These questions are studied in three articles, whose findings are summarized in the introduction to the dissertation. In order to enable cross-language comparison, I measure complexity as the description length of the regularities in an entity; I separate it from difficulty, focus on local instead of global complexity, and break it up into different types. This approach helps avoid the problems that plagued earlier metrics of language complexity. My approach to grammar is functional-typological in nature, and the theoretical framework is basic linguistic theory. I delimit the empirical research functionally to the marking of core arguments (the basic participants in the sentence). I assess the distributions of complexity in this domain with multifactorial statistical methods and use different sampling strategies, implementing, for instance, the Greenbergian view of universals as diachronic laws of type preference. My data come from large and balanced samples (up to approximately 850 languages), drawn mainly from reference grammars. The results suggest that various significant trends occur in the marking of core arguments in regard to complexity and that complexity in this domain correlates with population size. These results provide evidence that linguistic patterns interact among themselves in terms of complexity, that language structure adapts to the social environment, and that there may be cognitive mechanisms that limit complexity locally. My approach to complexity and language universals can therefore be successfully applied to empirical data and may serve as a model for further research in these areas.