953 results for model complexity
Abstract:
The semantic model developed in this research was a response to the difficulty a group of mathematics learners had with conventional mathematical language and their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to incorporate the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment added another layer of complexity, both in terms of the learning process and the underlying relationship between the four cognitive processes.
Abstract:
The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing decision problems and implementing a solution requires an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand, and, on the other, a social approach focusing on understanding what decision making is about and how developers and users should interact. This research project takes a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision support requirements. It utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes of decision problem definition and problem solution, which are developed in tandem through cognitive refinement of the problem, based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of managers' thinking, and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially in regard to modern high-profile systems such as Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations utilise decision support and data analytics capabilities for all of their key decision-making activities. However, this empirical investigation indicates a very different reality.
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD treats models as first-class artifacts, extending engineers' ability to use concepts from the problem domain of discourse to specify appropriate solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), languages with focused expressiveness that target a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), in which models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK via swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
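To make the decoupling concrete, the sketch below shows one way a generic model of execution could drive a synthesis engine while delegating all domain-specific decisions to a swappable extension. This is a minimal Python illustration of the idea only; the names (DomainKnowledge, SynthesisEngine, analyze_change, synthesize_script) are assumptions, not the dissertation's actual API.

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable DSK extension: everything the engine must ask the domain."""

    @abstractmethod
    def analyze_change(self, old_model, new_model):
        """Return domain-level deltas between two model states."""

    @abstractmethod
    def synthesize_script(self, deltas):
        """Map domain-level deltas to executable script fragments."""

class SynthesisEngine:
    """Generic model of execution (GMoE), reusable across i-DSMLs."""

    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk          # the domain is injected, not hard-coded
        self.current_model = None

    def on_model_change(self, new_model):
        # Fixed interpretation loop: diff the models, synthesize a script,
        # and hand it to the next lower DSVM layer.
        deltas = self.dsk.analyze_change(self.current_model, new_model)
        script = self.dsk.synthesize_script(deltas)
        self.current_model = new_model
        return script

# A toy DSK: a communication-domain or microgrid-domain extension would
# implement the same interface, so the engine itself never changes.
class DemoDSK(DomainKnowledge):
    def analyze_change(self, old_model, new_model):
        return {"added": new_model - (old_model or set())}

    def synthesize_script(self, deltas):
        return [f"provision {item}" for item in sorted(deltas["added"])]

engine = SynthesisEngine(DemoDSK())
print(engine.on_model_change({"audio_stream", "video_stream"}))
```

Under a structure like this, instantiating a synthesis engine for a new i-DSML amounts to writing a new DomainKnowledge extension, which is the kind of reuse the dissertation evaluates empirically.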
Abstract:
The early Pliocene warm phase was characterized by high sea surface temperatures and a deep thermocline in the eastern equatorial Pacific. A new hypothesis suggests that the progressive closure of the Panamanian seaway contributed substantially to the termination of this zonally symmetric state in the equatorial Pacific. According to this hypothesis, intensification of the Atlantic meridional overturning circulation (AMOC), induced by the closure of the gateway, was the principal cause of equatorial Pacific thermocline shoaling during the Pliocene. In this study, twelve Panama seaway sensitivity experiments from eight ocean/climate models of different complexity are analyzed to examine the effect of an open gateway on AMOC strength and thermocline depth. All models show an eastward Panamanian net throughflow, leading to a reduction in AMOC strength compared to the corresponding closed-Panama case. In those models that do not include a dynamic atmosphere, deepening of the equatorial Pacific thermocline appears to scale almost linearly with the throughflow-induced reduction in AMOC strength. Models with a dynamic atmosphere do not follow this simple relation. There are indications that in four out of five models, equatorial wind-stress anomalies amplify the tropical Pacific thermocline deepening. In summary, the models provide strong support for the hypothesized relationship between Panama closure and equatorial Pacific thermocline shoaling.
Abstract:
A novel two-box model for joint compensation of the nonlinear distortion introduced by both the in-phase/quadrature modulator and the power amplifier is proposed for concurrent dual-band wireless transmitters. Compensation of the nonlinear distortion is accomplished in two phases, each identified separately, and it is shown that the complexity of the digital predistortion is thereby reduced. The performance of the proposed model is evaluated in terms of ACPR, EVM and NMSE improvements using 1.4 MHz LTE and WCDMA signals.
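For orientation, two-box predistorters in the literature commonly cascade an I/Q-imbalance compensator with a polynomial predistorter; the generic single-band form below is an illustrative sketch under that assumption, not necessarily the exact dual-band model proposed in this paper.

```latex
% Generic two-box compensator (illustrative, notation assumed):
% box 1: I/Q modulator compensation, box 2: PA polynomial predistortion.
\tilde{x}(n) = \alpha\, x(n) + \beta\, x^{*}(n), \qquad
y(n) = \sum_{k=1}^{K} a_k\, \tilde{x}(n) \left| \tilde{x}(n) \right|^{k-1}
```

Identifying the two boxes separately, as the abstract describes, keeps the number of coefficients estimated at once small, which is consistent with the complexity reduction the abstract reports.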
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it brings, there has been a rapid proliferation of services adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, where customers and providers are left to point fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying past events to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required so that operators and users can define policies for compliance purposes; current policy standards do not cater for such a requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into the XACML format.
Abstract:
Metazoans rely on efficient mechanisms to oppose infections caused by pathogens. The immediate and first-line defense mechanism(s) in metazoans, referred to as the innate immune system, is initiated upon recognition of microbial intruders by germline-encoded receptors and is executed by a set of rapid effector mechanisms. Adaptive immunity is restricted to vertebrate species and is controlled and assisted by the innate immune system. Interestingly, most of the basic signaling cascades that regulate the primeval innate defense mechanism(s) have been well conserved during evolution, for instance between humans and the fruit fly, Drosophila melanogaster. Being devoid of adaptive signaling and effector systems, Drosophila has become an established model system for studying pristine innate immune cascades and reactions. In general, an immune response is evoked when microorganisms pass the fruit fly's physical barriers (e.g. cuticle, epithelial lining of gut and trachea), and it is mainly executed in the hemolymph, the equivalent of the mammalian blood. Innate immunity in the fruit fly consists of a phenoloxidase (PO) response, a cellular response (hemocytes), an antiviral response, and the NF-κB dependent production of antimicrobial peptides referred to as the humoral response. The JAK/STAT and Jun kinase signaling cascades are also implicated in the defense against pathogens.
Severus Snape: The Complexity and Unconventional Heroism of Severus Snape in the Harry Potter Books
Abstract:
Being an evildoer and being evil are not always the same thing; author J.K. Rowling's character Professor Severus Snape from the Harry Potter series balances on that very line. Although unfair and mean to the protagonist Harry Potter throughout the series, Professor Snape is revealed as a hero in the seventh book, Harry Potter and the Deathly Hallows (2007). This essay focuses on some of the complex psychological reasons why Snape acts the way he does towards Harry and why many readers consider him to be just as great a hero as the protagonist. It argues that his difficult upbringing is the cause of his complexity, and the series is analyzed from a structuralist perspective, using A.J. Greimas's actantial model and Frank Kermode's theories about endings and plot twists. Snape's hatred for Harry's father, caused by years of bullying, is examined, as well as his love for Harry's mother. The essay also discusses the ways in which Snape's change of allegiance, brought on by his eternal love for Harry's mother, is a great aid in defeating the Dark Lord.
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled towards the design of a controller based on the explicit model predictive control (EMPC) strategy. Some mathematical models are developed from experimental data to describe the system behavior. The simplest yet reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the mentioned EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful simulation results, will be used directly on the real control system in the next stage of the experimental framework.
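For reference, the named model structure can be written out explicitly. Assuming the (1,1,1) triple denotes the usual (n_a, n_b, n_k) ARX orders, the model is:

```latex
% ARX(n_a, n_b, n_k) with n_a = n_b = n_k = 1
% (ordering convention assumed):
y(t) + a_1\, y(t-1) = b_1\, u(t-1) + e(t)
```

Here y(t) would be the modeled temperature, u(t) the manipulated input, and e(t) a white-noise term; a model this small keeps the explicit MPC law's offline solution compact, consistent with the reduced mathematical complexity mentioned above.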
Abstract:
The ability to estimate the impact of ongoing climate change on the hydrological behaviour of hydro-systems is a necessity for anticipating the inevitable and necessary adaptations our societies must consider. In this context, this doctoral project presents a study evaluating the sensitivity of future hydrological projections to: (i) the non-robustness of the identification of hydrological model parameters, (ii) the use of several equifinal parameter sets, and (iii) the use of different hydrological model structures. To quantify the impact of the first source of uncertainty on model outputs, four climatically contrasted sub-periods are first identified within the observed records. The models are calibrated on each of these four periods, and the resulting outputs are analysed in calibration and in validation following the four configurations of the Differential Split-Sample Test (Klemeš, 1986; Wilby, 2005; Seiller et al., 2012; Refsgaard et al., 2014). To study the second source of uncertainty, the equifinality of parameter sets is then taken into account by considering, for each type of calibration, the outputs associated with equifinal parameter sets. Finally, to evaluate the third source of uncertainty, five hydrological models of different levels of complexity (GR4J, MORDOR, HSAMI, SWAT and HYDROTEL) are applied to the Au Saumon River watershed in Quebec. The three sources of uncertainty are evaluated both under past observed climatic conditions and under future climatic conditions. The results show that, given the evaluation method followed in this doctorate, the use of hydrological models of different levels of complexity is the main source of variability in streamflow projections under future climatic conditions, followed by the lack of robustness of parameter identification. The hydrological projections generated by an ensemble of equifinal parameter sets are close to those associated with the optimal parameter set. Consequently, more effort should be invested in improving the robustness of models for climate change impact studies, in particular by developing more appropriate model structures and by proposing calibration procedures that increase their robustness. This work provides a detailed answer regarding our capacity to diagnose the impacts of climate change on the water resources of the Au Saumon watershed, and proposes an original methodological approach that can be directly applied or adapted to other hydro-climatic contexts.
Abstract:
Supply chains are ubiquitous in any commercial delivery system. The exchange of goods and services, from different supply points to distinct destinations scattered over a given geographical area, requires the management of stocks and vehicle fleets in order to minimize costs while maintaining good quality of service. Even if the operating conditions remain constant over a given time horizon, managing a supply chain is a very complex task. Its complexity increases exponentially with both the number of network nodes and the dynamical operational changes. Moreover, the management system must be adaptive in order to easily cope with disturbances such as machinery and vehicle breakdowns or changes in demand. This work proposes the use of the model predictive control paradigm to tackle the above issues. The obtained simulation results suggest that this strategy enables easy task rescheduling in case of disturbances or anticipated changes in operating conditions.
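As a minimal sketch of the receding-horizon idea applied to a single supply chain node, the following Python fragment chooses today's replenishment order by simulating a short demand horizon and applying only the first move. All symbols (x for stock level, u for order size, the forecast, and the cost weights) are illustrative assumptions, not the chapter's actual formulation.

```python
import numpy as np

def mpc_step(x0, demand_forecast, x_ref, horizon=5, u_max=100.0):
    """Pick the first replenishment order by grid search over the horizon,
    then apply only that first move (receding horizon)."""
    best_u, best_cost = 0.0, np.inf
    for u_first in np.linspace(0.0, u_max, 51):     # candidate first orders
        x, u, cost = x0, u_first, 0.0
        for t in range(horizon):
            x = x + u - demand_forecast[t]          # stock balance update
            cost += (x - x_ref) ** 2 + 0.01 * u**2  # tracking + order effort
            u = max(0.0, x_ref - x)                 # naive policy for the tail
        if cost < best_cost:
            best_u, best_cost = u_first, cost
    return best_u

stock, forecast = 40.0, [12.0, 15.0, 10.0, 18.0, 14.0]
print(f"order for today: {mpc_step(stock, forecast, x_ref=50.0):.1f}")
```

Re-solving this small problem at every period is what lets such a controller absorb breakdowns or demand changes simply by updating the forecast, which matches the rescheduling behaviour reported in the simulations.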
Abstract:
The evaluation and identification of habitats that function as nurseries for marine species has the potential to improve conservation and management. A key component of nursery habitat assessment is estimating individual growth. However, the discrete growth of crustaceans makes it challenging for many traditional in situ techniques to accurately estimate growth over short temporal scales. To evaluate the use of nucleic acid ratios (R:D) for juvenile blue crab (Callinectes sapidus), I developed and validated an R:D-based index of growth in the laboratory. R:D-based growth estimates of crabs collected in the Patuxent River, MD indicated that growth ranged from 0.8 to 25.9 mg·g⁻¹·d⁻¹. Overall, there was no effect of size on growth, whereas there was a weak but significant effect of date. These data provide insight into patterns of habitat-specific growth. The results highlight the complexity of the biological and physical factors that regulate growth of juvenile blue crabs in the field.
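Laboratory calibrations of this kind are typically expressed as a regression of instantaneous growth rate on the RNA:DNA ratio, often with temperature as a covariate. The generic form below is an illustrative assumption, not the fitted index from this thesis:

```latex
% Generic R:D growth calibration; the linear form, coefficients a, b, c
% and the temperature covariate T are illustrative assumptions:
G = a + b\,(R{:}D) + c\,T
```

Once calibrated in the laboratory, such an index converts field-sampled R:D values into habitat-specific growth estimates like those reported above.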
Abstract:
The Hybrid Monte Carlo algorithm is adapted to the simulation of a system of classical degrees of freedom coupled to non-self-interacting lattice fermions. The diagonalization of the Hamiltonian matrix is avoided by introducing a path-integral formulation of the problem in d + 1 Euclidean space–time. A perfect-action formulation allows one to work in continuum Euclidean time, without the need for a Trotter–Suzuki extrapolation. To demonstrate the feasibility of the method we study the Double Exchange Model in three dimensions. The complexity of the algorithm grows only as the system volume, allowing lattices as large as 16³ to be simulated on a personal computer. We conclude that the second-order paramagnetic–ferromagnetic phase transition of double exchange materials close to half-filling belongs to the universality class of the three-dimensional classical Heisenberg model.
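For readers unfamiliar with the method, the sketch below shows a generic Hybrid Monte Carlo update (leapfrog integration plus a Metropolis accept/reject step) in Python. The quadratic action here is a stand-in for the effective action obtained after integrating out the lattice fermions; it illustrates the algorithm, not the Double Exchange Model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def action(x):
    return 0.5 * np.sum(x**2)     # toy action standing in for S_eff

def grad_action(x):
    return x                      # dS/dx for the toy action

def hmc_step(x, n_leap=10, eps=0.1):
    """One HMC update: refresh momenta, integrate with leapfrog,
    accept or reject with a Metropolis test."""
    p = rng.standard_normal(x.shape)
    h_old = action(x) + 0.5 * np.sum(p**2)
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_action(x_new)       # half kick
    for _ in range(n_leap - 1):
        x_new += eps * p_new                      # drift
        p_new -= eps * grad_action(x_new)         # full kick
    x_new += eps * p_new                          # final drift
    p_new -= 0.5 * eps * grad_action(x_new)       # final half kick
    h_new = action(x_new) + 0.5 * np.sum(p_new**2)
    if rng.random() < np.exp(h_old - h_new):      # Metropolis test
        return x_new, True
    return x, False

x, n_acc = rng.standard_normal(64), 0
for _ in range(200):
    x, acc = hmc_step(x)
    n_acc += acc
print(f"acceptance rate: {n_acc / 200:.2f}")
```

Because each leapfrog step costs work proportional to the number of degrees of freedom, the cost per update grows only with the system volume, which is the scaling property the abstract highlights.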
Abstract:
It has recently been shown that the double exchange Hamiltonian with weak antiferromagnetic interactions has a richer variety of first- and second-order transitions than previously anticipated, and that such transitions are consistent with the magnetic properties of manganites. Here we present a thorough discussion of the variational mean-field approach that leads to these results. We also show that the effect of the Berry phase turns out to be crucial to produce first-order paramagnetic–ferromagnetic transitions near half filling, with transition temperatures compatible with the experimental situation. The computation relies on two crucial ingredients: the use of a mean-field ansatz that retains the complexity of a system of electrons with off-diagonal disorder, which is not fully captured by standard mean-field techniques, and the small but significant antiferromagnetic superexchange interaction between the localized spins.
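For orientation, the conventional double-exchange Hamiltonian with antiferromagnetic superexchange, written in the infinite Hund's-coupling limit, takes the form below; the notation is assumed for illustration rather than taken from the paper.

```latex
% Double-exchange model with AF superexchange (infinite Hund's coupling;
% notation assumed for illustration):
H = -\sum_{\langle ij \rangle} t_{ij}\, c_{i}^{\dagger} c_{j}
    + J_{\mathrm{AF}} \sum_{\langle ij \rangle} \mathbf{S}_{i} \cdot \mathbf{S}_{j},
\qquad
t_{ij} = t \left[ \cos\tfrac{\theta_i}{2} \cos\tfrac{\theta_j}{2}
       + \sin\tfrac{\theta_i}{2} \sin\tfrac{\theta_j}{2}\, e^{i(\phi_i - \phi_j)} \right]
```

The complex phase of the effective hopping t_{ij} is the Berry phase whose effect the abstract identifies as crucial, and J_{AF} is the small but significant superexchange between the localized spins.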