932 results for A-PRIORI
Abstract:
This paper proposes that critical realism can provide a useful theoretical foundation for studying enterprise architecture (EA) evolution. Specifically, it investigates the practically relevant and academically challenging question of how EAs integrate Service-oriented Architecture (SOA). Archer's Morphogenetic theory is used as an analytical approach to distinguish the architectural conditions under which SOA is introduced, to study the relationships between these conditions and SOA introduction, and to reflect on the EA evolution (elaborations) that then takes place. The focus lies on why EA evolution takes place (or not) and what architectural changes happen. The paper uses the findings of a literature review to build an a priori model, informed by Archer's theory, for understanding EA evolution in a field that often lacks a solid theoretical groundwork. The findings are threefold. First, EA can evolve on different levels (different integration outcomes). Second, these integration outcomes can be classified into three levels: business architecture, information systems architecture and technology architecture. Third, the analytical separation afforded by Archer's theory helps to explain how these different integration outcomes are generated.
Abstract:
While the method of using specialist herbivores to manage invasive plants (classical biological control) is regarded as relatively safe and cost-effective in comparison to other methods of management, the rarity of strict monophagy among insect herbivores illustrates that, like any management option, biological control is not risk-free. The challenge for classical biological control is therefore to predict risks and benefits a priori. In this study we develop a simulation model that may aid in this process. We use this model to predict the risks and benefits of introducing the chrysomelid beetle Charidotis auroguttata to manage the invasive liana Macfadyena unguis-cati in Australia. Preliminary host-specificity testing of this herbivore indicated that there was limited feeding on a non-target plant, although the non-target was only able to sustain some transitions of the life cycle of the herbivore. The model includes herbivore, target and non-target life history and incorporates spillover dynamics of populations of this herbivore from the target to the non-target under a variety of scenarios. Data from studies of this herbivore in the native range and under quarantine were used to parameterize the model and predict the relative risks and benefits of this herbivore when the target and non-target plants co-occur. Key model outputs include population dynamics on target (apparent benefit) and non-target (apparent risk) and fitness consequences to the target (actual benefit) and non-target plant (actual risk) of herbivore damage. The model predicted that risk to the non-target became unacceptable (i.e. significant negative effects on fitness) when the ratio of target to non-target in a given patch ranged from 1:1 to 3:2. By comparing the current known distribution of the non-target and the predicted distribution of the target, we were able to identify regions in Australia where the agent may pose an unacceptable risk.
By considering risk and benefit simultaneously, we highlight how such a simulation modelling approach can assist scientists and regulators in making more objective a priori decisions on the value of releasing specialist herbivores as biological control agents.
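The spillover logic described above can be caricatured in a few lines. This is a hedged toy sketch, not the authors' parameterised model: every rate constant, the carrying-capacity cap, and the fitness-damage rule are invented placeholders chosen only to show how apparent risk (spillover feeding) translates into actual risk (non-target fitness loss) as the target-to-non-target ratio varies.

```python
# Toy spillover model (illustrative only; all parameters are assumptions).
def simulate(ratio, years=30):
    """Return final relative fitness of the non-target plant (1.0 = undamaged)
    for a given target:non-target abundance ratio in a patch."""
    target, nontarget = ratio, 1.0
    herbivore, fitness = 0.1, 1.0
    for _ in range(years):
        # Herbivore grows on the target, declines otherwise (assumed rates),
        # and is capped by an assumed patch carrying capacity.
        growth = 1.0 + 0.8 * target / (target + 1.0) - 0.3
        herbivore = min(herbivore * growth, 1.0)
        # Spillover: the share of feeding that lands on the non-target.
        spill = herbivore * nontarget / (target + nontarget)
        fitness *= max(0.0, 1.0 - 0.05 * spill)
    return fitness
```

Under these placeholder parameters, a patch dominated by the target dilutes spillover onto the non-target, so the non-target's fitness loss shrinks as the ratio grows.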
Abstract:
In this paper I will offer a novel understanding of a priori knowledge. My claim is that the sharp distinction that is usually made between a priori and a posteriori knowledge is groundless. It will be argued that a plausible understanding of a priori and a posteriori knowledge has to acknowledge that they are in a constant bootstrapping relationship. It is also crucial that we distinguish between a priori propositions that hold in the actual world and merely possible, non-actual a priori propositions, as we will see when considering cases like Euclidean geometry. Furthermore, contrary to what Kripke seems to suggest, a priori knowledge is intimately connected with metaphysical modality, indeed, grounded in it. The task of a priori reasoning, according to this account, is to delimit the space of metaphysically possible worlds in order for us to be able to determine what is actual.
Abstract:
The distinction between a priori and a posteriori knowledge has been the subject of an enormous amount of discussion, but the literature is biased against recognizing the intimate relationship between these forms of knowledge. For instance, it seems to be almost impossible to find a sample of pure a priori or a posteriori knowledge. In this paper it will be argued that distinguishing between a priori and a posteriori is more problematic than is often acknowledged, and that a priori and a posteriori resources are in fact used in parallel. We will define this relationship between a priori and a posteriori knowledge as the bootstrapping relationship. As we will see, this relationship gives us reasons to seek an altogether novel definition of a priori and a posteriori knowledge. Specifically, we will have to analyse the relationship between a priori knowledge and a priori reasoning, and it will be suggested that the latter serves as a more promising starting point for the analysis of aprioricity. We will also analyse a number of examples from the natural sciences and consider the role of a priori reasoning in these examples. The focus of this paper is the analysis of the concepts of a priori and a posteriori knowledge rather than the epistemic domain of a posteriori and a priori justification.
Abstract:
This paper deals with adaptive mesh generation for singularly perturbed nonlinear parameterized problems, together with a comparative study of the resulting meshes. We propose an a posteriori error estimate for singularly perturbed parameterized problems solved by moving mesh methods with a fixed number of mesh points. The well-known a priori meshes are compared with the proposed one. The comparison shows that the proposed numerical method is highly effective for generating layer-adapted a posteriori meshes. A numerical experiment on the error behavior on the different meshes is carried out to highlight the comparison of the approximated solutions. (C) 2015 Elsevier B.V. All rights reserved.
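As a rough illustration of how a layer-adapted mesh with a fixed number of points can be generated a posteriori, the sketch below uses the classical de Boor equidistribution iteration, a standard moving-mesh building block (not necessarily the scheme used in the paper): nodes are redistributed so that every cell carries the same integral of a monitor function, which here is an assumed boundary-layer monitor for a solution behaving like exp(-x/ε).

```python
import numpy as np

def equidistribute(monitor, a=0.0, b=1.0, n=16, iters=20):
    """Redistribute n+1 mesh points on [a, b] so that each cell carries
    (approximately) equal monitor-function mass (de Boor iteration)."""
    x = np.linspace(a, b, n + 1)
    for _ in range(iters):
        xm = 0.5 * (x[:-1] + x[1:])              # cell midpoints
        w = monitor(xm) * np.diff(x)             # per-cell monitor mass
        c = np.concatenate(([0.0], np.cumsum(w)))
        target = np.linspace(0.0, c[-1], n + 1)  # equal mass per cell
        x = np.interp(target, c, x)              # invert the cumulative mass
    return x

# Assumed arc-length-type monitor for a layer of width eps at x = 0.
eps = 1e-2
mesh = equidistribute(lambda x: 1.0 + np.abs(np.exp(-x / eps)) / eps)
```

The resulting mesh clusters points inside the layer near x = 0, much like an a priori Shishkin or Bakhvalov mesh would, but without knowing the layer location in advance.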
Abstract:
This work presents an a priori estimate for the upper bound of the temperature distribution in a steady-state problem for a body with temperature-dependent thermal conductivity. The discussion assumes linear boundary conditions (Newton's law of cooling) and a piecewise-constant thermal conductivity (when regarded as a function of temperature). These estimates are a powerful tool that can dispense with the need for an expensive numerical simulation of a nonlinear heat transfer problem whenever it suffices to know the highest temperature value. In such cases, the methodology proposed in this work is more effective than the usual approximations that assume both the thermal conductivity and the heat sources to be constant.
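As a concrete (and much simpler) illustration of the idea, consider a 1-D slab of thickness L with a uniform heat source q and Newton cooling (coefficient h, ambient temperature T_inf) on both faces. Bounding the piecewise-constant conductivity below by its smallest value k_min gives the closed-form upper bound T_max ≤ T_inf + qL/(2h) + qL²/(8 k_min). This is an assumed special case for illustration, not the paper's general estimate.

```python
# Hedged 1-D illustration: a priori upper bound on the peak temperature of a
# uniformly heated slab with Newton cooling on both faces, using the smallest
# piecewise conductivity value (conservative bound). Units: SI throughout.
def temperature_upper_bound(q, L, h, T_inf, k_values):
    """q: volumetric source [W/m^3], L: thickness [m], h: film coefficient
    [W/m^2 K], T_inf: ambient [K], k_values: piecewise conductivities [W/m K]."""
    k_min = min(k_values)
    return T_inf + q * L / (2 * h) + q * L**2 / (8 * k_min)

# Example with illustrative numbers: q = 1e4 W/m^3, L = 0.1 m, h = 50 W/m^2 K.
bound = temperature_upper_bound(1e4, 0.1, 50.0, 300.0, [10.0, 20.0])
```

No nonlinear solve is needed: the bound is a single evaluation, which is the practical appeal the abstract describes.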
Abstract:
The paper is based on qualitative properties of the solution of the Navier-Stokes equations for incompressible fluid, and on properties of their finite element solution. In problems with corner-like singularities (e.g. on the well-known L-shaped domain), some adaptive strategy is usually used. In this paper we present an alternative approach. For flow problems on domains with corner singularities we use the a priori error estimates and the asymptotic expansion of the solution to derive an algorithm for refining the mesh near the corner. It yields a very precise solution at low cost. We present some numerical results.
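The a priori-driven refinement can be sketched as a power-law graded mesh in the radial direction: near a re-entrant corner the solution behaves like r^λ with λ < 1 (λ = 2/3 for the L-shaped domain), so clustering nodes at r = 0 compensates for the reduced regularity. The grading exponent below is an illustrative choice, not the paper's derived value.

```python
import numpy as np

def graded_radii(n, R=1.0, beta=3.0):
    """Radial node positions r_i = R * (i/n)**beta on [0, R].
    beta > 1 clusters nodes at the corner r = 0; beta is an assumed
    grading exponent, in practice tied to the singularity strength."""
    i = np.arange(n + 1)
    return R * (i / n) ** beta

r = graded_radii(8)   # 9 nodes, strongly refined toward the corner
```

Because the grading is fixed in advance by the known a priori expansion, no error estimator or adaptive loop is needed, which is what makes the approach cheap.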
Abstract:
This paper extends a state projection method for structure-preserving model reduction to situations where only a weaker notion of system structure is available. This weaker notion of structure, identifying the causal relationships between manifest variables of the system, is especially relevant in settings such as systems biology, where a clear partition of state variables into distinct subsystems may be unknown, or may not even exist. The resulting technique, like similar approaches, does not provide theoretical performance guarantees, so an extensive computational study is conducted, in which it is observed to work fairly well in practice. Moreover, conditions characterizing structurally minimal realizations, and sufficient conditions characterizing edge loss resulting from the reduction process, are presented. ©2009 IEEE.
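Plain state projection, the starting point such structured methods extend, can be sketched in a few lines: the state x of x' = Ax is restricted to a low-dimensional basis V, giving reduced dynamics A_r = VᵀAV. The matrix and the choice of basis below are illustrative assumptions, not the paper's structured variant.

```python
import numpy as np

# Toy system: two slow, coupled states and one fast, decoupled state
# (values are illustrative assumptions).
A = np.array([[-1.0,  0.1,   0.0],
              [ 0.1, -2.0,   0.0],
              [ 0.0,  0.0, -50.0]])

# Projection basis keeping the two slow states (assumed known here; in
# practice V comes from e.g. balanced truncation or POD).
V = np.eye(3)[:, :2]

# Galerkin projection: reduced 2x2 dynamics for x_r with x ≈ V @ x_r.
A_r = V.T @ A @ V
```

A structure-preserving variant would additionally constrain V so that the causal links between manifest variables survive in A_r; the sufficient conditions in the paper characterise when such links ("edges") are nevertheless lost.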
Abstract:
The performance of algebraic flame surface density (FSD) models has been assessed for flames with nonunity Lewis number (Le) in the thin reaction zones regime, using a direct numerical simulation (DNS) database of freely propagating turbulent premixed flames with Le ranging from 0.34 to 1.2. The focus is on algebraic FSD models based on a power-law approach, and the effects of Lewis number on the fractal dimension D and inner cut-off scale η_i have been studied in detail. It has been found that D is strongly affected by Lewis number and increases significantly with decreasing Le. By contrast, η_i remains close to the laminar flame thermal thickness for all values of Le considered here. A parameterisation of D is proposed such that the effects of Lewis number are explicitly accounted for. The new parameterisation is used to propose a new algebraic model for FSD. The performance of the new model is assessed with respect to results for the generalised FSD obtained from explicitly LES-filtered DNS data. It has been found that the performance of most existing models deteriorates with decreasing Lewis number, while the newly proposed model is found to perform as well as or better than most existing algebraic models for FSD. © 2012 Mohit Katragadda et al.
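A power-law FSD closure has the generic form Ξ = (Δ/η_i)^(D−2) for the flame wrinkling factor at filter width Δ. The sketch below uses a hypothetical linear Le-dependence for D purely to illustrate the qualitative trend reported above (D increases as Le decreases, so sub-filter wrinkling grows); the constants are invented for illustration and are not the paper's parameterisation.

```python
# Hypothetical Le-dependence of the fractal dimension (illustrative constants,
# not the paper's fit): D rises as Le falls below unity.
def fractal_dimension(Le, D_unity=2.35, slope=0.3):
    return D_unity + slope * (1.0 - Le)

# Generic power-law wrinkling factor: Xi = (Delta / eta_i)**(D - 2),
# with eta_i taken close to the laminar thermal thickness per the abstract.
def wrinkling_factor(filter_width, eta_i, Le):
    D = fractal_dimension(Le)
    return (filter_width / eta_i) ** (D - 2.0)
```

With these placeholder constants, a Le = 0.34 flame shows markedly more sub-filter wrinkling than a Le = 1.2 flame at the same filter width, mirroring the trend the DNS analysis reports.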
Abstract:
This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach, with interest in both flame lift-off height and flame brush structure. First, flamelet models used to capture contributions from premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated against the DNS data. Statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of Z-c correlation and the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for the non-premixed combustion contributions. The results clearly show that both the Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to get good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well for various axial positions. It seems that flame stabilisation is influenced by both premixed and non-premixed combustion modes, and by their mutual interaction. © 2014 Taylor & Francis.
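The copula construction for the joint PDF of Z and c can be sketched with a Gaussian copula joining two assumed beta marginals. The copula family, the beta shape parameters, and the correlation value below are all illustrative assumptions, not those of the study; the point is only to show how a prescribed Z-c correlation is imposed on top of independently presumed marginals.

```python
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(0)
rho, n = -0.5, 10000          # assumed Z-c correlation and sample size

# Gaussian copula: correlated standard normals -> correlated uniforms.
g = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = norm.cdf(g)

# Map the uniforms through assumed beta marginals (presumed-PDF step).
Z = beta.ppf(u[:, 0], 2.0, 5.0)   # mixture fraction marginal (illustrative)
c = beta.ppf(u[:, 1], 1.5, 1.5)   # progress variable marginal (illustrative)
```

Sampling the copula rather than the independent product preserves the marginal shapes while reproducing the prescribed correlation, which is exactly the inadequacy of statistically independent PDFs that the DNS comparison exposes.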