35 results for theory of constraints
in CentAUR: Central Archive University of Reading - UK
Abstract:
The formulation of four-dimensional variational data assimilation allows the incorporation of constraints into the cost function which need only be weakly satisfied. In this paper we investigate the value of imposing conservation properties as weak constraints. Using the example of the two-body problem of celestial mechanics we compare weak constraints based on conservation laws with a constraint on the background state. We show how the imposition of conservation-based weak constraints changes the nature of the gradient equation. Assimilation experiments demonstrate how this can add extra information to the assimilation process, even when the underlying numerical model is conserving.
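A conservation-based weak constraint of the kind described enters the 4D-Var cost function as an extra quadratic penalty term added to the usual background and observation terms. The following minimal sketch illustrates that structure only; the names (`model`, `H`, `E`, `gamma`) are illustrative assumptions, not the paper's notation or method:

```python
import numpy as np

def weak_constraint_cost(x0, xb, B_inv, obs, R_inv, model, H, E, E0, gamma):
    """Illustrative 4D-Var cost with a conservation-based weak constraint.

    J(x0) = background term + observation terms over the window
            + gamma * sum_k (E(x_k) - E0)^2,
    where E is a conserved quantity (e.g. the total energy of the
    two-body system) and E0 its reference value.
    """
    J = 0.5 * (x0 - xb) @ B_inv @ (x0 - xb)      # background term
    x = x0
    for y_k in obs:                              # step through the window
        d = H(x) - y_k                           # innovation at time k
        J += 0.5 * d @ R_inv @ d                 # observation term
        J += 0.5 * gamma * (E(x) - E0) ** 2      # weak conservation penalty
        x = model(x)                             # advance the model state
    return J
```

With a conserving model the penalty term is constant along a trajectory, yet (as the abstract notes) it still changes the gradient of J with respect to the initial state, and hence the analysis.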
Abstract:
Adaptive methods which “equidistribute” a given positive weight function are now used fairly widely for selecting discrete meshes. The disadvantage of such schemes is that the resulting mesh may not be smoothly varying. In this paper a technique is developed for equidistributing a function subject to constraints on the ratios of adjacent steps in the mesh. Given a weight function $f \geqq 0$ on an interval $[a,b]$ and constants $c$ and $K$, the method produces a mesh with points $x_0 = a$, $x_{j+1} = x_j + h_j$, $j = 0,1,\cdots,n-1$, and $x_n = b$ such that \[ \int_{x_j}^{x_{j+1}} f \leqq c \quad \text{and} \quad \frac{1}{K} \leqq \frac{h_{j+1}}{h_j} \leqq K \quad \text{for } j = 0,1,\cdots,n-1. \] A theoretical analysis of the procedure is presented, and numerical algorithms for implementing the method are given. Examples show that the procedure is effective in practice. Other types of constraints on equidistributing meshes are also discussed. The principal application of the procedure is to the solution of boundary value problems, where the weight function is generally some error indicator, and accuracy and convergence properties may depend on the smoothness of the mesh. Other practical applications include the regrading of statistical data.
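The constrained equidistribution idea can be illustrated with a naive one-pass greedy marcher. This is a simplified sketch under stated assumptions, not the paper's algorithm: it enforces only the area budget and the upper ratio bound $h_{j+1} \leqq K h_j$, and simply shortens the final step to land on $b$:

```python
import numpy as np

def equidistribute(f, a, b, c, K, h0, max_steps=10_000):
    """Greedy sketch of ratio-constrained equidistribution (illustrative).

    Marches from a toward b, taking at each point the largest step h
    that satisfies both the area budget  int_x^{x+h} f <= c  and the
    smoothness bound  h <= K * h_prev.  Integrals use the midpoint rule.
    """
    def area(x, h, n=50):                      # midpoint-rule integral of f
        t = x + (np.arange(n) + 0.5) * h / n
        return np.sum(f(t)) * h / n

    mesh, x, h_prev = [a], a, h0
    for _ in range(max_steps):
        hi = min(K * h_prev, b - x)            # cap by ratio bound and endpoint
        if area(x, hi) <= c:
            h = hi
        else:                                  # bisect for the largest feasible h
            lo = 0.0
            for _ in range(40):
                mid = 0.5 * (lo + hi)
                if area(x, mid) <= c:
                    lo = mid
                else:
                    hi = mid
            h = lo
        x += h
        mesh.append(x)
        h_prev = h
        if x >= b - 1e-12:                     # reached the right endpoint
            break
    return np.array(mesh)
```

The paper's method additionally enforces the lower bound $h_{j+1} \geqq h_j / K$ and arranges for $x_n = b$ exactly, which a greedy single pass cannot guarantee in general.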
Abstract:
Using the eye-movement monitoring technique in two reading comprehension experiments, we investigated the timing of constraints on wh-dependencies (so-called ‘island’ constraints) in native and nonnative sentence processing. Our results show that both native and nonnative speakers of English are sensitive to extraction islands during processing, suggesting that memory storage limitations affect native and nonnative comprehenders in essentially the same way. Furthermore, our results show that the timing of island effects in native compared to nonnative sentence comprehension is affected differently by the type of cue (semantic fit versus filled gaps) signalling whether dependency formation is possible at a potential gap site. Whereas English native speakers showed immediate sensitivity to filled gaps but not to lack of semantic fit, proficient German-speaking learners of L2 English showed the opposite sensitivity pattern. This indicates that initial wh-dependency formation in nonnative processing is based on semantic feature-matching rather than being structurally mediated as in native comprehension.
Abstract:
We report findings from psycholinguistic experiments investigating the detailed timing of processing morphologically complex words by proficient adult second (L2) language learners of English in comparison to adult native (L1) speakers of English. The first study employed the masked priming technique to investigate -ed forms with a group of advanced Arabic-speaking learners of English. The results replicate previously found L1/L2 differences in morphological priming, even though in the present experiment an extra temporal delay was offered after the presentation of the prime words. The second study examined the timing of constraints against inflected forms inside derived words in English using the eye-movement monitoring technique and an additional acceptability judgment task with highly advanced Dutch L2 learners of English in comparison to adult L1 English controls. Whilst offline the L2 learners performed native-like, the eye-movement data showed that their online processing was not affected by the morphological constraint against regular plurals inside derived words in the same way as in native speakers. Taken together, these findings indicate that L2 learners are not just slower than native speakers in processing morphologically complex words, but that the L2 comprehension system employs real-time grammatical analysis (in this case, morphological information) less than the L1 system.
Abstract:
Computational formalisms have been pushing the boundaries of the field of computing for the last 80 years and much debate has surrounded what computing entails; what it is, and what it is not. This paper seeks to explore the boundaries of the ideas of computation and provide a framework for enabling a constructive discussion of computational ideas. First, a review of computing is given, ranging from Turing Machines to interactive computing. Then, a variety of natural physical systems are considered for their computational qualities. From this exploration, a framework is presented under which all dynamical systems can be considered as instances of the class of abstract computational platforms. An abstract computational platform is defined by both its intrinsic dynamics and how it allows computation that is meaningful to an external agent through the configuration of constraints upon those dynamics. It is asserted that a platform’s computational expressiveness is directly related to the freedom with which constraints can be placed. Finally, the requirements for a formal constraint description language are considered and it is proposed that Abstract State Machines may provide a reasonable basis for such a language.
Abstract:
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show perceived direction and speed falls close to the IOC direction for Gabor arrays having a wide range of orientations but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
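One way the harmonic vector average can be computed is by "inverting" each local velocity vector ($v \mapsto v/|v|^2$), averaging the inverted vectors arithmetically, and inverting the mean back. The sketch below assumes this inversion-based formulation and the standard aperture-problem geometry; orientations whose normal is perpendicular to the motion (zero normal speed) must be excluded to avoid division by zero:

```python
import numpy as np

def harmonic_vector_average(vels):
    """Harmonic vector average of 2-D local velocity vectors (sketch).

    Each vector is inverted (v -> v/|v|^2), the inverted vectors are
    averaged, and the mean is inverted back.  For local normal
    velocities sampled symmetrically about the global motion direction
    this recovers the global velocity exactly.
    """
    v = np.asarray(vels, dtype=float)
    inv = v / np.sum(v**2, axis=1, keepdims=True)   # v -> v / |v|^2
    m = inv.mean(axis=0)
    return m / np.dot(m, m)                          # invert the mean back

def local_normal_velocities(V, thetas):
    """Normal components of global velocity V seen through apertures
    whose normals lie at angles thetas (the aperture problem)."""
    n = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    return (n @ V)[:, None] * n
```

For a global velocity of (2, 0) viewed through apertures at ±30° and ±60°, the HVA returns (2, 0), whereas the plain vector average of the same local vectors underestimates the speed, consistent with the abstract's argument.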
Abstract:
This thesis considers Participatory Crop Improvement (PCI) methodologies and examines the reasons behind their continued contestation and limited mainstreaming in conventional modes of crop improvement research within National Agricultural Research Systems (NARS). In particular, it traces the experiences of a long-established research network with over 20 years of experience in developing and implementing PCI methods across South Asia, and specifically considers its engagement with the Indian NARS and associated state-level agricultural research systems. In order to address the issues surrounding PCI institutionalisation processes, a novel conceptual framework was derived from a synthesis of the literatures on Strategic Niche Management (SNM) and Learning-based Development Approaches (LBDA) to analyse the socio-technical processes and structures which constitute the PCI ‘niche’ and NARS ‘regime’. In examining the niche and regime according to their socio-technical characteristics, the framework provides explanatory power for understanding the nature of their interactions and the opportunities and barriers that exist with respect to the translation of lessons and ideas between niche and regime organisations. The research shows that in trying to institutionalise PCI methods and principles within NARS in the Indian context, PCI proponents have encountered a number of constraints related to the rigid and hierarchical structure of the regime organisations; the contractual mode of most conventional research, which inhibits collaboration with a wider group of stakeholders; and the time-limited nature of PCI projects themselves, which limits investment and hinders scaling up of the innovations. 
It also reveals that while the niche projects may be able to induce a ‘weak’ form of PCI institutionalisation within the Indian NARS by helping to alter their institutional culture to be more supportive of participatory plant breeding approaches and future collaboration with PCI researchers, a ‘strong’ form of PCI institutionalisation, in which NARS organisations adopt participatory methodologies to address all their crop improvement agenda, is likely to remain outside of the capacity of PCI development projects to deliver.
Abstract:
Design management research usually deals with the processes within the professional design team and yet, in the UK, the volume of the total project information produced by the specialist trade contractors equals or exceeds that produced by the design team. There is a need to understand the scale of this production task and to plan and manage it accordingly. The model of the process on which the plan is to be based, while generic, must be sufficiently robust to cover the majority of instances. An approach using design elements, in sufficient depth to possibly develop tools for a predictive model of the process, is described. The starting point is that each construction element and its components have a generic sequence of design activities. Specific requirements tailor the element's application to the building. Then there are the constraints produced due to the interaction with other elements. Therefore, the selection of a component within the element may impose a set of constraints that will affect the choice of other design elements. Thus, a design decision can be seen as an interrelated element-constraint-element (ECE) sub-net. To illustrate this approach, an example of the process within precast concrete cladding has been used.
Abstract:
This is one of the first papers in which arguments are given to treat code-switching and borrowing as similar phenomena. It is argued that it is theoretically undesirable to distinguish both phenomena, and empirically very problematic. A probabilistic account of code-switching and a hierarchy of switched constituents (similar to hierarchies of borrowability) are proposed which account for the fact that some constituents are more likely to be borrowed/switched than others. It is argued that the same kinds of constraints apply to both code-switching and borrowing.
Abstract:
Recent attempts to problematize archaeological fieldwork have been concerned with excavation at the expense of surface survey, and with questions of procedure more than with interpretations of the past. In fact these two kinds of fieldwork offer quite different possibilities and suffer from different constraints. Thought must be given to ways in which they can be combined if they are to make a real contribution to social archaeology. The argument is illustrated by a project carried out at a megalithic cemetery in Scotland.
Abstract:
This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses on three coupled climate models with very different sensitivities and aerosol forcing are carried out. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming by too much (or too little) aerosol cooling. Despite a very large uncertainty for estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century. This is because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr⁻¹ increase in CO2 is estimated to lie between 2.2 and 4 K century⁻¹ (5th-95th percentiles).