15 results for CHARGE CONTROL MODEL
in Aston University Research Archive
Abstract:
In construction projects, the aim of project control is to ensure projects finish on time, within budget, and achieve other project objectives. During the last few decades, numerous project control methods have been developed and adopted by project managers in practice. However, many existing methods focus on describing what the processes and tasks of project control are, not on how these tasks should be conducted. There is also a potential gap between the principles that underlie these methods and project control practice. As a result, time and cost overruns are still common in construction projects, partly attributable to deficiencies of existing project control methods and difficulties in implementing them. This paper describes a new project cost and time control model, the project control and inhibiting factors management (PCIM) model, developed through a study involving extensive interaction with construction practitioners in the UK, which better reflects the real needs of project managers. A good-practice checklist is also developed to facilitate implementation of the model. © 2013 American Society of Civil Engineers.
Abstract:
How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
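The simple pooling statistics that the matching data ruled out can be stated in a few lines; a minimal sketch (equal-area checkerboard assumed; the function name is ours, not the paper's):

```python
def pooled_contrast(c1, c2, rule):
    """Candidate global-contrast statistics for two interdigitated
    micro-pattern contrasts c1, c2 occupying equal areas.
    These are among the simple rules the matching data rejected."""
    if rule == "max":            # winner-take-all pooling
        return max(c1, c2)
    if rule == "rms":            # root-mean-square contrast
        return ((c1 ** 2 + c2 ** 2) / 2) ** 0.5
    if rule == "linear":         # linear (mean) pooling
        return (c1 + c2) / 2
    raise ValueError(f"unknown rule: {rule}")
```

When one contrast is much higher, max() ignores the other entirely, while RMS and linear pooling rise monotonically as the lower contrast is added; none of these rules can produce the paradoxical reduction in perceived global contrast reported above.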
Abstract:
We studied the visual mechanisms that serve to encode spatial contrast at threshold and supra-threshold levels. In a 2AFC contrast-discrimination task, observers had to detect the presence of a vertical 1 cycle deg⁻¹ test grating (of contrast dc) that was superimposed on a similar vertical 1 cycle deg⁻¹ pedestal grating, whereas in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 cycle deg⁻¹, or oblique 3 cycles deg⁻¹). When expressed as threshold contrast (dc at 75% correct) versus mask contrast (c) our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (c, dc) - the performance surface. In contrast discrimination, psychometric functions (p versus log dc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope did not occur. We explored a standard gain-control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (dc) is linearised by the presence of pedestal contrasts but remains nonlinear in pattern masking.
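Gain-control models of this family are commonly written in a Legge-and-Foley-style divisive form. The sketch below illustrates the general architecture only; the functional form details and all parameter values are our assumptions, not the six parameters fitted in the paper:

```python
def mechanism_response(c, m, p=2.4, q=2.0, z=0.01, w=0.2):
    """Nonlinear contrast response with cross-channel suppression.

    c: target contrast (pedestal plus any increment); m: cross-channel
    mask contrast. p, q: excitatory/suppressive exponents; z: saturation
    constant; w: cross-channel suppressive weight (~0.20 in the abstract).
    All numerical values here are illustrative assumptions.
    """
    return c ** p / (z + c ** q + w * m ** q)

def d_prime(pedestal, dc, mask, sigma=1.0):
    """Signal-detection performance, limited by additive noise of
    constant variance (sigma), as assumed in the abstract."""
    return (mechanism_response(pedestal + dc, mask)
            - mechanism_response(pedestal, mask)) / sigma
```

With these illustrative values the model reproduces the qualitative pattern described above: a near-threshold pedestal facilitates detection (the dipper), a high-contrast pedestal masks it, and a cross-channel mask raises threshold smoothly without facilitation.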
Abstract:
Control and governance theories recognize that exchange partners are subject to two general forms of control, the unilateral authority of one firm and bilateral expectations extending from their social bond. In this way, a supplier both exerts unilateral, authority-based controls and is subject to socially-based, bilateral controls as it attempts to manage its brand successfully through reseller channels. Such control is being challenged by suppliers’ growing relative dependence on increasingly dominant resellers in many industries. Yet the impact of supplier relative dependence on the efficacy of control-based governance in the supplier’s channel is not well understood. To address this gap, we specify and test a control model moderated by relative dependence involving the conceptualization and measurement of governance at the level of specific control processes: incenting, monitoring, and enforcing. Our empirical findings show relative dependence undercuts the effectiveness of certain unilateral and bilateral control processes while enhancing the effectiveness of others, largely supporting our dual suppositions that each control process operates through a specialized behavioral mechanism and that these underlying mechanisms are differentially impacted by relative dependence. We offer implications of these findings for managers and identify our contributions to channel theory and research.
Abstract:
This thesis reviews the existing manufacturing control techniques and identifies their practical drawbacks when applied in a high variety, low and medium volume environment. It advocates that the significant drawbacks inherent in such systems could impair their application in such a manufacturing environment. The key weaknesses identified in these systems were: the capacity-insensitive nature of Material Requirements Planning (MRP); the centralised approach to planning and control applied in Manufacturing Resources Planning (MRP II); the fact that Kanban can only be used in repetitive environments; Optimised Production Technology's (OPT) inability to deal with transient bottlenecks, etc. On the other hand, cellular systems offer advantages in simplifying the control problems of manufacturing, and the thesis reviews systems designed for cellular manufacturing including Distributed Manufacturing Resources Planning (DMRP) and Flexible Manufacturing System (FMS) controllers. It advocates that a newly developed cellular manufacturing control methodology, which is fully automatic, capacity sensitive and responsive, has the potential to resolve the core manufacturing control problems discussed above. Its development is envisaged within the framework of a DMRP environment, in which each cell is provided with its own MRP II system and decision making capability. It is a cellular based closed loop control system, which revolves around a single-level Bill-Of-Materials (BOM) structure and hence provides better linkage between shop level scheduling activities and relevant entries in the Master Production Schedule (MPS). This provides a better prospect of undertaking rapid response to changes in the status of manufacturing resources and incoming enquiries. Moreover, it also permits automatic evaluation of capacity and due date constraints and hence facilitates the automation of the MPS within such a system.
A prototype cellular manufacturing control model was developed to demonstrate the underlying principles and operational logic of the cellular manufacturing control methodology, based on the above concept. This was shown to offer significant advantages from the perspective of operational planning and control. Results of relevant tests proved that the model is capable of producing reasonable due dates and undertaking automation of the MPS. The overall performance of the model proved satisfactory and acceptable.
Abstract:
The thesis deals with the background, development and description of a mathematical stock control methodology for use within an oil and chemical blending company, where demand and replenishment lead-times are generally non-stationary. The stock control model proper relies on, as input, adaptive forecasts of demand determined for an economical forecast/replenishment period precalculated on an individual stock-item basis. The control procedure is principally that of the continuous review, reorder level type, where the reorder level and reorder quantity 'float', that is, each changes in accordance with changes in demand. Two versions of the Methodology are presented: a cost minimisation version and a service level version. Recognising the importance of demand forecasts, four recognised variations of the Trigg and Leach adaptive forecasting routine are examined. A fifth variation, developed here, is proposed as part of the stock control methodology. The results of testing the cost minimisation version of the Methodology with historical data, by means of a computerised simulation, are presented together with a description of the simulation used. The performance of the Methodology also compares favourably with a rule-of-thumb approach considered by the Company as an interim solution for reducing stock levels. The contribution of the work to the field of scientific stock control is felt to be significant for the following reasons: (1) The Methodology is designed specifically for use with non-stationary demand and for this reason alone appears to be unique. (2) The Methodology is unique in its approach, and the cost-minimisation version is shown to work successfully with the demand data presented. (3) The Methodology and the thesis as a whole fill an important gap between complex mathematical stock control theory and practical application.
A brief description of a computerised order processing/stock monitoring system, designed and implemented as a pre-requisite for the Methodology's practical operation, is presented as an appendix.
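The adaptive forecasting routine at the heart of the Methodology can be sketched as follows. This is a generic Trigg and Leach implementation, not the thesis's fifth variation; the smoothing constant gamma and the initialisation are illustrative assumptions:

```python
def trigg_leach_forecast(demands, gamma=0.2, f0=None):
    """Adaptive exponential smoothing (Trigg & Leach style).

    The effective smoothing constant at each step is the absolute
    tracking signal |smoothed error / smoothed absolute error|, so the
    forecast adapts quickly when demand is non-stationary. gamma and
    the initial values below are illustrative, not fitted.
    """
    f = demands[0] if f0 is None else f0   # current one-step forecast
    E = 0.0     # smoothed forecast error
    M = 1e-9    # smoothed absolute error (small seed avoids 0/0)
    forecasts = []
    for d in demands:
        forecasts.append(f)
        e = d - f                          # this period's forecast error
        E = gamma * e + (1 - gamma) * E
        M = gamma * abs(e) + (1 - gamma) * M
        alpha = abs(E) / M if M > 0 else gamma
        f = f + alpha * e                  # next-period forecast
    return forecasts
```

After a step change in the demand level, the tracking signal approaches 1 and the forecast locks onto the new level within a few periods; in the Methodology such forecasts would then drive the floating reorder level and reorder quantity.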
Abstract:
In experiments reported elsewhere at this conference, we have revealed two striking results concerning binocular interactions in a masking paradigm. First, at low mask contrasts, a dichoptic masking grating produces a small facilitatory effect on the detection of a similar test grating. Second, the psychometric slope for dichoptic masking starts high (Weibull β ≈ 4) at detection threshold, becomes low (β ≈ 1.2) in the facilitatory region, and then unusually steep at high mask contrasts (β ≈ 5.5). Neither of these results is consistent with Legge's (1984 Vision Research 24 385 - 394) model of binocular summation, but they are predicted by a two-stage gain control model in which interocular suppression precedes binocular summation. Here, we pose a further challenge for this model by using a 'twin-mask' paradigm (cf Foley, 1994 Journal of the Optical Society of America A 11 1710 - 1719). In 2AFC experiments, observers detected a patch of grating (1 cycle deg⁻¹, 200 ms) presented to one eye in the presence of a pedestal in the same eye and a spatially identical mask in the other eye. The pedestal and mask contrasts varied independently, producing a two-dimensional masking space in which the orthogonal axes (10 × 10 contrasts) represent conventional dichoptic and monocular masking. The resulting surface (100 thresholds) confirmed and extended the observations above, and fixed the six parameters in the model, which fitted the data well. With no adjustment of parameters, the model described performance in a further experiment where mask and test were presented to both eyes. Moreover, in both model and data, binocular summation was greater than a factor of √2 at detection threshold. We conclude that this two-stage nonlinear model, with interocular suppression, gives a good account of early binocular processes in the perception of contrast. [Supported by EPSRC Grant Reference: GR/S74515/01]
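The two-stage architecture (interocular suppression before binocular summation) can be sketched numerically. The functional forms and all parameter values below are illustrative assumptions, not the fitted six-parameter model from the abstract:

```python
def stage1(ipsi, contra, m=1.3, s=1.0):
    """Stage 1: monocular contrast response, divisively suppressed by
    the other eye's contrast BEFORE the eyes are combined."""
    return ipsi ** m / (s + ipsi + contra)

def binocular_response(left, right, p=8.0, q=6.5, z=0.01):
    """Stage 2: sum the two suppressed monocular signals, then apply a
    second nonlinearity. Parameter values are illustrative only."""
    b = stage1(left, right) + stage1(right, left)
    return b ** p / (z + b ** q)

def threshold(stimulus, criterion=0.1, lo=1e-4, hi=1.0):
    """Contrast at which the response reaches a fixed criterion, found
    by bisection. 'stimulus' maps a scalar contrast to (left, right)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        left, right = stimulus(mid)
        if binocular_response(left, right) < criterion:
            lo = mid
        else:
            hi = mid
    return hi
```

With these assumptions, the monocular detection threshold (`threshold(lambda c: (c, 0.0))`) exceeds the binocular one (`threshold(lambda c: (c, c))`) by more than a factor of √2, matching the summation result reported above.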
Abstract:
The ability to distinguish one visual stimulus from another slightly different one depends on the variability of their internal representations. In a recent paper on human visual-contrast discrimination, Kontsevich et al (2002 Vision Research 42 1771 - 1784) re-considered the long-standing question of whether the internal noise that limits discrimination is fixed (contrast-invariant) or variable (contrast-dependent). They tested discrimination performance for 3 cycles deg⁻¹ gratings over a wide range of incremental contrast levels at three masking contrasts, and showed that a simple model with an expansive response function and response-dependent noise could fit the data very well. Their conclusion - that noise in visual-discrimination tasks increases markedly with contrast - has profound implications for our understanding and modelling of vision. Here, however, we re-analyse their data, and report that a standard gain-control model with a compressive response function and fixed additive noise can also fit the data remarkably well. Thus these experimental data do not allow us to decide between the two models. The question remains open. [Supported by EPSRC grant GR/S74515/01]
Abstract:
It is very well known that contrast detection thresholds improve with the size of a grating-type stimulus, but it is thought that the benefit of size is abolished for contrast discriminations well above threshold (e.g., Legge, G. E., & Foley, J. M. (1980)). Here we challenge the generality of this view. We performed contrast detection and contrast discrimination for circular patches of sine wave grating as a function of stimulus size. We confirm that sensitivity improves with approximately the fourth-root of stimulus area at detection threshold (a log-log slope of -0.25) but find individual differences (IDs) for the suprathreshold discrimination task. For several observers, performance was largely unaffected by area, but for others performance first improved (by as much as a log-log slope of -0.5) and then reached a plateau. We replicated these different results several times on the same observers. All of these results were described in the context of a recent gain control model of area summation [Meese, T. S. (2004)], extended to accommodate the multiple stimulus sizes used here. In this model, (i) excitation increased with the fourth-root of stimulus area for all observers, and (ii) IDs in the discrimination data were described by IDs in the relation between suppression and area. This means that empirical summation in the contrast discrimination task can be attributed to growth in suppression with stimulus size that does not keep pace with the growth in excitation. © 2005 ARVO.
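The fourth-root regularity at detection threshold is compact enough to state directly. A minimal sketch (c0 is an arbitrary unit-area threshold, not a fitted value; the log-log slope helper is ours):

```python
import math

def detection_threshold(area, c0=0.01):
    """Fourth-root area summation at detection threshold: sensitivity
    (1/threshold) grows as area**0.25, so threshold versus area has a
    log-log slope of -0.25. c0 is an illustrative unit-area threshold."""
    return c0 * area ** -0.25

def loglog_slope(f, a1, a2):
    """Slope of log f(area) versus log area between two areas."""
    return (math.log(f(a2)) - math.log(f(a1))) / (math.log(a2) - math.log(a1))
```

In the gain-control account above, this slope reflects excitation growing with the fourth root of area; whether the same benefit survives well above threshold then depends on how suppression grows with area, which is where the individual differences arise.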
Abstract:
Binocular vision is traditionally treated as two processes: the fusion of similar images, and the interocular suppression of dissimilar images (e.g. binocular rivalry). Recent work has demonstrated that interocular suppression is phase-insensitive, whereas binocular summation occurs only when stimuli are in phase. But how do these processes affect our perception of binocular contrast? We measured perceived contrast using a matching paradigm for a wide range of interocular phase offsets (0–180°) and matching contrasts (2–32%). Our results revealed a complex interaction between contrast and interocular phase. At low contrasts, perceived contrast reduced monotonically with increasing phase offset, by up to a factor of 1.6. At higher contrasts the pattern was non-monotonic: perceived contrast was veridical for in-phase and antiphase conditions, and monocular presentation, but increased a little at intermediate phase angles. These findings challenge a recent model in which contrast perception is phase-invariant. The results were predicted by a binocular contrast gain control model. The model involves monocular gain controls with interocular suppression from positive and negative phase channels, followed by summation across eyes and then across space. Importantly, this model—applied to conditions with vertical disparity—has only a single (zero) disparity channel and embodies both fusion and suppression processes within a single framework.
Abstract:
This thesis will report details of two studies conducted within the National Health Service in the UK that examined the association between HRM practices related to training and appraisal with health outcomes within NHS Trusts. Study one represents the organisational analysis of 61 NHS Trusts, and will report that training and appraisal practices were significantly associated with lower patient mortality. Specifically, the research will show significantly lower patient mortality within NHS Trusts that: a) had achieved Investors in People accreditation; b) had a formal strategy document relating to training; c) had tailored training policy documents across occupational groups; d) had integrated training and appraisal practices; e) had a high percentage of staff receiving either an appraisal or updated personal development plan. There was also evidence of an additive effect, where NHS Trusts that displayed more of these characteristics had significantly lower patient mortality. Study one in this thesis will also report significantly lower patient mortality within the NHS Trusts where there was board level representation for the HR function. Study two will report details of a study conducted to examine the potential reasons why HR practices may be related to hospital performance. Details are given of the results of a staff attitudinal survey within five NHS Trusts. This study will show that a range of developmental activity, the favourability of the immediate work environment (in relation to social support and role stressors) and motivational outcomes are important antecedents to citizenship behaviours. Furthermore, the thesis will report that principles of the demand-control model were adopted to examine the relationship between workplace support and role stressors, and that workplace support, influence, and an understanding of role expectations help mitigate the negative effects of work demands upon motivational outcomes.
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We have previously shown how such distortions in perceived edge blur may be explained by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. To test this model further, we measured the sharpening of drifting, periodic patterns over a large range of contrasts, blur widths, and speeds. The results indicate that, while sharpening increased with speed, it was practically invariant with contrast. This contrast invariance cannot be explained by a fixed compressive nonlinearity, since that predicts almost no sharpening at low contrasts. We show by computational modelling of spatiotemporal responses that, if a dynamic contrast gain control precedes the static nonlinear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed.
All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
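The (Q, R) rule in particular is simple to state in code. This sketch covers only the ordering logic (constant lead time, backorders carried as negative stock), not the thesis's cost models or optimisation:

```python
def simulate_qr(demands, Q, R, lead_time, initial_stock):
    """Continuous-review (Q, R) policy: order a fixed quantity Q
    whenever order cover (stock in hand plus on order) equals or
    falls below the re-order level R. Backorders, not lost sales;
    constant lead time. A minimal illustration only."""
    on_hand = initial_stock
    pipeline = []                       # (periods until arrival, quantity)
    orders_placed = 0
    for d in demands:
        pipeline = [(t - 1, q) for t, q in pipeline]       # time passes
        on_hand += sum(q for t, q in pipeline if t <= 0)   # receive arrivals
        pipeline = [(t, q) for t, q in pipeline if t > 0]
        on_hand -= d                    # meet demand; negative = backorders
        cover = on_hand + sum(q for _, q in pipeline)
        while cover <= R:               # re-order rule
            pipeline.append((lead_time, Q))
            cover += Q
            orders_placed += 1
    return on_hand, orders_placed
```

For example, with steady demand of 10 per period over 20 periods, Q = 50, R = 30, a lead time of 2 and initial stock of 60, the rule triggers four replenishment orders and stock cycles without shortage; the thesis's contribution lies in choosing Q and R (and their floating counterparts) to minimise cost under stochastic demand and lead times.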
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT