18 results for EQUILATERAL-TRIANGLE

in Aston University Research Archive


Relevance:

20.00%

Publisher:

Abstract:

The concept of the United Kingdom acting as a bridge between Europe and the United States has been a key element in British foreign policy for six decades. Under the second Blair Premiership it reached both its apogee and its nadir. This paper analyses these developments, focusing on both the transatlantic and European ends. Particular attention is paid to the failure of the Blair government either to establish a secure place for Britain as a co-leader or to make the British people more comfortable in their European skins. This failure occurred at a period when the EU was characterised by leadership transition and confusion. New leaderships will emerge in the EU over the next two years, but it seems unlikely that Britain, characterised by a continuing disconnect between a Euro-sceptic public discourse and deep involvement at a governmental level, will develop a European policy narrative that is regarded as convincing at either the EU or domestic level. This weakness is compounded by a failure to develop new thinking about the rise of new powers such as China and India.

Relevance:

10.00%

Publisher:

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem, as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
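As an illustration of how such worst-case bounds behave, the classical PAC upper bound of Blumer et al. can be computed directly. The sketch below (parameter values are illustrative, not drawn from the paper) shows how the prescribed sample size grows with the VC dimension and with 1/ε:

```python
import math

def pac_sample_bound(vc_dim, epsilon, delta):
    # Classical PAC sample-size upper bound (Blumer et al. form):
    #   m >= (4/eps) * log2(2/delta) + (8*d/eps) * log2(13/eps).
    # Illustrative only: the abstract's point is that such worst-case
    # bounds can be far from the algorithm-dependent optimum.
    return math.ceil((4.0 / epsilon) * math.log2(2.0 / delta)
                     + (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon))

# The bound grows roughly linearly in the VC dimension and in 1/epsilon.
m_d1 = pac_sample_bound(1, 0.1, 0.05)
m_d10 = pac_sample_bound(10, 0.1, 0.05)
```

Even for VC dimension 1 this bound prescribes hundreds of examples at these settings, which is exactly the gap between worst-case bounds and exact sample-size requirements that the paper addresses.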

Relevance:

10.00%

Publisher:

Abstract:

The thrust of this report concerns spline theory and some of the background to spline theory, and follows the development in Wahba (1991). We also review methods for determining hyper-parameters, such as the smoothing parameter, by Generalised Cross Validation. Splines have an advantage over Gaussian Process based procedures in that we can readily impose atmospherically sensible smoothness constraints and maintain computational efficiency. Vector splines enable us to penalise gradients of vorticity and divergence in wind fields. Two similar techniques are summarised, and improvements based on robust error functions and restricted numbers of basis functions are given. A final, brief discussion of the application of vector splines to the problem of scatterometer data assimilation highlights the problems of ambiguous solutions.
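To make the Generalised Cross Validation idea concrete, the sketch below scores a simple ridge-type linear smoother (standing in for the vector splines discussed above; the basis, data, and λ grid are all illustrative assumptions):

```python
import numpy as np

def gcv_score(y, A):
    # GCV(lambda) = n * ||(I - A) y||^2 / tr(I - A)^2 for a linear
    # smoother y_hat = A @ y, where A depends on the smoothing parameter.
    n = len(y)
    resid = y - A @ y
    return n * np.sum(resid ** 2) / np.trace(np.eye(n) - A) ** 2

def ridge_smoother(X, lam):
    # Hat matrix of a ridge-penalised fit: A = X (X'X + lam*I)^(-1) X'
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
X = np.vander(x, 8, increasing=True)     # polynomial basis (illustrative)
lams = [1e-6, 1e-3, 1.0, 1e3]
scores = [gcv_score(y, ridge_smoother(X, lam)) for lam in lams]
best_lam = lams[int(np.argmin(scores))]  # smoothing parameter chosen by GCV
```

The same scoring function applies to any linear smoother, which is why GCV transfers directly from ridge-type fits to spline smoothing.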

Relevance:

10.00%

Publisher:

Abstract:

Original Paper European Journal of Information Systems (2001) 10, 135–146; doi:10.1057/palgrave.ejis.3000394 Organisational learning—a critical systems thinking discipline P Panagiotidis1,3 and J S Edwards2,4 1Deloitte and Touche, Athens, Greece 2Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK Correspondence: Dr J S Edwards, Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK. E-mail: j.s.edwards@aston.ac.uk 3Petros Panagiotidis is Manager responsible for the Process and Systems Integrity Services of Deloitte and Touche in Athens, Greece. He has a BSc in Business Administration and an MSc in Management Information Systems from Western International University, Phoenix, Arizona, USA; an MSc in Business Systems Analysis and Design from City University, London, UK; and a PhD degree from Aston University, Birmingham, UK. His doctorate was in Business Systems Analysis and Design. His principal interests now are in the ERP/DSS field, where he serves as project leader and project risk management leader in the implementation of SAP and JD Edwards/Cognos at various major clients in the telecommunications and manufacturing sectors. In addition, he is responsible for the development and application of knowledge management systems and activity-based costing systems. 4John S Edwards is Senior Lecturer in Operational Research and Systems at Aston Business School, Birmingham, UK. He holds MA and PhD degrees (in mathematics and operational research respectively) from Cambridge University. His principal research interests are in knowledge management and decision support, especially methods and processes for system development. He has written more than 30 research papers on these topics, and two books, Building Knowledge-based Systems and Decision Making with Computers, both published by Pitman.
Current research work includes the effect of scale of operations on knowledge management, interfacing expert systems with simulation models, process modelling in law and legal services, and a study of the use of artificial intelligence techniques in management accounting. Abstract: This paper deals with the application of critical systems thinking in the domain of organisational learning and knowledge management. Its viewpoint is that deep organisational learning only takes place when the business systems' stakeholders reflect on their actions and thus inquire about their purpose(s) in relation to the business system and the other stakeholders they perceive to exist. This is done by reflecting both on the sources of motivation and/or deception that are contained in their purpose, and also on the sources of collective motivation and/or deception that are contained in the business system's purpose. The development of an organisational information system that captures, manages and institutionalises meaningful information—a knowledge management system—cannot be separated from organisational learning practices, since it should be the result of these very practices. Although Senge's five disciplines provide a useful starting-point in looking at organisational learning, we argue for a critical systems approach, instead of an uncritical Systems Dynamics one that concentrates only on the organisational learning practices. We proceed to outline a methodology called Business Systems Purpose Analysis (BSPA) that offers a participatory structure for team and organisational learning, upon which the stakeholders can take legitimate action that is based on the force of the better argument. In addition, the organisational learning process in BSPA leads to the development of an intrinsically motivated organisational information system that allows for the institutionalisation of the learning process itself in the form of an organisational knowledge management system.
This could be a specific application, or something as wide-ranging as an Enterprise Resource Planning (ERP) implementation. Examples of the use of BSPA in two ERP implementations are presented.

Relevance:

10.00%

Publisher:

Abstract:

We used magnetoencephalography (MEG) to examine the nature of oscillatory brain rhythms when passively viewing both illusory and real visual contours. Three stimuli were employed: a Kanizsa triangle; a Kanizsa triangle with a real triangular contour superimposed; and a control figure in which the corner elements used to form the Kanizsa triangle were rotated to negate the formation of illusory contours. The MEG data were analysed using synthetic aperture magnetometry (SAM) to enable the spatial localisation of task-related oscillatory power changes within specific frequency bands, and the time-course of activity within given locations-of-interest was determined by calculating time-frequency plots using a Morlet wavelet transform. In contrast to earlier studies, we did not find increases in gamma activity (> 30 Hz) to illusory shapes, but instead a decrease in 10–30 Hz activity approximately 200 ms after stimulus presentation. The reduction in oscillatory activity was primarily evident within extrastriate areas, including the lateral occipital complex (LOC). Importantly, this same pattern of results was evident for each stimulus type. Our results further highlight the importance of the LOC and a network of posterior brain regions in processing visual contours, be they illusory or real in nature. The similarity of the results for both real and illusory contours, however, leads us to conclude that the broadband (< 30 Hz) decrease in power we observed is more likely to reflect general changes in visual attention than neural computations specific to processing visual contours.
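The Morlet-wavelet time-frequency computation mentioned above can be sketched directly. The implementation below is a generic illustration (the sampling rate, cycle count, and test signal are assumptions, not the study's parameters):

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    # Time-frequency power via convolution with complex Morlet wavelets,
    # as used to build time-frequency plots from virtual-sensor traces.
    # (Illustrative sketch; parameters are assumptions, not the study's.)
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)   # wavelet temporal width
        tw = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
        wavelet = (np.exp(2j * np.pi * f * tw)
                   * np.exp(-tw ** 2 / (2 * sigma_t ** 2)))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

fs = 500.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 20.0 * t)             # toy 20 Hz "beta" oscillation
freqs = np.array([10.0, 20.0, 40.0])
p = morlet_power(sig, fs, freqs)
# Power concentrates in the 20 Hz row, matching the signal's rhythm.
```

A band-limited power decrease of the kind reported would appear in such a plot as a drop in the 10-30 Hz rows after stimulus onset.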

Relevance:

10.00%

Publisher:

Abstract:

Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (1st derivative) filter, or as zero-crossings in the 2nd derivative (ZCs). We tested those ideas using a stimulus that has no local peaks of gradient and no ZCs, at any scale. The stimulus profile is analogous to the Mach ramp, but it is the luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux; the luminance profile is a blurred triangle-wave. For all image-blurs tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These Mach edges correspond to peaks and troughs in the 3rd derivative. Thus Mach edges are inconsistent with many standard edge-detection schemes, but are nicely predicted by a recent model that finds edge points with a 2-stage sequence of 1st then 2nd derivative operators, each followed by a half-wave rectifier.

Relevance:

10.00%

Publisher:

Abstract:

Feature detection is a crucial stage of visual processing. In previous feature-marking experiments we found that peaks in the 3rd derivative of the luminance profile can signify edges where there are no 1st derivative peaks or 2nd derivative zero-crossings (Wallis and George). These 'Mach edges' (the edges of Mach bands) were nicely predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test of the model, we now use a new class of stimuli, formed by adding a linear luminance ramp to the blurred triangle waves used previously. The ramp has no effect on the second or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing only one edge as the added ramp gradient increases. In experiment 1, subjects judged whether one or two edges were visible on each trial. In experiment 2, subjects used a cursor to mark perceived edges and bars. The position and polarity of the marked edges were close to model predictions. Both experiments produced the predicted shift from two to one Mach edge, but the shift was less complete than predicted. We conclude that the model is a useful predictor of edge perception, but needs some modification.

Relevance:

10.00%

Publisher:

Abstract:

Edge detection is crucial in visual processing. Previous computational and psychophysical models have often used peaks in the gradient or zero-crossings in the 2nd derivative to signal edges. We tested these approaches using a stimulus that has no such features. Its luminance profile was a triangle wave, blurred by a rectangular function. Subjects marked the position and polarity of perceived edges. For all blur widths tested, observers marked edges at or near 3rd derivative maxima, even though these were not 1st derivative maxima or 2nd derivative zero-crossings, at any scale. These results are predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test, we added a ramp of variable slope to the blurred triangle-wave luminance profile. The ramp has no effect on the (linear) 2nd or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing one edge as the ramp gradient increases. Results of two experiments confirmed such a shift, thus supporting the new model. [Supported by the Engineering and Physical Sciences Research Council].
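A minimal numerical sketch of the key prediction, assuming a rectified-derivative reading of the nonlinear model (not the authors' exact implementation): adding a linear ramp leaves the 2nd derivative unchanged, but alters the half-wave-rectified 1st-derivative signal that feeds the model's later stages.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
tri = np.abs(((4.0 * x) % 2) - 1)     # triangle wave, slopes +/- 4
kern = np.exp(-0.5 * (np.arange(-100, 101) * dx / 0.01) ** 2)
kern /= kern.sum()
blurred = np.convolve(tri, kern, mode="same")   # blurred triangle wave

def on_channel(lum):
    # 1st-derivative operator followed by a half-wave rectifier
    # ("on" pathway of the assumed nonlinear scheme).
    return np.maximum(np.gradient(lum, x), 0.0)

ramp = 5.0 * x                        # added ramp, steeper than the wave's limbs
d2_flat = np.gradient(np.gradient(blurred, x), x)
d2_ramp = np.gradient(np.gradient(blurred + ramp, x), x)

s_flat = on_channel(blurred)          # rectifier clips the falling limbs to zero
s_ramp = on_channel(blurred + ramp)   # gradient now positive everywhere
core = slice(150, -150)               # ignore convolution edge effects
```

The linear 2nd derivative is identical with or without the ramp, while the rectified signal changes qualitatively, which is why the nonlinear model (unlike any linear detector) predicts the shift in perceived edges.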

Relevance:

10.00%

Publisher:

Abstract:

Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (first-derivative) filter, or as zero-crossings (ZCs) in the second-derivative. A variety of multi-scale models are based on this idea. We tested this approach by devising a stimulus that has no local peaks of gradient and no ZCs, at any scale. Our stimulus profile is analogous to the classic Mach-band stimulus, but it is the local luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux. The luminance profile is a smoothed triangle wave and is obtained by integrating the gradient profile. Subjects used a cursor to mark the position and polarity of perceived edges. For all the ramp-widths tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These new Mach edges correspond to peaks and troughs in the third-derivative. They are analogous to Mach bands - light and dark bars are seen where there are no luminance peaks but there are peaks in the second derivative. Here, peaks in the third derivative were seen as light-to-dark edges, troughs as dark-to-light edges. Thus Mach edges are inconsistent with many standard edge detectors, but are nicely predicted by a new model that uses a (nonlinear) third-derivative operator to find edge points.

Relevance:

10.00%

Publisher:

Abstract:

The Gestalt theorists of the early twentieth century proposed a psychological primacy for circles, squares and triangles over other shapes. They described them as 'good' shapes and the Gestalt premise has been widely accepted. Rosch (1973), for example, suggested that shape categories formed around these 'natural' prototypes irrespective of the paucity of shape terms in a language. Rosch found that speakers of a language lacking terms for any geometric shape nevertheless learnt paired-associates to these 'good' shapes more easily than to asymmetric variants. We question these empirical data in the light of the accumulation of recent evidence in other perceptual domains that language affects categorization. A cross-cultural investigation sought to replicate Rosch's findings with the Himba of Northern Namibia who also have no terms in their language for the supposedly basic shapes of circle, square and triangle. A replication of Rosch (1973) found no advantage for these 'good' shapes in the organization of categories. It was concluded that there is no necessary salience for circles, squares and triangles. Indeed, we argue for the opposite because these shapes are rare in nature. The general absence of straight lines and symmetry in the perceptual environment should rather make circles, squares and triangles unusual and, therefore, less likely to be used as prototypes in categorization tasks. We place shape as one of the types of perceptual input (in philosophical terms, 'vague') that is readily susceptible to effects of language variation.

Relevance:

10.00%

Publisher:

Abstract:

We examined the effects on extinction of grouping by collinearity of edges and grouping by alignment of internal axes of shapes, in a patient (GK) with simultanagnosia following bilateral parietal brain damage. GK’s visual extinction was reduced when items (equilateral triangles and angles) could be grouped by base alignment (i.e., collinearity) or by axis alignment, relative to a condition in which items were ungrouped. These grouping effects disappeared when inter-item spacing was increased, though factors such as display symmetry remained constant. Overall, the results suggest that, under some conditions, grouping by alignment of axes of symmetry can have an equal beneficial effect on visual extinction as edge-based grouping; thus, in the extinguished field, there is derivation of axis-based representations from the contours present.

Relevance:

10.00%

Publisher:

Abstract:

The literature relating to haze formation, methods of separation, coalescence mechanisms, and models by which droplets <100 μm are collected, coalesced and transferred has been reviewed, with particular reference to particulate bed coalescers. The separation of secondary oil-water dispersions was studied experimentally using packed beds of monosized glass ballotini particles. The variables investigated were superficial velocity, bed depth, particle size, and the phase ratio and drop size distribution of the inlet secondary dispersion. A modified pump loop was used to generate secondary dispersions of toluene or Clairsol 350 in water with phase ratios between 0.5-6.0 v/v%. Inlet drop size distributions were determined using a Malvern Particle Size Analyser; effluent, coalesced droplets were sized by photography. Single phase flow pressure drop data were correlated by means of a Carman-Kozeny type equation. Correlations were obtained relating single and two phase pressure drops, as

(ΔP₂/μ_c)/(ΔP₁/μ_d) = k_p U^a L^b d_c^c d_p^d C_in^e

A flow equation was derived to correlate the two phase pressure drop data as

ΔP₂/(ρ_c U²) = 8.64×10⁷ (d_c/D)^−0.27 (L/D)^0.71 (d_p/D)^−0.17 N_Re^1.5 e₁^−0.14 C_in^0.26

In a comparison between functions to characterise the inlet drop size distributions, a modification of the Weibull function provided the best fit of the experimental data. The general mean drop diameter was correlated by

d_qp^(q−p) = d_fr^(q−p) · α^((q−p)/β) · Γ(((q−3)/β) + 1) / Γ(((p−3)/β) + 1)

The measured and predicted mean inlet drop diameters agreed within ±15%. Secondary dispersion separation depends largely upon drop capture within a bed. A theoretical analysis of drop capture mechanisms in this work indicated that indirect interception and London-van der Waals mechanisms predominate.
Mathematical models of dispersed phase concentration in the bed were developed by considering drop motion to be analogous to molecular diffusion. The number of possible channels in a bed was predicted from a model in which the pores comprised randomly-interconnected passageways between adjacent packing elements and axial flow occurred in cylinders on an equilateral triangular pitch. An expression was derived for the length of service channels in a queuing system, leading to the prediction of filter coefficients. The insight provided into the mechanisms of drop collection and travel, and the correlations of operating parameters, should assist the design of industrial particulate bed coalescers.
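The two-phase pressure-drop correlation can be evaluated directly. The helper below transcribes the exponents reported in the abstract; all input values are purely illustrative, not experimental data:

```python
def two_phase_dp_group(dc, dp, L, D, n_re, e1, c_in):
    # Dimensionless group dP2/(rho_c * U^2), with the reported exponents:
    #   8.64e7 * (dc/D)^-0.27 * (L/D)^0.71 * (dp/D)^-0.17
    #          * N_Re^1.5 * e1^-0.14 * C_in^0.26
    # Exponents transcribed from the abstract; inputs are illustrative.
    return (8.64e7 * (dc / D) ** -0.27 * (L / D) ** 0.71 * (dp / D) ** -0.17
            * n_re ** 1.5 * e1 ** -0.14 * c_in ** 0.26)

shallow = two_phase_dp_group(dc=1e-3, dp=2e-4, L=0.05, D=0.05, n_re=5.0,
                             e1=0.4, c_in=0.01)
deep = two_phase_dp_group(dc=1e-3, dp=2e-4, L=0.10, D=0.05, n_re=5.0,
                          e1=0.4, c_in=0.01)
# The positive exponent on L/D means deeper beds give a larger
# two-phase pressure-drop group, all else equal.
```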

Relevance:

10.00%

Publisher:

Abstract:

A mathematical model is developed for the general pneumatic tyre. The model will permit the investigation of tyre deformations produced by arbitrary external loading, and will enable estimates to be made of the distributions of applied and reactive forces. The principle of Finite Elements is used to idealise the composite tyre structure, each element consisting of a triangle of double curvature with varying thickness. Large deflections of the structure are accommodated by the use of an iterative sequence of small incremental steps, each of which obeys the laws of linear mechanics. The theoretical results are found to compare favourably with the experimental test data obtained from two different types of tyre construction. However, limitations in the discretisation process have prevented accurate assessments being made of stress distributions in the regions of high stress gradients.
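The incremental scheme can be illustrated with a one-degree-of-freedom stand-in (a hypothetical hardening spring, not the tyre model itself): each small load step is solved with the current tangent stiffness, and many small linear steps track the nonlinear response where a single linear step does not.

```python
def incremental_solve(f_total, n_steps, k_tangent):
    # Large deflections handled as a sequence of small incremental load
    # steps, each of which obeys linear mechanics with the current
    # tangent stiffness (1-DOF illustrative stand-in, not the tyre model).
    u = 0.0
    df = f_total / n_steps
    for _ in range(n_steps):
        u += df / k_tangent(u)
    return u

# Hypothetical hardening spring: F = k0*u + k3*u^3, so k_t = k0 + 3*k3*u^2
k0, k3 = 1.0, 10.0
k_t = lambda u: k0 + 3.0 * k3 * u ** 2

u_one_step = incremental_solve(2.0, 1, k_t)   # single linear step: overshoots
u_many = incremental_solve(2.0, 1000, k_t)    # tracks the nonlinear response
```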

Relevance:

10.00%

Publisher:

Abstract:

Influential models of edge detection have generally supposed that an edge is detected at peaks in the 1st derivative of the luminance profile, or at zero-crossings in the 2nd derivative. However, when presented with blurred triangle-wave images, observers consistently marked edges not at these locations, but at peaks in the 3rd derivative. This new phenomenon, termed ‘Mach edges’, persisted when a luminance ramp was added to the blurred triangle-wave. Modelling of these Mach edge detection data required the addition of a physiologically plausible filter, prior to the 3rd derivative computation. A viable alternative model was examined, on the basis of data obtained with short-duration, high spatial-frequency stimuli. Detection and feature-marking methods were used to examine the perception of Mach bands in an image set that spanned a range of Mach band detectabilities. A scale-space model that computed edge and bar features in parallel provided a better fit to the data than 4 competing models that combined information across scale in a different manner, or computed edge or bar features at a single scale. The perception of luminance bars was examined in 2 experiments. Data for one image-set suggested a simple rule for perception of a small Gaussian bar on a larger inverted Gaussian bar background. In previous research, discriminability (d’) has typically been reported to be a power function of contrast, where the exponent (p) is 2 to 3. However, using bar, grating, and Gaussian edge stimuli, with several methodologies, values of p were obtained that ranged from 1 to 1.7 across 6 experiments. This novel finding was explained by appealing to low stimulus uncertainty, or a near-linear transducer.
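The power-law transducer d' = k·c^p discussed above is conventionally estimated by a log-log fit, since the exponent p is the slope on those axes. The data below are synthetic, constructed only to illustrate recovering exponents in the two reported ranges:

```python
import numpy as np

def fit_exponent(contrasts, dprimes):
    # Fit d' = k * c^p on log-log axes; the slope of the fit is p.
    p, _ = np.polyfit(np.log(contrasts), np.log(dprimes), 1)
    return p

c = np.array([0.02, 0.04, 0.08, 0.16])
d_steep = 5.0e2 * c ** 2.5    # synthetic data with the classic steep exponent
d_shallow = 30.0 * c ** 1.2   # synthetic data with a near-linear exponent
p_steep = fit_exponent(c, d_steep)
p_shallow = fit_exponent(c, d_shallow)
```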

Relevance:

10.00%

Publisher:

Abstract:

How speech is separated perceptually from other speech remains poorly understood. In a series of experiments, perceptual organisation was probed by presenting three-formant (F1+F2+F3) analogues of target sentences dichotically, together with a competitor for F2 (F2C), or for F2+F3, which listeners must reject to optimise recognition. To control for energetic masking, the competitor was always presented in the opposite ear to the corresponding target formant(s). Sine-wave speech was used initially, and different versions of F2C were derived from F2 using separate manipulations of its amplitude and frequency contours. F2Cs with time-varying frequency contours were highly effective competitors, whatever their amplitude characteristics, whereas constant-frequency F2Cs were ineffective. Subsequent studies used synthetic-formant speech to explore the effects of manipulating the rate and depth of formant-frequency change in the competitor. Competitor efficacy was not tuned to the rate of formant-frequency variation in the target sentences; rather, the reduction in intelligibility increased with competitor rate relative to the rate for the target sentences. Therefore, differences in speech rate may not be a useful cue for separating the speech of concurrent talkers. Effects of competitors whose depth of formant-frequency variation was scaled by a range of factors were explored using competitors derived either by inverting the frequency contour of F2 about its geometric mean (plausibly speech-like pattern) or by using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Competitor efficacy depended on the overall depth of frequency variation, not depth relative to that for the other formants. Furthermore, the triangle-wave competitors were as effective as their more speech-like counterparts. 
Overall, the results suggest that formant-frequency variation is critical for the across-frequency grouping of formants but that this grouping does not depend on speech-specific constraints.
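The construction of a rate- and depth-matched triangle-wave competitor can be sketched as follows, working on a log-frequency axis. All contour parameters here are illustrative assumptions, not the study's values:

```python
import numpy as np

t = np.linspace(0.0, 1.5, 1501)                        # time (s), illustrative
f2 = 1500.0 * 2.0 ** (0.35 * np.sin(4.0 * np.pi * t))  # toy F2 contour (Hz), 2 Hz rate

gm = np.exp(np.mean(np.log(f2)))       # geometric mean frequency
f2c_inverted = gm ** 2 / f2            # contour inverted about the geometric mean

depth = np.log2(f2).max() - np.log2(f2).min()   # depth of variation, in octaves
tri = 2.0 * np.abs(((4.0 * t) % 2) - 1) - 1     # triangle wave in [-1, 1], 2 Hz
f2c_triangle = gm * 2.0 ** (tri * depth / 2.0)  # matched rate and depth, arbitrary shape
```

Because the triangle-wave contour matches the inverted contour in average rate and depth while discarding its speech-like shape, equal competitor efficacy for the two isolates overall depth of frequency variation as the operative factor.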