85 results for WEC deployment
Abstract:
Perceptual grouping is a pre-attentive process which serves to group local elements into global wholes, based on shared properties. One effect of perceptual grouping is to distort the ability to estimate the distance between two elements. In this study, biases in distance estimates, caused by four types of perceptual grouping, were measured across three tasks: a perception, a drawing and a construction task, in both typical development (TD: Experiment 1) and in individuals with Williams syndrome (WS: Experiment 2). In Experiment 1, perceptual grouping distorted distance estimates across all three tasks. Interestingly, the effect of grouping by luminance was in the opposite direction to the effects of the remaining grouping types. We relate this to differences in the ability to inhibit perceptual grouping effects on distance estimates. Additive distorting influences were also observed in the drawing and the construction task, which are explained in terms of the points of reference employed in each task. Experiment 2 demonstrated that the above distortion effects are also observed in WS. Given the known deficit in the ability to use perceptual grouping in WS, this suggests a dissociation between the pre-attentive influence and the attentive deployment of perceptual grouping in WS. The typical distortion in relation to drawing and construction points towards the presence of some typical location coding strategies in WS. The performance of the WS group differed from that of the TD participants on two counts. First, the pattern of overall distance estimates (averaged across interior and exterior distances) across the four perceptual grouping types differed between groups. Second, the distorting influence of perceptual grouping was strongest for grouping by shape similarity in WS, which contrasts with the strength in grouping by proximity observed in the TD participants. (c) 2008 Elsevier Inc. All rights reserved.
Abstract:
The feature model of immediate memory (Nairne, 1990) is applied to an experiment testing individual differences in phonological confusions amongst a group (N=100) of participants performing a verbal memory test. By simulating the performance of an equivalent number of “pseudo-participants”, the model fits both the mean performance and the variability within the group. Experimental data show that high-performing individuals are significantly more likely to demonstrate phonological confusions than low-performing individuals, and this is also true of the model, despite the model’s lack of either an explicit phonological store or a performance-linked strategy shift away from phonological storage. It is concluded that a dedicated phonological store is not necessary to explain the basic phonological confusion effect, and that the reduction in such an effect can also be explained without requiring a change in encoding or rehearsal strategy or the deployment of a different storage buffer.
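As an illustrative aside, the pseudo-participant approach can be sketched in a few lines. The toy simulation below is not Nairne's feature model itself, only a minimal stand-in assuming items are feature vectors degraded by overwriting, with per-participant overwriting rates supplying the group variability; all parameter values are invented.

```python
import random

N_FEATURES = 20
N_PSEUDO_PARTICIPANTS = 100  # matches the N=100 group in the abstract

def make_similar_list(n_items: int = 6) -> list:
    """A list of phonologically similar items: vectors sharing most features."""
    base = [random.randint(0, 1) for _ in range(N_FEATURES)]
    items = []
    for _ in range(n_items):
        item = base.copy()
        for i in random.sample(range(N_FEATURES), 3):  # similar items differ in few features
            item[i] ^= 1
        items.append(item)
    return items

def simulate_participant(overwrite_p: float) -> float:
    """One pseudo-participant: traces degrade by feature overwriting, recall by best match."""
    items = make_similar_list()
    correct = 0
    for target in items:
        # Each feature survives overwriting with probability (1 - overwrite_p).
        trace = [f if random.random() > overwrite_p else None for f in target]
        def match(candidate):
            return sum(t == c for t, c in zip(trace, candidate) if t is not None)
        recalled = max(items, key=match)
        correct += recalled is target  # a confusion arises when a similar item wins
    return correct / len(items)

# Per-participant overwriting rates generate group-level variability
# without any dedicated phonological store.
scores = [simulate_participant(random.uniform(0.2, 0.7))
          for _ in range(N_PSEUDO_PARTICIPANTS)]
print(f"mean accuracy: {sum(scores) / len(scores):.2f}")
```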
Abstract:
The major technical objectives of the RC-NSPES are to provide a framework for the concurrent operation of reactive and pro-active security functions, delivering efficient and optimised intrusion detection schemes as well as enhanced and highly correlated rule sets for more effective alert management and root-cause analysis. The design and implementation of the RC-NSPES solution includes a number of innovative features in terms of real-time programmable embedded hardware (FPGA) deployment as well as in the integrated management station. These have been devised so as to deliver enhanced detection of attacks and contextualised alerts against threats that can arise from both the network-layer and the application-layer protocols. The resulting architecture represents an efficient and effective framework for the future deployment of network security systems.
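A minimal sketch of the alert-correlation idea, assuming invented alert fields and a crude source-IP grouping key; this is not the RC-NSPES implementation, only an illustration of how correlated rule sets can fold network- and application-layer alerts into a single root-cause incident.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    src_ip: str
    layer: str       # "network" or "application"
    signature: str   # e.g. "port scan", "SQL injection"

def correlate(alerts: list) -> dict:
    """Group alerts by source IP: a crude stand-in for a root-cause key."""
    incidents = defaultdict(list)
    for a in alerts:
        incidents[a.src_ip].append(a)
    return incidents

alerts = [
    Alert("10.0.0.5", "network", "port scan"),
    Alert("10.0.0.5", "application", "SQL injection"),
    Alert("10.0.0.9", "network", "SYN flood"),
]
for src, related in correlate(alerts).items():
    layers = {a.layer for a in related}
    print(f"incident from {src}: {len(related)} alerts spanning {layers}")
```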
Abstract:
This paper describes a proposed new approach to knowledge processing in the Network Intrusion Detection Systems (NIDS) application domain, focused on a topic-map-enabled representation of the features of the threat pattern space, together with knowledge of the situated efficacy of alternative candidate algorithms for pattern recognition within the NIDS domain. An integrative knowledge representation framework for virtualisation, data intelligence and learning-loop architecting in the NIDS domain is thus described, together with specific aspects of its deployment.
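The topic-map representation can be pictured with a small sketch. The structure below is hypothetical (all names, scores and the `best_algorithm` helper are invented), but it illustrates the described mapping from threat patterns to candidate recognition algorithms annotated with their situated efficacy.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    kind: str  # "threat_pattern" or "algorithm"

@dataclass
class TopicMap:
    topics: dict = field(default_factory=dict)
    # (pattern, algorithm) -> efficacy score learned from deployment feedback
    efficacy: dict = field(default_factory=dict)

    def add(self, topic: Topic) -> None:
        self.topics[topic.name] = topic

    def associate(self, pattern: str, algorithm: str, score: float) -> None:
        self.efficacy[(pattern, algorithm)] = score

    def best_algorithm(self, pattern: str) -> str:
        """Pick the algorithm with the highest recorded efficacy for a pattern."""
        candidates = {alg: s for (p, alg), s in self.efficacy.items() if p == pattern}
        return max(candidates, key=candidates.get)

tm = TopicMap()
tm.add(Topic("port_scan", "threat_pattern"))
tm.add(Topic("signature_match", "algorithm"))
tm.add(Topic("anomaly_detector", "algorithm"))
tm.associate("port_scan", "signature_match", 0.72)
tm.associate("port_scan", "anomaly_detector", 0.91)
print(tm.best_algorithm("port_scan"))  # -> anomaly_detector
```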
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, relying instead on what amounts to its essence or 'DNA'. Current approaches show insufficient deployment of three types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient, and that can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Foci of Interest (FoI) in an image or cross-media artefact. Our proposed framework therefore aims to deliver selective composite fingerprinting that remains responsive to the requirements for protection of the whole or parts of an image which may be of particular interest and especially vulnerable to attempts at rights violation. This is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market intelligence, such as the profiling of customers' social network interests, which we can deploy as a crucial component of our Fingerprinting Collateral Knowledge. This knowledge is used in selecting the special FoIs within an image or other media content that have to be selectively and collaterally protected.
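To make the composite-fingerprint idea concrete, here is a minimal sketch in which a cryptographic hash stands in for a real perceptual fingerprint and the FoI byte ranges stand in for regions selected via collateral knowledge; none of this is the authors' actual scheme.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint: a truncated SHA-256 digest of the region."""
    return hashlib.sha256(data).hexdigest()[:16]

def composite_fingerprint(image: bytes, fois: dict) -> dict:
    """fois maps an FoI label to a (start, end) byte range of the image."""
    fp = {"whole": fingerprint(image)}          # whole-image protection
    for label, (start, end) in fois.items():    # elemental protection per FoI
        fp[label] = fingerprint(image[start:end])
    return fp

image = b"...raw image bytes..."
fois = {"face_region": (0, 8), "logo_region": (8, 16)}  # hypothetical regions
print(composite_fingerprint(image, fois))
```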
Abstract:
Owing to their popularity, dense deployments of wireless local area networks (WLANs) are becoming a common feature of many cities around the world. However, with only a limited number of channels available, the problem of increased interference can severely degrade the performance of WLANs if an effective channel assignment scheme is not employed. In an earlier work, we proposed an improved asynchronous distributed and dynamic channel assignment scheme that (1) is simple to implement, (2) does not require any knowledge of the throughput function, and (3) allows asynchronous channel switching by each access point (AP). In this paper, we present an extensive performance evaluation of the proposed scheme in practical scenarios found in densely populated WLAN deployments. Specifically, we investigate the convergence behaviour of the scheme and how its performance gains vary with the number of available channels and with deployment density. We also prove that our scheme is guaranteed to converge in a single iteration when the number of channels is greater than the number of neighbouring APs.
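The scheme's core decision rule lends itself to a short sketch. The code below assumes a neighbour-count interference proxy and an invented three-AP topology; it is not the authors' exact algorithm, but it illustrates asynchronous best-channel switching and the single-pass convergence when channels outnumber each AP's neighbours.

```python
# Invented topology: three mutually neighbouring APs, three channels.
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
channels = [1, 6, 11]
assignment = {ap: 1 for ap in neighbours}  # everyone starts on the same congested channel

def interference(ap: str, channel: int) -> int:
    """Interference proxy: neighbouring APs currently sharing this channel."""
    return sum(1 for n in neighbours[ap] if assignment[n] == channel)

def update(ap: str) -> None:
    """Asynchronous step: the AP switches to its current least-interfered channel.
    No throughput model is needed, only the local interference measurement."""
    assignment[ap] = min(channels, key=lambda c: interference(ap, c))

# One pass over the APs: with more channels than neighbours per AP,
# a single iteration already leaves every AP free of co-channel neighbours.
for ap in ["A", "B", "C"]:
    update(ap)
print(assignment)  # e.g. {'A': 6, 'B': 11, 'C': 1}: no two neighbours share a channel
```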
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployment in many cities around the world. The increased interference among different WLANs severely degrades the achievable throughput. This problem has been further exacerbated by the limited number of frequency channels available. An improved distributed and dynamic channel assignment scheme that is simple to implement and does not depend on knowledge of the throughput function is proposed in this work. It also allows each access point (AP) to asynchronously switch to the new best channel. Simulation results show that our proposed scheme converges much faster than similar previously reported work, with reductions in convergence time and channel switches of as much as 77.3% and 52.3% respectively. When it is employed in dynamic environments, the throughput improves by up to 12.7%.
Abstract:
The deployment of Quality of Service (QoS) techniques involves careful analysis of areas including business requirements, corporate strategy, and the technical implementation process, which can lead to conflict or contradiction between the goals of the various user groups involved in policy definition. In addition, long-term change management presents a challenge, as these implementations typically require a high skill set and experience level, exposing organisations to effects such as “hyperthymestria” [1] and “The Seven Sins of Memory” defined by Schacter and discussed further within this paper. It is proposed that, given the information embedded within the packets of IP traffic, an opportunity exists to augment traffic management with a machine-learning, agent-based mechanism. This paper describes the process by which current policies are defined and the research required to support the development of an application that enables adaptive intelligent Quality of Service controls to augment or replace the policy-based mechanisms currently in use.
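As a hedged illustration of the proposed machine-learning agent, the sketch below classifies flows from invented packet features into invented QoS classes using a deliberately simple nearest-example learner; a production agent would learn from live IP traffic and feed its decisions back into policy.

```python
# Features per flow: (destination port, mean packet size in bytes,
# mean inter-arrival time in ms). All values and labels are invented.
training = [
    ((5060, 200, 20), "voice"),       # small, regularly spaced packets
    ((443, 1400, 5), "bulk"),         # large, back-to-back packets
    ((22, 100, 150), "interactive"),  # small, sporadic packets
]

def classify(flow: tuple) -> str:
    """Nearest-example on scale-normalised features: a deliberately simple learner."""
    def distance(a, b):
        return sum(((x - y) / max(abs(x), abs(y), 1)) ** 2 for x, y in zip(a, b))
    best = min(training, key=lambda ex: distance(ex[0], flow))
    return best[1]

print(classify((5061, 180, 25)))  # -> "voice": e.g. mark the flow for priority queuing
```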
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, relying instead on what amounts to its essence or 'DNA'. Current approaches show insufficient deployment of the various types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient, and that can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Zones of Interest (ZoI) in an image or cross-media artefact. The proposed framework aims to deliver selective composite fingerprinting, powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market intelligence, such as the profiling of customers' social network interests, which we can deploy as a crucial component of our fingerprinting collateral knowledge.
Abstract:
Technology-enhanced or Computer Aided Learning (e-learning) can be institutionally integrated and supported by learning management systems or Virtual Learning Environments (VLEs) to offer efficiency gains, effectiveness and scalability of the e-learning paradigm. However, this can only be achieved through the integration of pedagogically intelligent approaches, a lesson preparation tools environment, and a VLE that is well accepted by both students and teachers. This paper critically explores some of the issues relevant to the scalable routinisation of e-learning at the tertiary level, typically first-year university undergraduates, with the teaching of Relational Data Analysis (RDA), as supported by multimedia authoring, as a case study. The paper concludes that blended learning approaches, which balance the deployment of e-learning with other modalities of learning delivery such as instructor-mediated group learning, offer the most flexible and scalable route to e-learning. However, this requires the graceful integration of platforms for multimedia production, distribution and delivery through advanced interactive spaces that provoke learner engagement and promote learning autonomy and group learning, facilitated by a cooperative-creative learning environment that remains open to personal exploration of constructivist-constructionist pathways to learning.
Abstract:
This paper examines aspects of the case against global oil peaking, and in particular sets out to answer a viewpoint that the world can have abundant supplies of oil "for years to come". Arguments supporting the latter view include: past forecasts of oil shortage have proved incorrect, so current predictions should also be discounted; many modellers depend on Hubbert's analysis but this contained fundamental flaws; new oil supply will result from reserves growth and from the wider deployment of advanced extraction technology; and the world contains large resources of unconventional oil that can come on-stream if the production of conventional oil declines. These arguments are examined in turn and shown to be incorrect, or to need setting into a broader context. The paper concludes therefore that such arguments cannot be used to rule out calculations that the resource-limited peak in the world's production of conventional oil will occur in the near term. Moreover, peaking of conventional oil is likely to impact the world's total availability of oil where the latter includes non-conventional oil and oil substitutes. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors as a way to enable easy development, increased richness in functionality, pluggability, and flexibility of deployment. Whilst currently not supporting all WSRP functionalities, open-source portal frameworks could in future use WSRP Consumers to access remote portlets found from a WSRP Producer registry service. This implies that we need a central registry for the remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, registry, and client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for expressing remote portlets is devised and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright (c) 2007 John Wiley & Sons, Ltd.
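The registry side can be sketched as follows. The `RemotePortlet` fields and the in-memory `PortletRegistry` are hypothetical stand-ins, not the UDDI API itself; a real implementation would map these fields onto entries in a UDDI registry as the paper describes.

```python
from dataclasses import dataclass

@dataclass
class RemotePortlet:
    name: str
    producer_url: str     # WSRP Producer endpoint offering this portlet
    modes: list           # e.g. ["view", "edit", "help"]
    window_states: list   # e.g. ["normal", "minimized", "maximized"]
    keywords: list

class PortletRegistry:
    """In-memory stand-in for the UDDI-backed publish/search engine."""
    def __init__(self):
        self._entries = []

    def publish(self, portlet: RemotePortlet) -> None:
        self._entries.append(portlet)

    def search(self, keyword: str) -> list:
        return [p for p in self._entries if keyword in p.keywords]

registry = PortletRegistry()
registry.publish(RemotePortlet(
    name="WeatherPortlet",
    producer_url="http://example.org/wsrp/producer",  # hypothetical endpoint
    modes=["view", "edit"],
    window_states=["normal", "maximized"],
    keywords=["weather", "news"],
))
for hit in registry.search("weather"):
    print(hit.name, hit.producer_url)
```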
Abstract:
The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
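For flavour, a Python sketch (not Jpowder's Java code) of the core visualization task: parsing two-column (2-theta, intensity) powder patterns and overlaying normalised traces for comparison; the .xy file names are hypothetical.

```python
import matplotlib.pyplot as plt

def load_xy(path: str):
    """Parse a whitespace-separated two-column (2-theta, intensity) pattern."""
    two_theta, intensity = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            try:
                x, y = float(parts[0]), float(parts[1])
            except (IndexError, ValueError):
                continue  # skip headers and blank lines
            two_theta.append(x)
            intensity.append(y)
    peak = max(intensity)
    return two_theta, [i / peak for i in intensity]  # normalise to unit maximum

for path in ["sample_a.xy", "sample_b.xy"]:  # hypothetical data files
    x, y = load_xy(path)
    plt.plot(x, y, label=path)
plt.xlabel("2-theta (degrees)")
plt.ylabel("normalised intensity")
plt.legend()
plt.show()
```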
Abstract:
Although extensively studied within the lidar community, the multiple scattering phenomenon has always been considered a rare curiosity by radar meteorologists. Until a few years ago its appearance had only been associated with two- or three-body-scattering features (e.g. hail flares and mirror images) involving highly reflective surfaces. Recent atmospheric research aimed at better understanding of the water cycle and the role played by clouds and precipitation in affecting the Earth's climate has driven the deployment of high frequency radars in space. Examples are the TRMM 13.5 GHz, the CloudSat 94 GHz, the upcoming EarthCARE 94 GHz, and the GPM dual 13-35 GHz radars. These systems are able to detect the vertical distribution of hydrometeors and thus provide crucial feedback for radiation and climate studies. The shift towards higher frequencies increases the sensitivity to hydrometeors, improves the spatial resolution and reduces the size and weight of the radar systems. On the other hand, higher frequency radars are affected by stronger extinction, especially in the presence of large precipitating particles (e.g. raindrops or hail particles), which may eventually drive the signal below the minimum detection threshold. In such circumstances the interpretation of the radar equation via the single scattering approximation may be problematic. Errors will be large when the radiation emitted from the radar, after interacting more than once with the medium, still contributes substantially to the received power. This is the case if the transport mean free path becomes comparable with the instrument footprint (determined by the antenna beamwidth and the platform altitude). This situation resembles what has already been experienced in lidar observations, but with a predominance of wide-angle versus small-angle scattering events. At millimeter wavelengths, hydrometeors diffuse radiation rather isotropically, in contrast to the visible or near-infrared region where scattering is predominantly in the forward direction. A complete understanding of radiation transport modeling and data analysis methods under wide-angle multiple scattering conditions is mandatory for a correct interpretation of echoes observed by space-borne millimeter radars. This paper reviews the status of research in this field. Different numerical techniques currently implemented to account for higher order scattering are reviewed and their weaknesses and strengths highlighted. Examples of simulated radar backscattering profiles are provided, with particular emphasis given to situations in which the multiple scattering contributions become comparable with or overwhelm the single scattering signal. We show evidence of multiple scattering effects from air-borne and from CloudSat observations, i.e. unique signatures which cannot be explained by single scattering theory. Ideas on how to identify and tackle multiple scattering effects are discussed. Finally, perspectives and suggestions for future work are outlined. This work represents a reference guide for studies focused on modeling the radiation transport and on interpreting data from high frequency space-borne radar systems that probe highly opaque scattering media such as thick ice clouds or precipitating clouds.
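The footprint criterion mentioned above can be stated compactly using standard radiative-transfer definitions (the notation below is assumed, not taken from the paper):

```latex
% Multiple scattering becomes non-negligible roughly when the transport
% mean free path is comparable to, or smaller than, the radar footprint.
\[
  l_{t} \;=\; \frac{1}{k_{\mathrm{ext}}\,(1 - g)},
  \qquad
  D \;\approx\; h\,\theta_{3\mathrm{dB}},
  \qquad
  \text{multiple scattering matters when } l_{t} \lesssim D,
\]
% where $k_{\mathrm{ext}}$ is the extinction coefficient, $g$ the scattering
% asymmetry parameter (near zero in the quasi-isotropic millimeter-wave case
% described above), $h$ the platform altitude and $\theta_{3\mathrm{dB}}$ the
% antenna beamwidth.
```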