908 results for Network pattern language
Abstract:
Investigated human visual processing of simple two-colour patterns using a delayed match-to-sample paradigm with positron emission tomography (PET). This study is unique in that the authors specifically designed the visual stimuli to be the same for both pattern and colour recognition, with all patterns being abstract shapes, not easily verbally coded, composed of two-colour combinations. The authors did this to explore the brain regions required for both colour and pattern processing and to separate the areas of activation required for one or the other. 10 right-handed male volunteers aged 18–35 yrs were recruited. The authors found that both tasks activated similar occipital regions, the major difference being more extensive activation in pattern recognition. A right-sided network that involved the inferior parietal lobule, the head of the caudate nucleus, and the pulvinar nucleus of the thalamus was common to both paradigms. Pattern recognition also activated the left temporal pole and right lateral orbital gyrus, whereas colour recognition activated the left fusiform gyrus and several right frontal regions.
Abstract:
This paper extends Appadurai’s notion of “scapes” to delineate what we see as “iScapes”. We contend that iScapes captures the way online technologies shape interactions that invariably filter into offline contexts, giving shape and meaning to human actions and motivations. By drawing on research on high school students’ online activities we examine the flow of iScapes they inhabit in the process of constructing identities and forming social relations.
Abstract:
This chapter describes the nature of human language and introduces theories of language acquisition.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may be badly deteriorated, and the control system rendered unstable. The main challenge of NCS design is to both maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled by use of a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
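The idea of modelling channel-induced delay with a Markov chain can be sketched with a minimal two-state example. The transition probabilities and per-state delays below are invented for illustration, not parameters from the thesis:

```python
# Two-state Markov model of an 802.11 channel: state 0 = lightly loaded,
# state 1 = congested. All numbers below are invented for illustration.
a = 0.1                   # P(lightly loaded -> congested)
b = 0.4                   # P(congested -> lightly loaded)
delay_ms = (2.0, 15.0)    # mean transmission delay in each state

# Closed-form stationary distribution of a two-state chain.
pi = (b / (a + b), a / (a + b))
expected_delay = pi[0] * delay_ms[0] + pi[1] * delay_ms[1]
print(pi, expected_delay)   # approximately (0.8, 0.2) and 4.6 ms
```

An expected delay obtained this way feeds directly into a compensation design: a predictor can run the plant model forward by the estimated delay before the control signal reaches the actuator.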
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
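The Vapnik-Chervonenkis dimension that the book centres on can be probed empirically for very simple hypothesis classes. A toy check (not an example from the book) that intervals on the line shatter two points but not three, so their VC dimension is 2:

```python
# Count the labellings of a point set realisable by indicator functions
# of intervals [a, b]; a class shatters n points when it realises all 2^n.
def interval_dichotomies(points):
    pts = sorted(points)
    labelings = set()
    for a in [pts[0] - 1] + pts:           # candidate left endpoints
        for b in pts + [pts[-1] + 1]:      # candidate right endpoints
            labelings.add(tuple(int(a <= x <= b) for x in points))
    return labelings

print(len(interval_dichotomies([0, 1])))     # 4 = 2^2: two points shattered
print(len(interval_dichotomies([0, 1, 2])))  # 7 < 2^3: three points are not
```

Only contiguous runs of 1s are realisable on three collinear points, which is why the count stops at 7 of the 8 possible labellings.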
Abstract:
Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
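As a concrete illustration of the time-based header statistics mentioned above, the sketch below flags a source contacting many distinct ports within a short window, a classic network-scan signature. The packet fields and thresholds are invented for the example:

```python
# Toy packet records: (timestamp in seconds, source IP, destination port).
packets = [
    (0.1, "10.0.0.5", 80), (0.2, "10.0.0.5", 81), (0.3, "10.0.0.5", 82),
    (0.4, "10.0.0.5", 83), (0.5, "10.0.0.9", 443), (0.6, "10.0.0.9", 443),
]

def scan_suspects(packets, window=1.0, port_threshold=3):
    """Flag sources that touch many distinct ports within the time window."""
    suspects = set()
    for t, src, _ in packets:
        ports = {p for ts, s, p in packets
                 if s == src and t - window < ts <= t}
        if len(ports) >= port_threshold:
            suspects.add(src)
    return suspects

print(scan_suspects(packets))   # {'10.0.0.5'}: four ports in under a second
```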
Abstract:
Almost all metapopulation modelling assumes that connectivity between patches is only a function of distance, and is therefore symmetric. However, connectivity will not depend only on the distance between the patches, as some paths are easy to traverse, while others are difficult. When colonising organisms interact with the heterogeneous landscape between patches, connectivity patterns will invariably be asymmetric. There have been few attempts to theoretically assess the effects of asymmetric connectivity patterns on the dynamics of metapopulations. In this paper, we use the framework of complex networks to investigate whether metapopulation dynamics can be determined by directly analysing the asymmetric connectivity patterns that link the patches. Our analyses focus on “patch occupancy” metapopulation models, which only consider whether a patch is occupied or not. We propose three easily calculated network metrics: the “asymmetry” and “average path strength” of the connectivity pattern, and the “centrality” of each patch. Together, these metrics can be used to predict the length of time a metapopulation is expected to persist, and the relative contribution of each patch to a metapopulation’s viability. Our results clearly demonstrate the negative effect that asymmetry has on metapopulation persistence. Complex network analyses represent a useful new tool for understanding the dynamics of species existing in fragmented landscapes, particularly those existing in large metapopulations.
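To make the flavour of such network metrics concrete, here is a small sketch computing an asymmetry score and a flow-based patch centrality for an invented three-patch connectivity matrix. These formulas are plausible formalisations for illustration, not necessarily the exact definitions proposed in the paper:

```python
import numpy as np

# Illustrative three-patch connectivity matrix: C[i, j] is the strength
# of colonisation from patch i to patch j (values invented for the sketch).
C = np.array([[0.0, 0.8, 0.1],
              [0.2, 0.0, 0.6],
              [0.1, 0.3, 0.0]])

def asymmetry(C):
    """0 for a fully symmetric pattern, growing as flows become one-way."""
    return np.abs(C - C.T).sum() / (C + C.T).sum()

def centrality(C):
    """Simple flow-based patch centrality: total in-flow plus out-flow."""
    return C.sum(axis=0) + C.sum(axis=1)

print(round(float(asymmetry(C)), 3))   # 0.429 for this matrix
print(centrality(C))
```

Patch 1 scores highest on this centrality, matching the intuition that removing the patch most flow passes through hurts persistence the most.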
Abstract:
In the present paper, we introduce BioPatML.NET, an application library for the Microsoft Windows .NET framework [2] that implements the BioPatML pattern definition language and sequence search engine. BioPatML.NET is integrated with the Microsoft Biology Foundation (MBF) application library [3], unifying the parsers and annotation services supported or emerging through MBF with the language, search framework and pattern repository of BioPatML. End users who wish to exploit the BioPatML.NET engine and repository without engaging the services of a programmer may do so via the freely accessible web-based BioPatML Editor, which we describe below.
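To give a feel for the declarative sequence patterns BioPatML expresses, the sketch below matches a toy consensus motif with a plain regular expression. BioPatML patterns are far richer (weighted, hierarchical, annotated), so this is only an analogy, not the BioPatML.NET API:

```python
import re

# A hypothetical consensus motif matched against a made-up DNA sequence.
sequence = "ATGGCGTATAATGCGTTATAATCC"
tata_box = re.compile(r"TATA[AT]T")    # toy -10 box consensus

matches = [(m.start(), m.group()) for m in tata_box.finditer(sequence)]
print(matches)   # [(6, 'TATAAT'), (16, 'TATAAT')]
```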
Abstract:
In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
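The article's specific models are not reproduced here, but the kind of text-mining baseline such models are evaluated against can be sketched as a tiny bag-of-words Naive Bayes review classifier. The training data and vocabulary are invented:

```python
import math
from collections import Counter

# Toy labelled reviews -- a generic text-mining baseline, not the
# computational models proposed in the article.
train = [
    ("best product ever buy now amazing deal", "spam"),
    ("unbelievable amazing perfect buy buy buy", "spam"),
    ("works fine but the battery life is short", "ham"),
    ("decent quality though shipping was slow", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    class_counts[label] += 1
vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    scores = {}
    for label in word_counts:
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing over the shared vocabulary.
            score += math.log((word_counts[label][w] + 1)
                              / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("amazing amazing buy now"))   # spam
```

As the abstract notes, real fake reviews mimic ham closely, which is exactly why such surface-level features are often insufficient and semantic models are needed.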
Abstract:
Listening is a critical part of language learning in general, and of second- and foreign-language learning in particular. However, the process of this basic skill has been overlooked, compared with skills such as speaking, reading and writing, in terms of explicit instruction; the product of listening is instead mainly tested indirectly through comprehension questions in classrooms. Instruction in metacognitive strategies has a pivotal impact on second-language listening skill development. In this vein, this study used a mixed method in which an experimental male group (N = 30) listened to texts using process-approach pedagogy that directed students through metacognitive strategies over a semester (10 weeks) in Iran. To investigate the impact of metacognitive strategy instruction, the following approaches were implemented. First, an IELTS listening test was used to track any development in listening comprehension. Second, the Metacognitive Awareness Listening Questionnaire (MALQ) of Vandergrift et al. (2006) was used to examine students' use of metacognitive strategies in listening comprehension. Finally, interviews were used to examine students' use of strategies in listening. The results showed that students improved in comprehension on the IELTS listening test, but no statistically significant development of metacognitive awareness in listening was demonstrated. In the interviews, students and the teacher reported that students used multiple strategies, besides metacognitive strategies, to approach listening comprehension.
Abstract:
Decision tables and decision rules play an important role in rough-set-based data analysis, which compresses databases into granules and describes the associations between granules. Granule mining was also proposed to interpret decision rules in terms of association rules and a multi-tier structure. In this paper, we further extend granule mining to describe the relationships between granules not only by traditional support and confidence, but by diversity and condition diversity as well. Diversity measures how diverse a granule's associations with other granules are; it provides a novel kind of knowledge in databases. Some experiments are conducted to test the proposed new concepts by describing the characteristics of a real network traffic data collection. The results show that the proposed concepts are promising.
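The measures above can be illustrated on a toy decision table. The support and confidence below are standard; the diversity formula is an illustrative stand-in, since the abstract does not give the paper's formal definition:

```python
from collections import Counter, defaultdict

# Toy decision table: each row pairs a condition granule with a decision
# granule (all names invented).
rows = [("g1", "d1"), ("g1", "d1"), ("g1", "d2"),
        ("g2", "d1"), ("g2", "d2"), ("g2", "d3")]

n = len(rows)
cond_counts = Counter(c for c, _ in rows)
pair_counts = Counter(rows)
assoc = defaultdict(set)
for c, d in rows:
    assoc[c].add(d)
decisions = {d for _, d in rows}

def support(c, d):        # fraction of rows containing both granules
    return pair_counts[(c, d)] / n

def confidence(c, d):     # P(decision granule | condition granule)
    return pair_counts[(c, d)] / cond_counts[c]

def diversity(c):         # fraction of decision granules c associates with
    return len(assoc[c]) / len(decisions)

print(support("g1", "d1"), confidence("g1", "d1"), diversity("g2"))
```

Here "g2" has maximal diversity because it associates with every decision granule, exactly the kind of spread that support and confidence alone do not reveal.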
Abstract:
Current English-as-a-second/foreign-language (ESL/EFL) research has tended to treat each communicative macroskill separately due to space constraints, and the interrelationship among these skills (listening, speaking, reading, and writing) has not been paid due attention. This study attempts to examine, first, the existing relationships among the four dominant skills; second, the potential impact of reading background on overall language proficiency; and finally, the relationship between listening and overall language proficiency, as listening is considered an overlooked/passive skill in the pedagogy of the second/foreign-language classroom. However, the literature in language learning has revealed that listening has salient importance in both first- and second-language learning. The purpose of this study is to investigate the role of each of the four skills in EFL learning and their interrelationships in an EFL setting. The outcomes of 701 Iranian applicants undertaking the International English Language Testing System (IELTS) test in Tehran demonstrate that the communicative macroskills have varied correlations, from moderate (reading and writing) to high (listening and reading). The findings also show that the applicants' reading history assisted them in performing better at high-stakes tests and, what is more, that listening was strongly correlated with overall language proficiency.
Abstract:
In this practice-led research project I work to show how a re-reading and a particular form of listening to the sound-riddled nature of Gertrude Stein's work, Two: Gertrude Stein and her Brother, presents us with a contemporary theory of sound in language. This theory, though in its infancy, is a particular enjambment of sounded language that presents itself as an event, engaged with meaning, with its own inherent voice. It displays a propensity through engagement with the 'other' to erupt into love. In this thesis these qualities are reverberated further through the work of Seth Kim-Cohen's notion of the non-cochlear, Simon Jarvis's notion of musical thinking, Jean-Jacques Lecercle's notion of délire or nonsense, Luce Irigaray's notion of jouissant love and Bracha Ettinger's notion of the generative matrixial border space. This reading is simultaneously paired with my own work of scoring and creating a digital opera from Stein's work, thereby testing and performing Stein's theory. In this I show how a re-reading and relistening to Stein's work can be significant to feminist ethical language frames, contemporary philosophy, sonic art theory and digital language frames. A further significance of this study is that when the reverberation of Stein's engagements with language through sound is listened to, a pattern emerges, one that encouragingly problematizes subjectivity and interweaves genres/methods and means, creating a new frame for sound in language, one with its own voice that I call soundage.
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed in situations where a mission must be aborted due to mechanical or other failure. On-board cameras provide information that can be used in the determination of potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse-Coupled Neural Networks (PCNNs) have been used for the detection of features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback in the use of PCNNs is that they are computationally expensive and have been more suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures become more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images obtained during unmanned aerial vehicle trials to determine the plausibility of real-time feature detection.
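A minimal PCNN iteration shows why the network is costly: every time step touches every pixel and its neighbourhood, and many steps are needed for the pulse pattern to stabilise. All parameters below are invented, and the wrap-around neighbourhood is a simplification of common PCNN formulations:

```python
import numpy as np

def pcnn_fire(img, steps=10, beta=0.2, v_theta=20.0, alpha=0.3):
    """Minimal PCNN sketch: returns per-pixel firing counts usable as a
    coarse texture feature. Parameters are illustrative only."""
    Y = np.zeros(img.shape)                  # pulses from the last step
    theta = np.full(img.shape, img.max())    # dynamic firing threshold
    fires = np.zeros(img.shape)
    for _ in range(steps):
        # Linking input: sum of the 4-neighbourhood's previous pulses
        # (np.roll wraps at the borders, a simplification).
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
             np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
        U = img * (1.0 + beta * L)           # modulated internal activity
        Y = (U > theta).astype(float)        # pulse where activity wins
        theta = theta * np.exp(-alpha) + v_theta * Y   # recharge on firing
        fires += Y
    return fires

img = np.random.default_rng(0).random((8, 8))
print(pcnn_fire(img).shape)   # (8, 8)
```

The per-step, whole-image data parallelism is also what makes the algorithm a natural fit for the OpenCL kernels compared in the paper.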
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to many advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented such systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of the uncertainties of generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly in demanding SHM applications like modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed in order to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results of this research show that the uncertainties of generic WSNs can have a serious impact on Level 1 OMDI methods utilizing mode shapes. They also prove that SHM-oriented WSNs can substantially lessen this impact and obtain true structural information without resorting to costly computational solutions.
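The mode-shape comparison underlying Level 1 OMDI can be sketched with the Modal Assurance Criterion (MAC), here applied to a synthetic mode shape contaminated with Gaussian noise standing in for WSN uncertainty. The data are invented, not the benchmark measurements used in the article:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion: 1 for identical shapes, lower otherwise."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

# Synthetic first bending mode of a ten-sensor structure.
x = np.linspace(0.0, np.pi, 10)
phi_clean = np.sin(x)

rng = np.random.default_rng(1)
for noise in (0.0, 0.05, 0.3):      # simulated levels of WSN uncertainty
    phi_noisy = phi_clean + rng.normal(0.0, noise, phi_clean.shape)
    print(noise, round(float(mac(phi_clean, phi_noisy)), 4))
```

The MAC degrading as the noise level rises mirrors how synchronization error and data loss blur the structural information that mode-shape-based damage indicators depend on.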