993 results for Semantic Modeling
Abstract:
A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various IDSs, but which as a whole constitute a powerful attack. IDSs fall short of detecting and modeling complex attacks; therefore, new methods are required. This paper presents a formal methodology for modeling and detecting complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and the expiration of an attack; (2) using the enhanced AT, we build a tree automaton that accepts a sequence of actions from input message streams from various sources if there is a traversal of the AT from leaves to root; and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods and provide a case study of representing attacks in WLANs.
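As a rough illustration of the enhanced-AT idea, the sketch below models an attack tree whose AND nodes may be temporally ordered, so a sequence of detected actions is accepted only if it realizes a leaves-to-root traversal in the required order. This is not the authors' construction; the node types, names, and the toy WLAN scenario are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ATNode:
    name: str
    op: str = "LEAF"                       # "LEAF", "AND", "OR", "SAND"
    children: List["ATNode"] = field(default_factory=list)

def satisfied(node: ATNode, events: List[str]) -> bool:
    """Does the observed event sequence complete this (sub)tree?"""
    if node.op == "LEAF":
        return node.name in events
    if node.op == "OR":                    # any child suffices
        return any(satisfied(c, events) for c in node.children)
    if node.op == "AND":                   # all children, in any order
        return all(satisfied(c, events) for c in node.children)
    if node.op == "SAND":                  # sequential AND: leaf children
        it = iter(events)                  # must appear as a subsequence,
        return all(c.name in it for c in node.children)  # in order
    raise ValueError(f"unknown operator {node.op}")

# Hypothetical WLAN scenario: a deauth flood followed by a rogue AP,
# in that order, or alternatively a direct WEP key crack.
root = ATNode("evil_twin", "OR", [
    ATNode("seq", "SAND", [ATNode("deauth"), ATNode("rogue_ap")]),
    ATNode("wep_crack"),
])
```

A tree automaton built from such a tree would consume the event stream incrementally and track expiration windows; for brevity this sketch simply re-checks the whole sequence.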
Abstract:
Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it in an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
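The sign pattern reported here (recall rises with neighborhood density, cue-to-target strength, and target-to-cue strength, and falls with cue-to-distracter strength) can be illustrated with a toy ratio rule. This is a hypothetical sketch of how the four measures could be combined, not the authors' fitted model:

```python
def recall_score(cue_to_target, target_to_cue, density, cue_to_distracters):
    """Toy ratio rule: support for the target competes with the summed
    strength of the distracting neighbors the cue also activates.
    The functional form is illustrative; only the signs match the
    abstract (positive for the first three inputs, negative for the
    distracter strengths)."""
    support = cue_to_target * (1 + target_to_cue) * (1 + density)
    competition = support + sum(cue_to_distracters)
    return support / competition
```

The ratio keeps the score in (0, 1) and makes extra distracter strength lower it, mirroring the negative cue-to-distracter effect in extralist cuing.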
Abstract:
Many continuum mechanical models, such as liquid drop models and solid models, have been developed for studying the biomechanics of single living cells. However, these models do not provide a full account of the behaviour of single living cells, such as swelling behaviour and drag effects. Hence, the porohyperelastic (PHE) model, which can capture those aspects, is a good candidate for studying cell behaviour (chondrocytes in this study). In this research, an FEM model of a single chondrocyte is developed using the PHE model to simulate Atomic Force Microscopy (AFM) experimental results under varying strain rates. This material model is compared with a viscoelastic model to demonstrate the advantages of the PHE model. The results show that the maximum applied force in the PHE model is lower at lower strain rates. This is because the mobile fluid does not have enough time to exude at very high strain rates, and also because the permeability of the membrane is lower than that of the protoplasm of the chondrocyte. This behaviour is barely observed in the viscoelastic model. Thus, the PHE model is the better model for cell biomechanics studies.
Abstract:
Threats against computer networks evolve very fast and require increasingly complex countermeasures. We argue that teams, i.e. groups with a common purpose, for intrusion detection and prevention improve the measures against rapidly propagating attacks, similar to the concept of teams solving complex tasks known from the sociology of work. Collaboration in this sense is no easy task, especially in heterarchical environments. We propose CIMD (collaborative intrusion and malware detection), a security-overlay framework that enables cooperative intrusion detection approaches. Objectives and associated interests are used to create detection groups for the exchange of security-related data. In this work, we contribute a tree-oriented data model for representing devices in the scope of security. We introduce an algorithm for the formation of detection groups, show realization strategies for the system, and conduct a vulnerability analysis. We evaluate the benefit of CIMD by simulation and probabilistic analysis.
Abstract:
The destination branding literature emerged as recently as 1998, and there remains a dearth of empirical data that tests the effectiveness of brand campaigns over time. This paper reports the results of an investigation into consumer-based brand equity for Australia as a long-haul destination in an emerging South American market. In spite of the high level of academic interest in the measurement of perceptions of destinations since the 1970s, few previous studies have examined perceptions held by South American consumers. Findings suggest that destination brand awareness, brand image, and brand value are positively related to brand loyalty for a long-haul destination. The results also indicate that Australia is a more compelling destination brand for previous visitors than for non-visitors.
Abstract:
A building information model (BIM) provides a rich representation of a building's design. However, there are many challenges in getting construction-specific information from a BIM, limiting the usability of BIM for construction and other downstream processes. This paper describes a novel approach that utilizes ontology-based feature modeling, automatic feature extraction based on ifcXML, and query processing to extract information relevant to construction practitioners from a given BIM. The feature ontology generically represents construction-specific information that is useful for a broad range of construction management functions. The software prototype uses the ontology to transform the designer-focused BIM into a construction-specific feature-based model (FBM). The formal query methods operate on the FBM to further help construction users to quickly extract the necessary information from a BIM. Our tests demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
Abstract:
The SimCalc Vision and Contributions, Advances in Mathematics Education, 2013, pp. 419-436. Modeling as a Means for Making Powerful Ideas Accessible to Children at an Early Age. Richard Lesh, Lyn English, Serife Sevis, Chanda Riggs.
In modern societies in the 21st century, significant changes have been occurring in the kinds of "mathematical thinking" that are needed outside of school. Even primary school children (grades K-2) do not only encounter situations where numbers refer to sets of discrete objects that can be counted. Numbers are also used to describe situations that involve continuous quantities (inches, feet, pounds, etc.), signed quantities, quantities that have both magnitude and direction, locations (coordinates, or ordinal quantities), transformations (actions), accumulating quantities, continually changing quantities, and other kinds of mathematical objects. Furthermore, if we ask "what kinds of situations can children use numbers to describe?" rather than restricting attention to situations where children should be able to calculate correctly, then this study shows that average-ability children in grades K-2 are (and need to be) able to productively mathematize situations that involve far more than simple counts. Similarly, whereas nearly the entire K-16 mathematics curriculum is restricted to situations that can be mathematized using a single input-output rule going in one direction, even the lives of primary school children are filled with situations that involve several interacting actions, with feedback loops, second-order effects, and issues such as maximization, minimization, or stabilization (which, many years ago, needed to be postponed until students had been introduced to calculus).
This brief paper demonstrates that, if children's stories are used to introduce simulations of "real life" problem-solving situations, then average-ability primary school children are quite capable of dealing productively with 60-minute problems that involve (a) many kinds of quantities in addition to "counts," (b) integrated collections of concepts associated with a variety of textbook topic areas, (c) interactions among several different actors, and (d) issues such as maximization, minimization, and stabilization.
Abstract:
Finding and labelling semantic feature patterns of documents in a large, spatial corpus is a challenging problem. Text documents have characteristics that make semantic labelling difficult, and the rapidly increasing volume of online documents creates a bottleneck in finding meaningful textual patterns. To address these issues, we propose an unsupervised document labelling approach based on semantic content and feature patterns. A world ontology with extensive topic coverage is exploited to supply controlled, structured subjects for labelling. An algorithm is also introduced to reduce dimensionality based on a study of the ontological structure. The proposed approach was evaluated, with promising results, against typical machine learning methods including SVM, Rocchio, and kNN.
Abstract:
This work identifies the limitations of n-way data analysis techniques in multidimensional stream data, such as Internet chat room communications data, and establishes a link between data collection and the performance of these techniques. Its contributions are twofold. First, it extends data analysis to multiple dimensions by constructing n-way data arrays known as high-order tensors. Chat room tensors are generated by a simulator which collects and models actual communication data. The accuracy of the model is determined by the Kolmogorov-Smirnov goodness-of-fit test, which compares the simulation data with the observed (real) data. Second, a detailed computational comparison is performed to test several data analysis techniques, including SVD [1], and multi-way techniques including Tucker1, Tucker3 [2], and Parafac [3].
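The goodness-of-fit step can be illustrated with a minimal two-sample Kolmogorov-Smirnov statistic: the largest vertical gap between the two empirical CDFs. This is a stdlib sketch on synthetic samples, not the chat-room data used in the paper, and it computes only the statistic, not the p-value:

```python
from bisect import bisect_right

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic:
    D = max over x of |F_a(x) - F_b(x)|, where F_a and F_b are the
    empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):      # CDFs only change at sample points
        f_a = bisect_right(a, x) / len(a)  # fraction of a-values <= x
        f_b = bisect_right(b, x) / len(b)  # fraction of b-values <= x
        d = max(d, abs(f_a - f_b))
    return d
```

In the paper's setting, a small D between simulated and observed streams would indicate the simulator models the real communication data well; the threshold for "small" depends on sample sizes and the chosen significance level.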
Abstract:
Previous research on construction innovation has commonly recognized the importance of the organizational climate and key individuals, often called "champions," for the success of innovation. However, it rarely focuses on the role of participants at the project level and addresses the dynamics of construction innovation. This paper therefore presents a dynamic innovation model that has been developed using the concept of system dynamics. The model incorporates the influence of several individual and situational factors and highlights two critical elements that drive construction innovations: (1) normative pressure created by project managers through their championing behavior, and (2) instrumental motivation of team members facilitated by a supportive organizational climate. The model is qualified empirically, using the results of a survey of project managers and their project team members working for general contractors in Singapore, by assessing causal relationships for key model variables. Finally, the paper discusses the implications of the model structure for fostering construction innovations.
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the filtered-x least-mean-square (FxLMS) algorithm appears to be the best choice for the controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. As a first novelty, this paper proposes a new version of the FxLMS algorithm. In many ANC applications, an on-line secondary-path modeling method using white noise as a training signal is required to ensure convergence of the system. As a second novelty, this paper proposes a new approach for on-line secondary-path modeling based on a new variable-step-size (VSS) LMS algorithm in feedforward ANC systems. The proposed algorithm is designed so that the noise injection is stopped at the optimum point, once the modeling accuracy is sufficient. In this approach, a sudden change in the secondary path during operation makes the algorithm reactivate injection of the white noise to re-adjust the secondary-path estimate. Comparative simulation results shown in this paper indicate the effectiveness of the proposed approach in reducing both narrow-band and broad-band noise. In addition, the proposed ANC system is robust against sudden changes in the secondary-path model.
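The secondary-path modeling stage the paper builds on can be sketched as plain LMS identification of a toy FIR path driven by white noise. This is a minimal fixed-step sketch with illustrative parameters, not the paper's VSS algorithm; the variable-step-size idea is only hinted at in the comments:

```python
import random

def lms_identify(path, n_taps, steps=2000, mu=0.05, seed=0):
    """Identify an unknown FIR 'secondary path' by driving it with white
    noise and adapting an FIR model with the LMS rule w <- w + mu*e*x.
    The paper's VSS variant would shrink mu as the error decays and gate
    the white-noise injection off once the estimate is accurate enough."""
    rng = random.Random(seed)
    w = [0.0] * n_taps                  # model weights (path estimate)
    x = [0.0] * n_taps                  # tap-delay line of recent inputs
    for _ in range(steps):
        x = [rng.gauss(0, 1)] + x[:-1]  # new white-noise sample shifts in
        d = sum(p * xi for p, xi in zip(path, x))  # true path output
        y = sum(wi * xi for wi, xi in zip(w, x))   # model output
        e = d - y                                  # modeling error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w
```

With a noiseless toy path the weights converge to the true coefficients; in a real ANC system the primary noise adds interference to `e`, which is one motivation for the variable step size.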
Abstract:
Graphene has promised many novel applications in nanoscale electronics and sustainable energy due to its unique electronic properties. Computational exploration of electronic functionality, and how it varies with architecture and doping, presently runs ahead of experimental synthesis, yet provides insights into the types of structures that may prove profitable for targeted experimental synthesis and characterization. We present here a summary of our understanding of the important aspects of dimension, band gap, defect, and interfacial engineering of graphene based on state-of-the-art ab initio approaches. Some of the most recent experimental achievements relevant to future theoretical exploration are also covered.
Abstract:
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. Previous research assumed a quantum-like model in which the semantic network was modelled as entangled qubits; however, the level of activation was clearly overestimated. This paper explores three variations of this model, each of which is distinguished by a scaling factor designed to compensate for the overestimation.
Abstract:
The success of contemporary organizations depends on their ability to make appropriate decisions. Making appropriate decisions is inevitably bound to the availability and provision of relevant information. Information systems should be able to provide information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski's information set and subset model, we give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of specifying effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings. A short example illustrates the usefulness of a conceptual data modeling technique for the specification of information systems.
Abstract:
In a business environment, making the right decisions is vital for the success of a company. Making the right decisions is inevitably bound to the availability and provision of relevant information. Information systems are supposed to provide this information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski's information set and subset model, we give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of developing effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings.