897 results for information bottleneck method


Relevance:

30.00%

Publisher:

Abstract:

Histograms have been used for shape representation and retrieval. In this paper, the traditional technique is modified to capture additional information. We compare the performance of the proposed method with the traditional method through experiments on a database of shapes. The results show that the proposed enhancement to the histogram-based method improves retrieval effectiveness significantly.
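A minimal sketch of a histogram-based shape descriptor of the kind the abstract describes, using a centroid-distance histogram compared by histogram intersection (illustrative only; the paper's specific enhancement is not reproduced here):

```python
import numpy as np

def centroid_distance_histogram(points, bins=16):
    """Histogram of distances from boundary points to the shape centroid,
    a common histogram-based shape descriptor (illustrative choice)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    d = d / d.max()                       # normalise for scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()              # turn counts into a distribution

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())

# Compare a square outline against a rescaled copy of itself.
square = [(x, 0) for x in range(10)] + [(9, y) for y in range(10)] + \
         [(x, 9) for x in range(10)] + [(0, y) for y in range(10)]
big_square = [(2 * x, 2 * y) for x, y in square]
print(histogram_intersection(
    centroid_distance_histogram(square),
    centroid_distance_histogram(big_square)))   # 1.0: scale-invariant match
```

Because the descriptor is normalised by the maximum centroid distance, the rescaled shape produces an identical histogram.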

Relevance:

30.00%

Publisher:

Abstract:

The implementation of Kanban-based production control systems may be difficult in make-to-order environments such as job shops. The flexible manufacturing approach constitutes a promising solution for adapting the Kanban method to such environments. This paper presents an information flow modelling approach for specifying the operational planning and control functions of a Kanban-controlled shopfloor control system (KSCS) in a flexible manufacturing environment. By decomposing the KSCS control functionalities, we create the system information flow model through the data flow diagrams of the Structured Systems Analysis Methodology. The data flow diagrams serve as effective system specifications for communicating the system operations to participants from different disciplines, as well as the system model for the design and development of the KSCS.
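The pull-control logic that such a model specifies can be illustrated with a toy single-stage Kanban loop (the card count and demand stream below are hypothetical, not from the paper):

```python
def run_kanban(demand, kanbans=3):
    """Each withdrawal frees a card; production is authorised only while
    free cards exist, which caps work-in-progress at `kanbans`."""
    free_cards, buffer, produced, served = kanbans, 0, 0, 0
    history = []
    for want in demand:
        if free_cards > 0:            # a card authorises one production order
            free_cards -= 1
            buffer += 1
            produced += 1
        take = min(want, buffer)      # downstream withdraws from the buffer
        buffer -= take
        free_cards += take            # withdrawn items return their cards
        served += take
        history.append((produced, buffer))
    return produced, served, history

produced, served, hist = run_kanban([0, 0, 1, 2, 0, 1, 3, 0])
print(produced, served, max(b for _, b in hist))  # 8 7 3: WIP never exceeds 3
```

The invariant worth noting is that the buffer can never hold more items than there are cards, which is exactly the control property a KSCS specification must guarantee.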

Relevance:

30.00%

Publisher:

Abstract:

Critical Information Infrastructure (CII) has become a priority for all levels of management; it is one of the key components of efficient business and business continuity plans. There is a need for a new security methodology to deal with the new and unique attack threats and vulnerabilities associated with the new information technology security paradigm. Critical Information Infrastructure Protection - Risk Analysis Methodology (CIIP-RAM) is a new security risk analysis method which copes with the shift from computer/information security to critical information infrastructure protection. This type of methodology is the next step toward handling information technology security risk at all levels, from upper management information security down to firewall configurations. The paper presents the methodology of the new techniques and their application to critical information infrastructure protection. The associated advantages of this methodology are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

Building demolition has been undergoing evolutionary development in its technologies for several decades. In order to achieve a high level of demolition material reuse and recycling, new management approaches are also needed, particularly in conjunction with the application of information technologies. The development of an information system for demolition project management is an effective strategy to support demolition activities including waste exchange, demolition visualization, and demolition method selection and evaluation. This paper aims to develop a framework for an integrated information system for building demolition decision-making and waste minimization. The components of this information system and their interactions are demonstrated through a specific demolition project.

Relevance:

30.00%

Publisher:

Abstract:

Information in the construction industry is delivered and interpreted in a language specific to the industry, in which large complex objects are only partially described and much information is implicit in the language used. Successful communication therefore relies on participants in the industry learning how to interpret the language through many years of education, training and experience. With the introduction of computer technology, and in particular the detailed digital building information model (DBIM), the accepted language currently in use is no longer a valid method of describing the building. At all stages in the paper-based design and documentation process it is generally readily apparent which parts of the design require further completion and which are fully resolved. This is achieved through the complex graphical language currently in use. In the DBIM, all information appears at the same level of resolution, making it difficult to interpret implicit information embedded in the model. This compromises the collaborative design environment which is being described as a fundamental characteristic of the future construction industry. This paper focuses on two areas. The first analyses design resolution and the role uncertain information plays in the design process. It then discusses the manner in which designers and the industry in general deal with incomplete or unresolved information. The second describes a theoretical model in which a design resolution (DR) environment incorporates the level of design resolution as an operable element in a collaborative DBIM. The development and implementation of this model will allow designers to better share, understand and interpret design knowledge from the shared information during the various stages of digital design and before full resolution is achieved.

Relevance:

30.00%

Publisher:

Abstract:

Web-based self-service has emerged as an important strategy for providing pre- and post-sales customer support. Yet, there is a dearth of theoretical or empirical research concerning the organisational, customer-oriented, knowledge-based, and employee-oriented factors that enable web-based self-service systems (WSS) to be successful in a competitive global marketplace. In this paper, we describe and discuss findings from the first phase of a multi-method research study designed to address this literature gap. This study explores critical success factors (CSFs) involved in the transfer of support-oriented knowledge from an information technology (IT) services firm to commercial customers when WSS are employed. Empirical data collected in a CSF study of a large multinational IT services business are used to identify twenty-six critical success factors. The findings indicate that best-in-class IT service providers are aware of a range of critical success factors in the transfer to commercial customers of resolutions and other support-oriented knowledge via WSS. However, such firms remain less certain about what is needed to support customer companies after support-oriented knowledge has initially been transferred to the customer firm.

Relevance:

30.00%

Publisher:

Abstract:

The use of participative approaches in system design has been debated for a number of years. In this paper we describe a method that was used effectively to design information systems and implement information security countermeasures within a health care environment. The paper shows how the method was used in a number of different environments.

Relevance:

30.00%

Publisher:

Abstract:

Clustering is widely used in bioinformatics to find gene correlation patterns. Although many algorithms have been proposed, these are usually confronted with difficulties in meeting the requirements of both automation and high quality. In this paper, we propose a novel algorithm for clustering genes from their expression profiles. The unique features of the proposed algorithm are twofold: it takes into consideration global, rather than local, gene correlation information in clustering processes; and it incorporates clustering quality measurement into the clustering processes to implement non-parametric, automatic and global optimal gene clustering. The evaluation on simulated and real gene data sets demonstrates the effectiveness of the algorithm.
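The idea of clustering on global correlation structure while a quality measure picks the number of clusters automatically can be sketched as follows, combining a correlation distance, average-linkage clustering, and the silhouette score (a generic stand-in, not the authors' algorithm):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

def silhouette(D, labels):
    """Mean silhouette width: a global clustering-quality measure."""
    n = len(labels)
    scores = []
    for i in range(n):
        same = labels == labels[i]
        if same.sum() == 1:
            scores.append(0.0)            # convention for singleton clusters
            continue
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def cluster_genes(expr, max_k=6):
    """Choose the number of clusters automatically by maximising a global
    quality score over correlation-based distances between all gene pairs."""
    d = pdist(expr, metric="correlation")     # global 1 - r distances
    D = squareform(d)
    Z = linkage(d, method="average")
    best = max(((k, fcluster(Z, k, criterion="maxclust"))
                for k in range(2, max_k + 1)),
               key=lambda kl: silhouette(D, kl[1]))
    return best[1]

# Simulate ten genes: five noisy copies of each of two expression profiles.
rng = np.random.default_rng(0)
base_a, base_b = rng.normal(size=20), rng.normal(size=20)
expr = np.vstack([base_a + 0.1 * rng.normal(size=20) for _ in range(5)] +
                 [base_b + 0.1 * rng.normal(size=20) for _ in range(5)])
print(cluster_genes(expr))   # two groups of five, found without parameters
```

No cluster count is supplied: the silhouette score over the global distance matrix selects it, which is the non-parametric, automatic behaviour the abstract highlights.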

Relevance:

30.00%

Publisher:

Abstract:

Objective: For successful prosecution of child sexual abuse, children are often required to provide reports about individual, alleged incidents. Although verbally or mentally rehearsing memory of an incident can strengthen memories, children's reports of individual incidents can also be contaminated when they experience other events related to the individual incidents (e.g., informal interviews, dreams of the incident) and/or when they have similar, repeated experiences of an incident, as in cases of multiple abuse.

Method: Research is reviewed on the positive and negative effects of these related experiences on the length, accuracy, and structure of children’s reports of a particular incident.

Results: Children’s memories of a particular incident can be strengthened when exposed to information that does not contradict what they have experienced, thus promoting accurate recall and resistance to false, suggestive influences. When the encountered information differs from children’s experiences of the target incident, however, children can become confused between their experiences—they may remember the content but not the source of their experiences.

Conclusions: We discuss the implications of this research for interviewing children in sexual abuse investigations and provide a set of research-based recommendations for investigative interviewers.

Relevance:

30.00%

Publisher:

Abstract:

The issue of information sharing and exchange is one of the most important issues in the areas of artificial intelligence and knowledge-based systems (KBSs), or even in the broader areas of computer and information technology. This paper deals with a special case of this issue by carrying out a case study of information sharing between two well-known heterogeneous uncertain reasoning models: the certainty factor model and the subjective Bayesian method. More precisely, this paper discovers a family of exactly isomorphic transformations between these two uncertain reasoning models. More interestingly, among the isomorphic transformation functions in this family, different ones can handle different degrees to which a domain expert is positive or negative when performing such a transformation task. The direct motivation of the investigation lies in a realistic consideration. In the past, expert systems exploited mainly these two models to deal with uncertainty; in other words, many stand-alone expert systems using the two uncertain reasoning models are available. If there is a reasonable transformation mechanism between these two models, we can use the Internet to couple these pre-existing expert systems together so that the integrated systems are able to exchange and share useful information with each other, thereby improving their performance through cooperation. The issue of transformation between heterogeneous uncertain reasoning models is also significant in the research area of multi-agent systems, because different agents in a multi-agent system could employ different expert systems with heterogeneous uncertain reasoning models for their action selection, and information sharing and exchange between agents is unavoidable. In addition, we make clear the relationship between the certainty factor model and probability theory.
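One illustrative member of such a family can be sketched as a monotone bijection between certainty factors in (-1, 1) and likelihood ratios, with a hypothetical attitude parameter alpha controlling how aggressively evidence transfers (the mapping below is an assumption for illustration; the paper derives its own exact isomorphisms):

```python
def cf_to_lambda(cf, alpha=1.0):
    """Map a certainty factor in (-1, 1) to a likelihood ratio in (0, inf).
    alpha is a hypothetical attitude parameter: alpha > 1 transfers evidence
    more aggressively, alpha < 1 more conservatively (illustrative only)."""
    if not -1.0 < cf < 1.0:
        raise ValueError("cf must lie strictly in (-1, 1)")
    base = 1.0 / (1.0 - cf) if cf >= 0.0 else 1.0 + cf
    return base ** alpha

def lambda_to_cf(lam, alpha=1.0):
    """Inverse mapping, so the round trip is lossless (an isomorphism)."""
    base = lam ** (1.0 / alpha)
    return 1.0 - 1.0 / base if base >= 1.0 else base - 1.0

def bayes_update(prior, lam):
    """Subjective-Bayesian odds update: O(H|E) = lam * O(H)."""
    odds = lam * prior / (1.0 - prior)
    return odds / (1.0 + odds)

cf = 0.7                                 # a rule's certainty factor
lam = cf_to_lambda(cf)                   # its likelihood-ratio counterpart
print(round(lam, 3))                     # 3.333
print(round(lambda_to_cf(lam), 3))       # 0.7 back again, losslessly
print(round(bayes_update(0.1, lam), 3))  # posterior from a 0.1 prior: 0.27
```

Because the mapping is invertible, a rule translated from a certainty-factor system can update a subjective-Bayesian system and be translated back without information loss, which is what makes coupling pre-existing expert systems plausible.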

Relevance:

30.00%

Publisher:

Abstract:

DDoS is a spy-versus-spy game between attackers and detectors. Attackers mimic legitimate network traffic patterns to defeat detection algorithms based on those features, and discriminating mimicking DDoS attacks from massive legitimate network access remains an open problem. We observed that zombies use controlled functions to pump attack packets to the victim; the attack flows to the victim therefore always share some properties, e.g. packet distribution behaviours, which are not possessed by legitimate flows within a short time period. Based on this observation, once suspicious flows to a server appear, we calculate the distance of the packet distribution behaviour among the suspicious flows. If the distance is less than a given threshold, it is a DDoS attack; otherwise, it is legitimate access. Our analysis and preliminary experiments indicate that the proposed method can discriminate mimicking flooding attacks from legitimate access efficiently and effectively.
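The detection step can be sketched as follows, taking per-flow packet-size histograms as the distribution behaviour and total-variation distance as the metric (both are plausible choices for illustration; the abstract does not fix these specifics):

```python
import numpy as np

def size_distribution(packet_sizes, bins):
    """Empirical packet-size distribution of one flow."""
    hist, _ = np.histogram(packet_sizes, bins=bins)
    return hist / max(hist.sum(), 1)

def total_variation(p, q):
    """Half the L1 distance between two distributions, in [0, 1]."""
    return 0.5 * float(np.abs(p - q).sum())

def looks_like_attack(flows, bins=np.arange(0, 1600, 100), threshold=0.2):
    """Flag the flow set when every pairwise behaviour distance is small,
    i.e. the flows are suspiciously alike, as bot-generated flows are."""
    dists = [size_distribution(f, bins) for f in flows]
    pairwise = [total_variation(dists[i], dists[j])
                for i in range(len(dists)) for j in range(i + 1, len(dists))]
    return max(pairwise) < threshold

rng = np.random.default_rng(1)
# Zombies pump near-identical small packets; legitimate clients vary widely.
attack = [rng.normal(120, 10, 500) for _ in range(4)]
legit = [rng.choice([80, 576, 1500], 500, p=w)
         for w in ([0.6, 0.3, 0.1], [0.1, 0.2, 0.7],
                   [0.3, 0.5, 0.2], [0.2, 0.1, 0.7])]
print(looks_like_attack(attack), looks_like_attack(legit))  # True False
```

The key observation from the abstract is captured directly: controlled zombies produce flows whose distributions sit within the threshold of one another, while independent legitimate clients do not.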

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSN) are attractive for information gathering in large-scale data rich environments. In order to fully exploit the data gathering and dissemination capabilities of these networks, energy-efficient and scalable solutions for data storage and information discovery are essential. In this paper, we formulate the information discovery problem as a load-balancing problem, with the combined aim being to maximize network lifetime and minimize query processing delay resulting in QoS improvements. We propose a novel information storage and distribution mechanism that takes into account the residual energy levels in individual sensors. Further, we propose a hybrid push-pull strategy that enables fast response to information discovery queries.

Simulation results show that the proposed method of information discovery offers significant QoS benefits for global as well as individual queries in comparison to previous approaches.
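The energy-aware placement idea can be sketched as residual-energy-proportional selection of a storage node, so depleted sensors shed load (an illustrative stand-in, not the paper's exact mechanism; node names and energy levels are invented):

```python
import random

def choose_storage_node(nodes, rng=random.Random(42)):
    """Pick a storage node with probability proportional to residual energy,
    a simple roulette-wheel selection over (node_id, energy) pairs."""
    total = sum(energy for _, energy in nodes)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for node_id, energy in nodes:
        acc += energy
        if r <= acc:
            return node_id
    return nodes[-1][0]   # guard against floating-point round-off

# Hypothetical residual-energy levels for three candidate storage nodes.
nodes = [("s1", 0.9), ("s2", 0.5), ("s3", 0.1)]
picks = [choose_storage_node(nodes) for _ in range(10000)]
print({name: picks.count(name) for name in ("s1", "s2", "s3")})
# s1 stores roughly 60% of items, the nearly depleted s3 under 10%
```

Spreading storage this way is what turns the placement decision into the load-balancing problem the abstract formulates: no single sensor's battery becomes the bottleneck for network lifetime.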

Relevance:

30.00%

Publisher:

Abstract:

In disciplines other than IS, the use of covariance-based structural equation modelling (SEM) is the mainstream method for SEM analysis, and for confirmatory factor analysis (CFA). Yet a body of IS literature has developed arguing that PLS regression is a superior tool for these analyses, and for establishing reliability and validity. Despite these claims, the views underlying this PLS literature are not universally shared. In this paper the authors review the PLS and mainstream SEM literatures, and describe the key differences between the two classes of tools. The paper also canvasses why PLS regression is rarely used in management, marketing, organizational behaviour, and that branch of psychology concerned with good measurement – psychometrics. The paper offers some practical options to Australasian researchers seeking greater mastery of SEM, and also acts as a roadmap for readers who want to check for themselves what the mainstream SEM literature has to say.

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks (WSN) are attractive for information gathering in large-scale data rich environments. Emerging WSN applications require dissemination of information to interested clients within the network, requiring support for differing traffic patterns. Further, in-network query processing capabilities are required for autonomic information discovery. In this paper, we formulate the information discovery problem as a load-balancing problem, with the combined aim being to maximize network lifetime and minimize query processing delay. We propose novel methods for data dissemination, information discovery and data aggregation that are designed to provide significant QoS benefits. We make use of affinity propagation to group "similar" sensors and have developed efficient mechanisms that can resolve both ALL-type and ANY-type queries in-network with improved energy-efficiency and query resolution time. Simulation results show that the proposed methods of information discovery offer significant QoS benefits for ALL-type and ANY-type queries in comparison to previous approaches.
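The grouping step can be sketched with scikit-learn's AffinityPropagation on simulated sensor readings (the node counts and reading values below are invented for illustration):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Ten simulated sensors, eight recent readings each: five sit near a heat
# source and five in ambient air.
rng = np.random.default_rng(0)
hot = rng.normal(40.0, 0.5, size=(5, 8))
cold = rng.normal(10.0, 0.5, size=(5, 8))
readings = np.vstack([hot, cold])

# Affinity propagation chooses both the number of groups and an exemplar
# (representative sensor) per group by itself; an ANY-type query can then
# be answered by contacting an exemplar instead of flooding the network.
ap = AffinityPropagation(random_state=0).fit(readings)
print(ap.labels_)                   # sensors 0-4 in one group, 5-9 in the other
print(ap.cluster_centers_indices_)  # one exemplar index per group
```

The appeal for ANY-type queries is that affinity propagation needs no preset cluster count and yields actual member sensors as exemplars, so any one exemplar's reading stands in for its whole group.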

Relevance:

30.00%

Publisher:

Abstract:

Faunal atlases are landscape-level survey collections that can be used for describing spatial and temporal patterns of distribution and densities. They can also serve as a basis for quantitative analysis of factors that may influence the distributions of species. We used a subset of Birds Australia’s Atlas of Australian Birds data (January 1998 to December 2002) to examine the spatio-temporal distribution patterns of 280 selected species in eastern Australia (17–37°S and 136–152°E). Using geographical information systems, this dataset was converted into point coverage and overlaid with a vegetation polygon layer and a half-degree grid. The exploratory data analysis involved calculating species-specific reporting rates spatially, per grid and per vegetation unit, and also temporally, by month and year. We found high spatio-temporal variability in the sampling effort. Using generalised linear models on unaggregated point data, the influences of four factors – survey method and month, geographical location and habitat type – were analysed for each species. When counts of point data were attributed to grid-cells, the total number of species correlated with the total number of surveys, while the number of records per species was highly variable. Surveys had high interannual location fidelity. The predictive values of each of the four factors were species-dependent. Location and habitat were correlated and highly predictive for species with restricted distribution and strong habitat preference. Month was only of importance for migratory species. The proportion of incidental sightings was important for extremely common or extremely rare species. In conclusion, behaviour of species differed sufficiently to require building a customized model for each species to predict distribution. Simple models were effective for habitat specialists with restricted ranges, but for generalists with wide distributions even complex models gave poor predictions.
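A per-species model of the kind described can be sketched as a binomial GLM (logistic regression) of reporting probability on survey month and habitat type, here fitted to simulated surveys with scikit-learn (the variable names, habitat categories, and effect sizes are hypothetical, not the Atlas data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Simulate surveys for one hypothetical wetland specialist with a mild
# summer (Dec-Feb) peak, mirroring the habitat/month effects in the text.
rng = np.random.default_rng(3)
n = 2000
month = rng.integers(1, 13, n)
habitat = rng.choice(["forest", "grassland", "wetland"], n)
logit = -2.0 + 3.0 * (habitat == "wetland") + 0.5 * np.isin(month, [12, 1, 2])
reported = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Binomial GLM of reporting rate on month + habitat (both categorical).
cats = np.column_stack([month.astype(str), habitat])
enc = OneHotEncoder().fit(cats)
model = LogisticRegression(max_iter=1000).fit(enc.transform(cats), reported)

# Predicted reporting probability for a January survey in each habitat.
probe = enc.transform(np.array([["1", "forest"], ["1", "wetland"]]))
p = model.predict_proba(probe)[:, 1]
print(np.round(p, 2))   # wetland probability far higher than forest
```

For a habitat specialist like this, the habitat term dominates and a simple model predicts well; for a wide-ranging generalist the same covariates carry little signal, which is the species-dependence the abstract reports.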