874 results for Expert systems (Computer science)


Relevance: 100.00%

Publisher:

Abstract:

Over the last decade, Brazil has pioneered an innovative model of branchless banking, known as correspondent banking, involving distribution partnerships between banks, several kinds of retailers and a variety of other participants. These partnerships have allowed unprecedented growth in bank outreach and have become a reference worldwide. However, despite the extensive number of recent studies focusing on Brazilian branchless banking, there is a clear research gap in the literature: it is still necessary to identify the different business configurations, involving network integration, through which the branchless banking channel can be structured, as well as the way they relate to the range of bank services delivered. Given this gap, our objective is to investigate the relationship between network integration models and services delivered through the branchless banking channel. Based on twenty interviews with managers involved in the correspondent banking business and data collected on almost 300 correspondent locations, our research is developed in two steps. First, we created a qualitative taxonomy through which we identified three classes of network integration models. Second, we performed a cluster analysis to explain the groups of financial services that fit each model. By contextualizing correspondents' network integration processes through the lens of transaction cost economics, our results suggest that the more suited the channel is to deliver social-oriented, "pro-poor" services, the more it is controlled by banks. This research offers contributions to managers and policy makers interested in better understanding how different correspondent banking configurations are related to specific portfolios of services. Researchers interested in branchless banking can also benefit from the taxonomy presented and from the transaction cost analysis of this kind of banking channel, which has now been adopted in a number of developing countries around the world. (C) 2011 Elsevier B.V. All rights reserved.
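
The cluster-analysis step can be illustrated with a generic sketch. The features, the number of clusters and the plain k-means procedure below are invented for illustration; they are not the authors' variables, data, or clustering method.

    import numpy as np

    rng = np.random.default_rng(0)
    # One row per correspondent location; columns are invented 0/1 service indicators
    # (e.g. bill payment, account opening, withdrawals, deposits, credit, insurance).
    X = rng.integers(0, 2, size=(300, 6)).astype(float)

    def kmeans(X, k, iters=50):
        # Plain Lloyd's algorithm: assign points to the nearest centre, recompute centres.
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == c].mean(0) if (labels == c).any() else centers[c]
                                for c in range(k)])
        return labels, centers

    labels, centers = kmeans(X, k=3)
    for c in range(3):
        print(f"cluster {c}: {(labels == c).sum()} locations, service profile {np.round(centers[c], 2)}")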

Relevance: 100.00%

Publisher:

Abstract:

This paper addresses the m-machine no-wait flow shop problem in which the set-up time of a job is separated from its processing time. The performance measure considered is total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with the best method reported in the literature. Experimental tests show the superiority of the new method on the set of test problems with respect to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
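
As a rough illustration of the genetic-algorithm side of such an approach (the Cluster Search hybridisation is omitted, the no-wait constraint is relaxed, and set-up times are folded into the processing times, so this is only a sketch under those simplifying assumptions), a plain permutation GA minimising total flowtime might look like this:

    import random

    random.seed(0)
    N_JOBS, N_MACHINES = 10, 4
    PROC = [[random.randint(1, 20) for _ in range(N_MACHINES)] for _ in range(N_JOBS)]

    def total_flowtime(perm):
        # Completion-time recursion for a permutation flow shop; flowtime = sum of completions.
        comp = [0] * N_MACHINES
        total = 0
        for job in perm:
            prev = 0
            for m in range(N_MACHINES):
                prev = max(prev, comp[m]) + PROC[job][m]
                comp[m] = prev
            total += comp[-1]
        return total

    def order_crossover(p1, p2):
        # OX crossover: copy a slice from one parent, fill the rest in the other's order.
        a, b = sorted(random.sample(range(N_JOBS), 2))
        child = [None] * N_JOBS
        child[a:b] = p1[a:b]
        rest = [j for j in p2 if j not in child]
        for i in range(N_JOBS):
            if child[i] is None:
                child[i] = rest.pop(0)
        return child

    def mutate(perm):
        i, j = random.sample(range(N_JOBS), 2)
        perm[i], perm[j] = perm[j], perm[i]

    pop = [random.sample(range(N_JOBS), N_JOBS) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=total_flowtime)
        elite, children = pop[:10], []
        while len(children) < 30:
            child = order_crossover(*random.sample(elite, 2))
            if random.random() < 0.2:
                mutate(child)
            children.append(child)
        pop = elite + children

    best = min(pop, key=total_flowtime)
    print("best sequence:", best, "total flowtime:", total_flowtime(best))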

Relevance: 100.00%

Publisher:

Abstract:

In this work, we study the performance evaluation of resource-aware business process models. We define a new framework that allows the generation of analytical models for performance evaluation from business process models annotated with resource management information. This framework is composed of a new notation that allows the specification of resource management constraints and a method to convert a business process specification and its resource constraints into Stochastic Automata Networks (SANs). We show that the analysis of the generated SAN model provides several performance indices, such as average throughput of the system, average waiting time, average queue size, and utilization rate of resources. Using the BP2SAN tool - our implementation of the proposed framework - and a SAN solver (such as the PEPS tool), we show through a simple use case how a business specialist with no skills in stochastic modeling can easily obtain performance indices that, in turn, can help to identify bottlenecks in the model, perform workload characterization, define the provisioning of resources, and study other performance-related aspects of the business process.
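
The flavour of the performance indices obtained from such analytical models can be shown with a much smaller stand-in: a single resource modelled as an M/M/1/K Markov chain solved directly. This sketch uses neither BP2SAN nor PEPS, and the rates and capacity are invented.

    import numpy as np

    LAMBDA, MU, K = 4.0, 5.0, 10          # arrival rate, service rate, queue capacity (assumed)

    # Generator matrix of the birth-death chain on states 0..K.
    Q = np.zeros((K + 1, K + 1))
    for n in range(K + 1):
        if n < K:
            Q[n, n + 1] = LAMBDA
        if n > 0:
            Q[n, n - 1] = MU
        Q[n, n] = -Q[n].sum()

    # Steady-state distribution: solve pi Q = 0 subject to sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(K + 1)])
    b = np.zeros(K + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    utilization = 1.0 - pi[0]
    throughput = MU * utilization
    mean_queue = float(np.arange(K + 1) @ pi)
    mean_wait = mean_queue / throughput    # Little's law
    print(f"utilization={utilization:.3f}  throughput={throughput:.3f}  "
          f"mean queue={mean_queue:.3f}  mean wait={mean_wait:.3f}")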

Relevance: 100.00%

Publisher:

Abstract:

This combined PET and ERP study was designed to identify the brain regions activated during switching and divided attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]. We now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and the cerebellum. Each task activated different prefrontal cortical regions, lending support to the view that the functional subspecialisation of the prefrontal and temporal cortices is based on the cognitive operations required rather than on the stimuli themselves. (C) 2003 Elsevier Science B.V. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternate data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
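
A crude version of the Halstead computation for a single SQL query is sketched below; the tokenizer and the operator list are simplifications introduced here, not the study's instrument:

    import math, re

    # Keywords and symbols treated as operators; everything else is an operand (a simplification).
    OPERATORS = {"select", "from", "where", "join", "on", "and", "or", "group", "by",
                 "having", "order", "=", "<", ">", "<=", ">=", "<>", ",", "(", ")", "*"}

    def halstead(query):
        tokens = re.findall(r"[A-Za-z_][A-Za-z_0-9.]*|[<>=]+|[(),*]|\d+", query.lower())
        ops = [t for t in tokens if t in OPERATORS]
        opnds = [t for t in tokens if t not in OPERATORS]
        n1, n2 = len(set(ops)), len(set(opnds))       # distinct operators / operands
        N1, N2 = len(ops), len(opnds)                 # total operators / operands
        vocabulary, length = n1 + n2, N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2) * (N2 / n2) if n2 else 0.0
        return {"volume": volume, "difficulty": difficulty, "effort": volume * difficulty}

    print(halstead("SELECT name, total FROM orders JOIN customers ON orders.cid = customers.id "
                   "WHERE total > 100 ORDER BY total"))

Comparing the weighted average of such metrics over a representative sample of information requests, schema by schema, is the ex ante evaluation the methodology proposes.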

Relevance: 100.00%

Publisher:

Abstract:

Power systems rely heavily on ancillary services to maintain operational security. As one of the most important ancillary services, spinning reserve must be provided effectively in the deregulated market environment. This paper focuses on the design of an integrated market for both electricity and spinning reserve service, with particular emphasis on the coordinated dispatch of bulk power and spinning reserve services. A new market dispatching mechanism has been developed to minimize the ISO's total payment while ensuring system security. Genetic algorithms are used to find globally optimal solutions for this dispatching problem. Case studies and corresponding analyses have been carried out to demonstrate and discuss the efficiency and usefulness of the proposed market.
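
A toy version of such a GA-based co-dispatch is sketched below; the three units, their bids, the demand and reserve requirements, and the penalty-based constraint handling are invented for illustration and are not the paper's market model:

    import random

    random.seed(2)
    UNITS = [(200, 22.0, 5.0), (150, 28.0, 3.5), (100, 35.0, 2.0)]  # capacity MW, energy $/MWh, reserve $/MW
    DEMAND, RESERVE_REQ = 320.0, 60.0
    CAPS = [c for cap, _, _ in UNITS for c in (cap, cap)]           # per-gene upper bounds

    def cost(x):
        # x = [p1, r1, p2, r2, p3, r3]; ISO payment plus large penalties for violated constraints.
        pay = sum(p * e + r * s for (cap, e, s), p, r in zip(UNITS, x[0::2], x[1::2]))
        pen = 1e4 * max(0.0, DEMAND - sum(x[0::2]))                 # energy balance
        pen += 1e4 * max(0.0, RESERVE_REQ - sum(x[1::2]))           # reserve requirement
        pen += 1e4 * sum(max(0.0, p + r - cap)                      # unit capacity
                         for (cap, _, _), p, r in zip(UNITS, x[0::2], x[1::2]))
        return pay + pen

    def child(a, b):
        w = random.random()
        c = [w * x + (1 - w) * y for x, y in zip(a, b)]             # blend crossover
        i = random.randrange(len(c))                                # one-gene Gaussian mutation
        c[i] = min(CAPS[i], max(0.0, c[i] + random.gauss(0, 10)))
        return c

    pop = [[random.uniform(0, u) for u in CAPS] for _ in range(60)]
    for _ in range(300):
        pop.sort(key=cost)
        pop = pop[:20] + [child(*random.sample(pop[:20], 2)) for _ in range(40)]

    best = min(pop, key=cost)
    print("dispatch [p1, r1, p2, r2, p3, r3] (MW):", [round(v, 1) for v in best],
          " payment:", round(cost(best), 1))

Penalty terms are the simplest way to keep infeasible dispatches out of the final population; a production implementation would more likely repair solutions or use constrained operators.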

Relevance: 100.00%

Publisher:

Abstract:

Genetic algorithms (GAs) are known to locate the globally optimal solution provided a sufficient population size and/or number of generations is used. In practice, a near-optimal, satisfactory result can be found by GAs within a limited number of generations. In wireless communications, the exhaustive searching approach is widely applied to many techniques, such as maximum likelihood decoding (MLD) and distance spectrum (DS) techniques. The complexity of the exhaustive searching approach in the MLD or the DS technique is exponential in the number of transmit antennas and the size of the signal constellation for multiple-input multiple-output (MIMO) communication systems. If a large number of antennas and large signal constellations, e.g. PSK and QAM, are employed in the MIMO systems, the exhaustive searching approach becomes impractical and time-consuming. In this paper, GAs are applied to the MLD and DS techniques to provide near-optimal performance with reduced computational complexity for the MIMO systems. Two different GA-based efficient searching approaches are proposed for the MLD and DS techniques, respectively. The first proposed approach is based on a GA with the sharing function method, which is employed to locate the multiple solutions of the distance spectrum for Space-time Trellis Coded Orthogonal Frequency Division Multiplexing (STTC-OFDM) systems. The second approach is a GA-based MLD that attempts to find the closest point to the transmitted signal. The proposed approach can return a satisfactory result when a good initial signal vector is provided to the GA. Simulation results show that the proposed GA-based efficient searching approaches can achieve near-optimal performance, but with lower searching complexity compared with the original MLD and DS techniques for the MIMO systems.
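
A compact illustration of the GA-based MLD idea (not the paper's exact scheme, and without the sharing-function DS search): for a small 4x4 QPSK system, a GA searches symbol-index vectors that minimise ||y - Hx||^2, and the result is compared with exhaustive MLD.

    import itertools, random
    import numpy as np

    rng = np.random.default_rng(0)
    random.seed(0)
    NT = 4
    QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    # Rayleigh-like channel, random transmitted vector, small additive noise.
    H = (rng.standard_normal((NT, NT)) + 1j * rng.standard_normal((NT, NT))) / np.sqrt(2)
    tx = tuple(int(i) for i in rng.integers(0, 4, NT))
    y = H @ QPSK[list(tx)] + 0.1 * (rng.standard_normal(NT) + 1j * rng.standard_normal(NT))

    def metric(idx):
        return float(np.linalg.norm(y - H @ QPSK[list(idx)]) ** 2)

    # GA over index vectors: elitist selection, uniform crossover, single-gene mutation.
    pop = [tuple(int(i) for i in rng.integers(0, 4, NT)) for _ in range(20)]
    for _ in range(50):
        pop.sort(key=metric)
        elite, children = pop[:6], []
        while len(children) < 14:
            a, b = random.sample(elite, 2)
            c = [a[i] if random.random() < 0.5 else b[i] for i in range(NT)]
            c[random.randrange(NT)] = random.randrange(4)
            children.append(tuple(c))
        pop = elite + children

    ga_best = min(pop, key=metric)
    ml_best = min(itertools.product(range(4), repeat=NT), key=metric)   # exhaustive MLD (4**NT points)
    print("tx:", tx, " GA estimate:", ga_best, " exhaustive ML:", ml_best)

For 4 antennas and QPSK the exhaustive search is only 256 points, so the saving is negligible here; the gap the abstract targets opens up for larger antenna counts and constellations, where the GA evaluates only a small fraction of the candidate vectors.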

Relevance: 100.00%

Publisher:

Abstract:

DNA microarrays are a powerful tool for measuring the levels of a mixed population of nucleic acids at one time, which has had a great impact on many aspects of life sciences research. In order to distinguish nucleic acids with very similar composition by hybridization, it is necessary to design microarray probes with high specificities and sensitivities. Highly specific probes correspond to probes having unique DNA sequences, whereas highly sensitive probes correspond to those with melting temperatures within a desired range and no secondary structure. The selection of these probes from a set of functional DNA sequences (exons) constitutes a computationally expensive discrete non-linear search problem. We delegate the search task to a simple yet effective Evolution Strategy algorithm. The computational efficiency is also greatly improved by making use of an available bioinformatics tool.
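
A minimal sketch of the evolution-strategy idea follows; the random sequences, the Wallace-rule melting temperature, the uniqueness penalty and the (1+4) scheme are simplifications chosen here, not the authors' fitness function or tooling:

    import random

    random.seed(3)
    EXONS = ["".join(random.choice("ACGT") for _ in range(300)) for _ in range(5)]
    TARGET_TM, PROBE_LEN = 60.0, 25

    def fitness(exon_i, start):
        probe = EXONS[exon_i][start:start + PROBE_LEN]
        # Wallace rule: Tm = 2(A+T) + 4(G+C); specificity: the probe should occur only once.
        tm = 2 * (probe.count("A") + probe.count("T")) + 4 * (probe.count("G") + probe.count("C"))
        hits = sum(e.count(probe) for e in EXONS)
        return abs(tm - TARGET_TM) + 100 * (hits - 1)

    def mutate(start, sigma, limit):
        return min(limit, max(0, start + int(random.gauss(0, sigma))))

    # (1+4)-ES over the probe's start position within exon 0, with a slowly shrinking step size.
    limit = len(EXONS[0]) - PROBE_LEN
    parent, sigma = random.randrange(limit), 20.0
    for _ in range(200):
        offspring = [mutate(parent, sigma, limit) for _ in range(4)]
        best = min(offspring, key=lambda s: fitness(0, s))
        if fitness(0, best) <= fitness(0, parent):
            parent = best
        sigma = max(1.0, sigma * 0.98)

    print("probe:", EXONS[0][parent:parent + PROBE_LEN], " score:", fitness(0, parent))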

Relevance: 100.00%

Publisher:

Abstract:

Foreign exchange trading has recently emerged as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random parameters, so the creation of a system that effectively emulates the trading process would be very helpful. A major issue for traders in the deregulated Foreign Exchange Market is when to sell and when to buy a particular currency in order to maximize profit. This paper presents novel trading strategies based on the machine learning methods of genetic algorithms and reinforcement learning.
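
To make the reinforcement-learning half concrete, here is a deliberately tiny sketch: tabular Q-learning of a long/flat rule on a synthetic random-walk exchange rate. The state definition, the action set and the reward are invented here and are not the paper's strategies.

    import random

    random.seed(4)
    prices = [1.0]
    for _ in range(5000):                               # synthetic exchange-rate series
        prices.append(prices[-1] * (1 + random.gauss(0.0001, 0.005)))

    Q = {(s, a): 0.0 for s in (-1, 1) for a in (0, 1)}  # state (last move) x action (flat/long)
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def state(t):
        return 1 if prices[t] >= prices[t - 1] else -1

    for t in range(1, len(prices) - 1):
        s = state(t)
        a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda x: Q[(s, x)])
        reward = a * (prices[t + 1] / prices[t] - 1)    # position held times next-step return
        s2 = state(t + 1)
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])

    print({k: round(v, 6) for k, v in Q.items()})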

Relevance: 100.00%

Publisher:

Abstract:

Biological wastewater treatment is a complex, multivariate process in which a number of physical and biological processes occur simultaneously. In this study, principal component analysis (PCA) and parallel factor analysis (PARAFAC) were used to profile and characterise Lagoon 115E, a multistage biological lagoon treatment system at Melbourne Water's Western Treatment Plant (WTP) in Melbourne, Australia. The objective was to increase our understanding of the multivariate processes taking place in the lagoon. The data used in the study span a 7-year period during which samples were collected, as often as weekly, from the ponds of Lagoon 115E and subjected to analysis. The resulting database, involving 19 chemical and physical variables, was studied using the multivariate data analysis methods PCA and PARAFAC. With these methods, alterations in the state of the wastewater due to intrinsic and extrinsic factors could be discerned. The methods were effective in illustrating and visually representing the complex purification stages and cyclic changes occurring along the lagoon system. The two methods proved complementary, with each having its own beneficial features. (C) 2003 Elsevier B.V. All rights reserved.
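
The PCA step alone can be reproduced in a few lines; PARAFAC needs a three-way data array and a dedicated solver, so it is omitted, and the matrix below is synthetic stand-in data rather than the WTP measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 19))                  # samples x 19 variables (stand-in data)
    X += np.outer(np.sin(np.linspace(0, 12, 120)), rng.standard_normal(19))  # add a cyclic pattern

    Xs = (X - X.mean(0)) / X.std(0)                     # autoscale each variable
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = U * s                                      # sample coordinates on the PCs
    loadings = Vt.T                                     # variable contributions to the PCs
    explained = s**2 / (s**2).sum()

    print("variance explained by PC1-PC3:", np.round(explained[:3], 3))
    print("PC1 loadings of the first five variables:", np.round(loadings[:5, 0], 3))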

Relevance: 100.00%

Publisher:

Abstract:

Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement in complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines which maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method does not interfere very much with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented and therefore refined by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

A framework for developing marketing category management decision support systems (DSS) based upon the Bayesian Vector Autoregressive (BVAR) model is extended. Since the BVAR model is vulnerable to permanent and temporary shifts in purchasing patterns over time, a Bayesian Vector Error-Correction Model (BVECM) is adopted, a form that can correct for these shifts while still providing the other advantages of the BVAR. We present the mechanics of extending the DSS from the BVAR model to the BVECM model for the category management problem. Several additional iterative steps are required in the DSS to allow the decision maker to arrive at the best possible forecast. The revised marketing DSS framework and model-fitting procedures are described. Validation is conducted on a sample problem.
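
The error-correction form the framework moves to can be sketched with a classical (non-Bayesian) VECM fitted in statsmodels; the synthetic cointegrated series, the lag order, and the use of statsmodels are assumptions of this illustration, and the Bayesian priors and the DSS iteration are omitted.

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM

    rng = np.random.default_rng(1)
    n = 200
    trend = np.cumsum(rng.normal(0, 1, n))              # shared stochastic trend
    sales = np.column_stack([                           # three cointegrated "category sales" series
        trend + rng.normal(0, 0.5, n),
        0.8 * trend + rng.normal(0, 0.5, n),
        1.2 * trend + rng.normal(0, 0.5, n)])

    model = VECM(sales, k_ar_diff=2, coint_rank=1, deterministic="co")
    res = model.fit()
    print("cointegrating vector (beta):", np.round(res.beta.ravel(), 3))
    print("4-step forecast:")
    print(np.round(res.predict(steps=4), 2))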

Relevance: 100.00%

Publisher:

Abstract:

Information security devices must preserve security properties even in the presence of faults. This in turn requires a rigorous evaluation of the system behaviours resulting from component failures, especially how such failures affect information flow. We introduce a compositional method of static analysis for fail-secure behaviour. Our method uses reachability matrices to identify potentially undesirable information flows based on the fault modes of the system's components.
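
A toy version of the reachability-matrix check is sketched below; the components, the single fault mode, and the rule that a flow is undesirable when it bypasses the sanitizing component are invented for illustration and are not the paper's formalism.

    COMPONENTS = ["red_in", "crypto", "black_out", "bypass"]
    IDX = {c: i for i, c in enumerate(COMPONENTS)}

    NORMAL_EDGES = [("red_in", "crypto"), ("crypto", "black_out")]
    FAULT_EDGES = {
        "no fault": [],
        "bypass relay stuck closed": [("red_in", "bypass"), ("bypass", "black_out")],
    }

    def reaches(edges, src, dst, avoiding):
        # Reachability matrix via Warshall's algorithm, with the sanitizing component
        # removed: any remaining src -> dst path is an unsanitized information flow.
        n = len(COMPONENTS)
        R = [[False] * n for _ in range(n)]
        for a, b in edges:
            if avoiding not in (a, b):
                R[IDX[a]][IDX[b]] = True
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    R[i][j] = R[i][j] or (R[i][k] and R[k][j])
        return R[IDX[src]][IDX[dst]]

    for fault, extra in FAULT_EDGES.items():
        leak = reaches(NORMAL_EDGES + extra, "red_in", "black_out", avoiding="crypto")
        print(f"{fault}: unencrypted red_in -> black_out flow possible: {leak}")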

Relevance: 100.00%

Publisher:

Abstract:

Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
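
Both ideas are easy to mimic on a toy design graph; the sketch below uses networkx (a choice made here, not prescribed by the paper) and an invented device topology.

    import networkx as nx

    # Invented information-flow graph of a device design.
    G = nx.DiGraph([
        ("plaintext_in", "filter"), ("filter", "crypto"), ("crypto", "line_out"),
        ("filter", "audit_log"), ("keypad", "crypto"), ("power", "crypto")])
    SRC, DST = "plaintext_in", "line_out"

    # End-to-end traversal: components on no SRC -> DST path need no evaluation at all.
    on_path = (nx.descendants(G, SRC) & nx.ancestors(G, DST)) | {SRC, DST}
    skippable = set(G) - on_path

    # Minimal cutset: the smallest component set whose removal separates SRC from DST,
    # i.e. the smallest group that must be evaluated in depth.
    cut = nx.minimum_node_cut(G, SRC, DST)

    print("evaluate in depth:", cut)
    print("no evaluation needed:", skippable)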

Relevance: 100.00%

Publisher:

Abstract:

Online communities have evolved beyond the realm of social phenomena to become important knowledge-sharing media with real economic consequences. However, the sharing of knowledge and the communication of meaning through Internet technology present many difficulties. This is particularly so for online finance forums, where market-sensitive information and disinformation about exchange-traded stocks are regularly disseminated. The development of trust and the effect of misinformation in this environment are important to the growth of this communication medium. Forum administrators need to better understand and handle the development of trust. In this article, we analyze and discuss the communicative practices of a group of investors and members of an online community of interest. We found that conflict as a driver of knowledge sharing is an important consideration for forum administrators and designers.