984 results for Computational-Linguistic resource
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal update extensions that yield a significant reduction in computational load per iteration compared to existing sequential learning methods employed in resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency. (c) 2005 Elsevier B.V. All rights reserved.
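To make the sequential-learning setting concrete, the sketch below shows the classical resource-allocating network (RAN) decision that such algorithms build on: allocate a new hidden unit when an input is novel and poorly predicted, otherwise update the existing weights. It is a minimal illustration only; the paper's layer-by-layer and neuron-by-neuron decomposition, minimal-update extensions and pruning strategy are not reproduced, and all thresholds are assumed values.

```python
import numpy as np

class MinimalRAN:
    """Toy resource-allocating network with Gaussian hidden units.

    Grow-or-update rule only; thresholds and learning rate are illustrative.
    """

    def __init__(self, dist_thresh=0.5, err_thresh=0.05, overlap=0.8, lr=0.05):
        self.centers, self.widths, self.weights = [], [], []
        self.bias = 0.0
        self.dist_thresh = dist_thresh   # novelty threshold on input distance
        self.err_thresh = err_thresh     # novelty threshold on prediction error
        self.overlap = overlap           # sets the width of newly allocated units
        self.lr = lr                     # LMS learning rate for the output layer

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * w ** 2))
                         for c, w in zip(self.centers, self.widths)])

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            return self.bias
        return self.bias + float(np.dot(self.weights, self._phi(x)))

    def observe(self, x, y):
        """Process one (input, target) pair sequentially."""
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        d_near = (min(np.linalg.norm(x - c) for c in self.centers)
                  if self.centers else np.inf)
        if abs(err) > self.err_thresh and d_near > self.dist_thresh:
            # Novel and poorly predicted: allocate a new hidden unit at x.
            width = d_near if np.isfinite(d_near) else self.dist_thresh
            self.centers.append(x.copy())
            self.widths.append(self.overlap * width)
            self.weights.append(err)
        else:
            # Otherwise adjust the output weights with a simple LMS step.
            phi = self._phi(x)
            self.weights = list(np.asarray(self.weights) + self.lr * err * phi)
            self.bias += self.lr * err

# Example: learn y = sin(x) from a stream of samples.
ran = MinimalRAN()
for x in np.linspace(0.0, np.pi, 200):
    ran.observe([x], np.sin(x))
print(len(ran.centers), ran.predict([1.5]))
```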
Abstract:
Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an enhanced implementation of the MapReduce programming model for accelerator-based clusters, compared to the state of the art. The framework can transparently utilize heterogeneous accelerators to deliver high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment and the first to explore the trade-offs between compute-efficient and control-efficient accelerators in data-intensive systems. Our investigation shows that our framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the resource capabilities, and performs 26.9% better on average than a static execution approach.
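The "adapts to the resource capabilities" idea can be illustrated with a toy capability-proportional split of map tasks across heterogeneous devices; the device names and throughput figures below are invented for the example, and this is not the framework's actual scheduler.

```python
def partition_by_capability(n_items, throughputs):
    """Split n_items map tasks across heterogeneous accelerators in
    proportion to their measured throughput (items per second)."""
    total = sum(throughputs.values())
    shares = {dev: int(n_items * t / total) for dev, t in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    leftover = n_items - sum(shares.values())
    shares[max(throughputs, key=throughputs.get)] += leftover
    return shares

# Toy example: a GPU measured at 3x the throughput of a Cell blade.
print(partition_by_capability(1000, {"gpu0": 300.0, "cell0": 100.0}))
# -> {'gpu0': 750, 'cell0': 250}
```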
Abstract:
In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system that focuses on the interactions of autonomous agents and aims to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm that improves time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations of travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that it is possible to produce realistic data regarding human movements without the costly measurement commonly required, for example, through GPS or retrospective or real-time diaries. The approach is transferable between different geographical locations, and the product can be useful in providing insight into human movement patterns. It therefore has use in many human exposure-related applications, specifically epidemiological research in rural areas, where spatial heterogeneity in the disease landscape and the space-time proximity of individuals can play a crucial role in disease spread.
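A minimal grid-based A* sketch of the route-generation step is given below; the cost surface, 4-neighbourhood and Manhattan heuristic are illustrative assumptions, not the study's actual terrain model.

```python
import heapq

def a_star(cost, start, goal):
    """A* over a 2-D terrain-cost grid (list of lists of positive step costs).

    Dijkstra's algorithm plus an admissible Manhattan-distance heuristic,
    which prunes the search toward the goal.
    """
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # heuristic
    open_set = [(h(start), start)]
    g_best = {start: 0.0}
    came_from = {start: None}
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                      # reconstruct the walking route
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and nbr not in closed:
                ng = g_best[node] + cost[nr][nc]
                if ng < g_best.get(nbr, float("inf")):
                    g_best[nbr] = ng
                    came_from[nbr] = node
                    heapq.heappush(open_set, (ng + h(nbr), nbr))
    return None  # no route between the two cells

# Example: route from a home cell to a water-resource cell on a toy cost surface.
terrain = [[1, 1, 5],
           [1, 9, 1],
           [1, 1, 1]]
print(a_star(terrain, (0, 0), (2, 2)))
```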
Abstract:
Distributed Energy Resources (DER) scheduling in smart grids presents a new challenge to system operators. The increase of new resources, such as storage systems and demand response programs, results in additional computational effort for optimization problems. On the other hand, since natural resources such as wind and sun can only be precisely forecasted with little anticipation, short-term scheduling is especially relevant, requiring very good performance on large-dimension problems. Traditional techniques such as Mixed-Integer Non-Linear Programming (MINLP) do not cope well with large-scale problems. This type of problem can be appropriately addressed by metaheuristic approaches. This paper proposes a new methodology called Signaled Particle Swarm Optimization (SiPSO) to address the energy resources management problem in the scope of smart grids with intensive use of DER. The proposed methodology’s performance is illustrated by a case study with 99 distributed generators, 208 loads, and 27 storage units. The results are compared with those obtained with other methodologies, namely MINLP, Genetic Algorithm, original Particle Swarm Optimization (PSO), Evolutionary PSO, and New PSO. SiPSO’s performance is superior to that of the other tested PSO variants, demonstrating its adequacy for solving large-dimension problems that require a decision in a short period of time.
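For readers unfamiliar with the underlying metaheuristic, the following sketch shows the plain PSO loop that SiPSO and the other variants extend; the objective, bounds and parameters are placeholders, and the signaling mechanism itself is not reproduced.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimisation loop (minimisation).

    Illustrative only: SiPSO and the other compared variants add further
    mechanisms on top of this basic position/velocity update.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy usage: minimise a sphere function standing in for a scheduling cost.
best_x, best_cost = pso(lambda z: float(np.sum(z ** 2)), dim=5, bounds=(-10, 10))
print(best_cost)
```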
Abstract:
This paper presents a biased random-key genetic algorithm for the resource constrained project scheduling problem. The chromosome representation of the problem is based on random keys. Active schedules are constructed using a priority-rule heuristic in which the priorities of the activities are defined by the genetic algorithm. A forward-backward improvement procedure is applied to all solutions. The chromosomes supplied by the genetic algorithm are adjusted to reflect the solutions obtained by the improvement procedure. The heuristic is tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
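The random-key encoding can be illustrated with a minimal decoder: each gene is a priority, and a serial schedule-generation scheme turns the priorities into an active schedule. The single-resource toy instance below is an assumption for demonstration; the biased crossover, elite set and forward-backward improvement of the actual algorithm are not shown.

```python
import random

def decode_schedule(keys, durations, preds, capacity, demand):
    """Serial schedule generation: random keys act as activity priorities.

    Higher key = higher priority among the currently eligible activities.
    A single renewable resource of size `capacity` is assumed for brevity.
    """
    n = len(durations)
    finish = {}
    usage = {}                                   # time slot -> resource used
    scheduled = set()
    while len(scheduled) < n:
        eligible = [j for j in range(n)
                    if j not in scheduled and all(p in scheduled for p in preds[j])]
        j = max(eligible, key=lambda a: keys[a])          # priority rule
        t = max([finish[p] for p in preds[j]], default=0) # earliest start
        while any(usage.get(t + d, 0) + demand[j] > capacity
                  for d in range(durations[j])):
            t += 1                               # shift right until resources fit
        for d in range(durations[j]):
            usage[t + d] = usage.get(t + d, 0) + demand[j]
        finish[j] = t + durations[j]
        scheduled.add(j)
    return max(finish.values())                  # project makespan

# Toy instance: 4 activities, one resource of capacity 2.
durations = [2, 3, 2, 1]
preds = [[], [0], [0], [1, 2]]
demand = [1, 2, 1, 1]
chromosome = [random.random() for _ in durations]   # one random-key individual
print(decode_schedule(chromosome, durations, preds, 2, demand))
```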
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
Abstract:
Energy resource scheduling is becoming increasingly important as the use of distributed resources is intensified and massive use of gridable vehicles (V2G) is envisaged. This paper presents a methodology for day-ahead energy resource scheduling for smart grids considering the intensive use of distributed generation and V2G. The main focus is the comparison of different EV management approaches in day-ahead energy resources management, namely uncontrolled charging, smart charging, V2G, and Demand Response (DR) programs in the V2G approach. Three different DR programs are designed and tested (trip reduce, shifting reduce and reduce+shifting). Another important contribution of the paper is the comparison between deterministic and computational intelligence techniques to reduce the execution time. The proposed scheduling is solved with a modified particle swarm optimization. Mixed-integer non-linear programming is also used for comparison purposes. A full AC power flow calculation is included so that network constraints can be taken into account. A case study with a 33-bus distribution network and 2000 V2G resources is used to illustrate the performance of the proposed method.
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternative linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
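Stated in standard complexity-theoretic notation, with $C$ denoting a formal comprehension problem (the symbol is introduced here only for illustration), the two-sided claim reads:

```latex
\[
  \underbrace{C \in \mathrm{NP}}_{\text{upper bound}}
  \quad\text{and}\quad
  \underbrace{\forall L \in \mathrm{NP}:\; L \le_{p} C}_{\text{lower bound (NP-hardness)}}
  \iff\; C \text{ is NP-complete.}
\]
```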
Abstract:
A novel Swarm Intelligence method for best-fit search, Stochastic Diffusion Search (SDS), is presented that is capable of rapidly locating the optimal solution in the search space. Population-based search mechanisms employed by Swarm Intelligence methods can suffer from a lack of convergence, resulting in ill-defined stopping criteria and loss of the best solution. Conversely, as a result of its resource allocation mechanism, the solutions SDS discovers enjoy excellent stability.
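A minimal sketch of the test-and-diffuse cycle behind SDS is shown below on a string-search toy problem; the agent count, iteration budget and cluster-size readout are illustrative assumptions rather than the method as presented in the paper.

```python
import random

def sds_string_search(text, pattern, n_agents=50, iters=200, seed=1):
    """Minimal Stochastic Diffusion Search: locate `pattern` inside `text`.

    Each agent holds a candidate start position (its hypothesis).  In the
    test phase it checks one randomly chosen character of the pattern; in
    the diffusion phase inactive agents either copy an active agent's
    hypothesis or restart at a random position.  The resource allocation
    behaviour mentioned above is this recruitment of agents to promising
    hypotheses.
    """
    rng = random.Random(seed)
    max_start = len(text) - len(pattern)
    hyps = [rng.randint(0, max_start) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: partial evaluation of each hypothesis.
        for i, h in enumerate(hyps):
            k = rng.randrange(len(pattern))
            active[i] = (text[h + k] == pattern[k])
        # Diffusion phase: inactive agents recruit or restart.
        for i in range(n_agents):
            if not active[i]:
                j = rng.randrange(n_agents)
                hyps[i] = hyps[j] if active[j] else rng.randint(0, max_start)
    # Report the hypothesis that attracted the largest cluster of agents.
    return max(set(hyps), key=hyps.count)

# Typically converges to the start index of "lazy" in the sentence below.
print(sds_string_search("the quick brown fox jumps over the lazy dog", "lazy"))
```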
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions, while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach in the efficient real-time measurement of stormwater systems.
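The sampling-policy idea can be sketched as a small decision rule that trades energy against temporal resolution; the thresholds, forecast-probability input and interval values below are hypothetical placeholders, not the deployed node's firmware.

```python
def next_sampling_interval(recent_levels, rain_prob,
                           base_interval_s=900, min_interval_s=60):
    """Choose the next sampling interval (seconds) for a flow/level sensor.

    Toy policy standing in for the node's embedded logic: sample faster when
    the local signal is changing quickly or when the public forecast gives a
    high probability of precipitation; otherwise relax toward the base rate
    to save energy, reagents and sample bottles.
    """
    # Local state estimate: magnitude of the most recent change in level.
    trend = abs(recent_levels[-1] - recent_levels[-2]) if len(recent_levels) >= 2 else 0.0
    if trend > 0.05 or rain_prob > 0.7:      # abrupt change likely or under way
        return min_interval_s
    if rain_prob > 0.3:                      # storm possible: moderate speed-up
        return base_interval_s // 4
    return base_interval_s                   # quiescent conditions

# Example: flat hydrograph but a 60% chance of rain in the next hour.
print(next_sampling_interval([0.41, 0.41, 0.42], rain_prob=0.6))   # -> 225
```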
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The central theme of this research is the study of the bibliographic record conversion process. The object of study is the conversion of analog bibliographic records to the MARC21 Bibliographic Format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The objective of this research is to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, drawing on the linguistic studies of Saussure and Hjelmslev on the manifestations of human language, to support the development of a computational interpreter for converting bibliographic records to the MARC21 Bibliographic Format, so that both the semantic value of the represented informational resource and the reliability of the representation can be confirmed. Given these objectives, the methodological trajectory of the research is based on a qualitative approach of an exploratory, descriptive and experimental nature, with recourse to the literature. Contributions on the theoretical plane can be envisaged regarding the development of questions inherent to the syntactic and semantic aspects of bibliographic records, and in the interdisciplinarity involved between Information Science, Computer Science and Linguistics. Contributions to the practical field are identified by the fact that the study covers the development of Scan for MARC, a computational interpreter that can be adopted by any institution wishing to convert bibliographic record databases to the MARC21 Bibliographic Format from description and visualization schemes of bibliographic records (AACR2r and ISBD), an aspect of the research which is considered innovative.
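By way of illustration, a simplified mapping from a parsed catalogue description to MARC21 Bibliographic fields is sketched below; the tags and subfields used (100, 245, 260) are standard MARC21, but the input structure and the mapping itself are assumptions for demonstration and do not represent the Scan for MARC interpreter.

```python
def to_marc21(record):
    """Map a parsed description (a plain dict) to simplified MARC21 fields.

    Illustrative only: 100 = main entry (personal name), 245 = title
    statement, 260 = publication information.
    """
    fields = []
    if "author" in record:
        fields.append(("100", "1 ", [("a", record["author"])]))
    title_subs = [("a", record["title"])]
    if "responsibility" in record:
        title_subs.append(("c", record["responsibility"]))
    fields.append(("245", "10", title_subs))
    if "publisher" in record:
        fields.append(("260", "  ", [("a", record.get("place", "[S.l.]")),
                                     ("b", record["publisher"]),
                                     ("c", record.get("date", "[s.d.]"))]))
    return fields

# Toy input record described in AACR2r/ISBD-like terms.
example = {"author": "Saussure, Ferdinand de",
           "title": "Curso de linguística geral",
           "place": "São Paulo", "publisher": "Cultrix", "date": "2006"}
for tag, ind, subs in to_marc21(example):
    print(tag, ind, " ".join(f"${code} {value}" for code, value in subs))
```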
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Linguística e Língua Portuguesa - FCLAR