890 results for distributed meta classifiers
Abstract:
The project presented here aims to investigate processes for improving the argumentative-text writing competence of secondary school students. It draws on convergent contributions from the theories and techniques of cognitive psychology and from social perspectives on language. Accordingly, it combines the revision model of Hayes and Flower (1983; 1980) and of Hayes, Flower, Schriver, Stratman and Carey (1987) with Bronckart's discourse analysis model (2004; 1996) and Adam's linguistic proposal (2006; 1992). The hypothesis underlying the investigation is that (meta)linguistic awareness can facilitate text revision and the improvement of argumentative writing competence in a Writing Workshop context. The investigation combined two phases. In the intensive case-study phase, 11th-grade Portuguese students at a school in Porto made progress through sustained Writing Workshop work. After the experience, even students with difficulties showed command of metalanguage, greater (meta)linguistic awareness in revision, and self-regulation of their learning. In the extensive phase, a questionnaire survey was administered to a probability sample of secondary school teachers, also from Porto. The teachers valued the aspects mentioned above, as well as the reading of argumentative texts and the training of explicit language knowledge. However, in triangulation, the responses point to teacher-centred instruction, with little openness to writing projects, to new technologies, or to sharing texts with the school and the wider community. The results indicate that argumentative writing competence can be improved in a Writing Workshop, through internalisation of the text genre and deepening of (meta)linguistic competence. In the context of the study, the influence of writing training embedded in school projects is confirmed, with the student as the agent of his or her own learning, in interaction with the environment.
Abstract:
This paper is organized as follows. The first section deals with Hardt and Negri's Empire; the second focuses on Beck's World Risk Society; the third tackles the functional differentiation argument posed by Buzan and Albert. By way of conclusion, the final section briefly discusses alternatives to grand narratives and master concepts.
Abstract:
We run a standard income convergence analysis for the last decade and confirm an already established finding in the growth economics literature: EU countries are converging, and regions in Europe are also converging, but within countries regional disparities are on the rise. At the same time, there is probably no reason for EU Cohesion Policy to be concerned with what happens inside countries. Ultimately, our data show that national governments redistribute well across regions, whether they are fiscally centralised or decentralised. It is difficult to establish whether Structural and Cohesion Funds play any role in recent growth convergence patterns in Europe. Generally, macroeconomic simulations produce better results than empirical tests. It is thus possible that Structural Funds do not fully realise their potential because they are not efficiently allocated, are badly managed, are used for the wrong investments, or a combination of all three. The approach to assessing the effectiveness of EU funds should be consistent with the rationale behind the post-1988 EU Cohesion Policy. Standard income convergence analysis is certainly not sufficient and should be accompanied by an assessment of changes in the efficiency of the capital stock in the recipient countries or regions, as well as by a more qualitative assessment. EU funds for competitiveness and employment should be allocated by looking at each region's capital efficiency, to maximise growth-generating effects, or on a purely competitive basis.
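The income convergence analysis mentioned above is usually operationalised as a beta-convergence regression: growth is regressed on log initial income, and a negative slope means poorer economies catch up. A minimal sketch with hypothetical regional figures (the function name and the data are illustrative, not from the paper):

```python
import math

def beta_convergence(initial_income, growth):
    """OLS slope of growth on log(initial income).
    A negative slope means poorer regions grow faster (convergence)."""
    x = [math.log(v) for v in initial_income]
    n = len(x)
    mx, my = sum(x) / n, sum(growth) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, growth))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical regional data: poorer regions grow faster.
income = [10_000, 20_000, 30_000, 40_000]
growth = [0.05, 0.04, 0.03, 0.02]
assert beta_convergence(income, growth) < 0  # convergence
```

The sign of the slope, not its exact value, is what the convergence claim rests on.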
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms for finding frequent graphs have received increasing attention in recent years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem for discovering interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm: a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments such as computational grids.
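The receiver-initiated scheme described above can be illustrated with a toy round-based simulation (the donor-side "steal half" policy and all names are our illustrative assumptions, not the paper's exact protocol): a worker whose local pool runs dry polls a randomly chosen peer and takes part of its pending work.

```python
import random

def run(pools, seed=0):
    """Round-based toy simulation of receiver-initiated load balancing:
    a worker with an empty pool polls one random peer per round and,
    if the peer has work, steals half of it."""
    rng = random.Random(seed)
    done = [0] * len(pools)
    while any(pools):
        for i, pool in enumerate(pools):
            if not pool:                      # idle: poll a random peer
                j = rng.randrange(len(pools))
                if j != i and pools[j]:
                    half = len(pools[j]) // 2
                    pool.extend(pools[j][:half])
                    del pools[j][:half]
            if pool:
                pool.pop()                    # execute one task
                done[i] += 1
    return done

# All tasks start at worker 0; stealing spreads the load.
counts = run([list(range(100)), [], [], []])
assert sum(counts) == 100 and max(counts) < 100
```

Because idle workers initiate the transfers, busy workers pay no coordination cost while they still have local work, which is what makes the scheme attractive for irregular search trees.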
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a peer-to-peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load balancing policy that is particularly suitable for irregular search algorithms. A modular design separates the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation was carried out on a parallel frequent subgraph mining algorithm, which showed good scalability.
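The message-based asynchronous style described here can be sketched with agents that exchange messages through mailboxes, so that a send never blocks on the receiver (all class and method names below are ours, not the framework's API):

```python
from collections import deque

class Agent:
    """Minimal agent with an inbox; send() only enqueues, so the
    sender never waits for the receiver (asynchronous messaging)."""
    def __init__(self, name, system):
        self.name, self.system = name, system
        self.inbox = deque()
        self.received = []

    def send(self, to, payload):
        self.system.agents[to].inbox.append((self.name, payload))

    def step(self):                # handle one pending message, if any
        if self.inbox:
            self.received.append(self.inbox.popleft())

class System:
    def __init__(self):
        self.agents = {}

    def spawn(self, name):
        self.agents[name] = Agent(name, self)
        return self.agents[name]

    def run(self):                 # drain all mailboxes
        while any(a.inbox for a in self.agents.values()):
            for a in list(self.agents.values()):
                a.step()

system = System()
a, b = system.spawn("a"), system.spawn("b")
a.send("b", "ping")                # returns immediately
system.run()
assert b.received == [("a", "ping")]
```

A real deployment would replace the in-process mailboxes with network transport, but the decoupling of send and delivery is the essential property.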
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, for which no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware, and is suitable for large-scale, multi-domain, heterogeneous environments such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages, adopting distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
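One way to read the combination claimed above, global statistics plus randomized polling, is the following sketch (our illustrative reconstruction, not the paper's algorithm): a requester polls peers in random order, but only accepts a donor whose job-pool size exceeds the globally averaged load.

```python
import random

def pick_donor(loads, my_id, rng):
    """Poll peers in random order (scalable) and accept the first whose
    job-pool exceeds the global average load (statistics-guided)."""
    avg = sum(loads) / len(loads)
    order = list(range(len(loads)))
    rng.shuffle(order)
    for j in order:
        if j != my_id and loads[j] > avg:
            return j
    return None

# Only worker 1 is above the average load, so it is always chosen.
assert pick_donor([0, 10, 0, 0], my_id=0, rng=random.Random(7)) == 1
# A perfectly balanced system yields no donor.
assert pick_donor([3, 3, 3, 3], my_id=0, rng=random.Random(7)) is None
```

The random order keeps any single node from becoming a polling hotspot, while the average-load threshold avoids pointless transfers from lightly loaded peers.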
Abstract:
Recently, two approaches have been introduced that distribute the molecular fragment mining problem. The first applies a master/worker topology; the second, a completely distributed peer-to-peer system, solves the scalability problem caused by the bottleneck at the master node. However, in many real-world scenarios the participating computing nodes cannot communicate directly because of administrative policies such as security restrictions. Thus, potential computing power is not accessible to accelerate the mining run. To overcome this shortcoming, this work introduces a hierarchical topology of computing resources, which distributes the management over several levels and adapts to the natural structure of such multi-domain architectures. The most important aspect is the load balancing scheme, which has been designed and optimized for the hierarchical structure. The approach allows dynamic aggregation of heterogeneous computing resources and is applied to wide-area network scenarios.
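The hierarchical idea can be sketched as a top-down split: each manager divides the aggregate workload of its sub-domains as evenly as possible and recurses, so no worker ever needs to talk across domain boundaries. Nested lists stand in for the domain tree here; this is our simplification, not the paper's actual scheme.

```python
def count(tree):
    """Total tasks in a domain tree (leaves are worker task counts)."""
    return tree if isinstance(tree, int) else sum(count(s) for s in tree)

def balance(tree, total=None):
    """Each manager splits its assigned workload as evenly as possible
    among its direct sub-domains, then recurses into each of them."""
    if total is None:
        total = count(tree)
    if isinstance(tree, int):      # a worker: it just gets its share
        return total
    base, extra = divmod(total, len(tree))
    return [balance(sub, base + (1 if i < extra else 0))
            for i, sub in enumerate(tree)]

# Two domains of two workers each; all 10 tasks start at one worker.
assert balance([[10, 0], [0, 0]]) == [[3, 2], [3, 2]]
```

Because each level only exchanges aggregate counts with its direct parent and children, the scheme respects exactly the kind of multi-domain communication restrictions the abstract describes.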
Abstract:
This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management in which a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active network technology provides the lower-level layer that makes it possible to deploy code implementing teleo-reactive agents, distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks to deal with network faults.
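A teleo-reactive agent of the kind mentioned here can be sketched as an ordered list of condition-action rules scanned on every cycle, with the first applicable rule firing. This is a drastically simplified stand-in for the Situation Calculus / Reactive Golog machinery, with hypothetical fault-handling rules:

```python
def react(rules, state):
    """Fire the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(state):
            return action(state)
    return state                       # no rule applies: leave state alone

# Hypothetical repair policy: restart a dead link first,
# otherwise relieve an overloaded one.
rules = [
    (lambda s: not s["link_up"], lambda s: {**s, "link_up": True}),
    (lambda s: s["load"] > 0.9,  lambda s: {**s, "load": 0.5}),
]
state = {"link_up": False, "load": 0.95}
state = react(rules, state)            # first rule wins: link restored
assert state == {"link_up": True, "load": 0.95}
state = react(rules, state)            # now the load rule applies
assert state == {"link_up": True, "load": 0.5}
```

Re-evaluating the whole rule list on every cycle is what makes the behaviour "teleo-reactive": the agent keeps working toward its goal even if the network state changes under it.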
Abstract:
In real-world applications, sequential data mining and data exploration algorithms are often unsuitable for datasets of enormous size, high dimensionality, and complex data structure. Grid computing promises unprecedented opportunities for unlimited computing and storage resources. In this context there is a need to develop high-performance distributed data mining algorithms. However, the computational complexity of the problem and the large amount of data to be explored often make the design of large-scale applications particularly challenging. In this paper we present the first distributed formulation of a frequent subgraph mining algorithm for discriminative fragments of molecular compounds. Two distributed approaches have been developed and compared on the well-known National Cancer Institute HIV-screening dataset. We present experimental results on a small-scale computing environment.
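At its core, frequent subgraph mining reduces to support counting: a fragment is frequent if it occurs in at least a minimum number of molecules, and discriminative fragments are those frequent in one class (for example, HIV-active compounds) but rare in the other. A minimal sketch over single-bond fragments (data and names are illustrative, not from the paper):

```python
from collections import Counter

def frequent_fragments(molecules, min_support):
    """Count, per molecule, the single-bond fragments it contains and
    keep those occurring in at least `min_support` molecules.  Real
    miners grow larger frequent subgraphs from such frequent seeds."""
    counts = Counter()
    for bonds in molecules:
        counts.update({tuple(sorted(b)) for b in bonds})  # 1 count/molecule
    return {f for f, c in counts.items() if c >= min_support}

# Hypothetical molecules given as sets of atom-label bonds.
mols = [
    {("C", "O"), ("C", "C")},
    {("O", "C"), ("C", "N")},
    {("C", "O")},
]
assert frequent_fragments(mols, min_support=3) == {("C", "O")}
```

The distributed formulations in the abstract partition the search over larger and larger candidate fragments, which is where the irregular search tree, and hence the load balancing problem, comes from.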