984 results for ARQUITETURA E ORGANIZAÇÃO DE COMPUTADORES (Computer Architecture and Organization)
Abstract:
This paper discusses the dilemmas and challenges of the trade union organization of social workers in contemporary Brazil. The study is grounded in a literature review, especially of works dealing with the workers' trade union movement in the Brazilian reality, and in field research consisting of interviews with national union leaders of CUT and CONLUTAS, with representatives of the national bodies of the professional category of social workers, notably CFESS, ABEPSS and ENESSO, and with the category's national labor union federation, FENAS. The analysis of the object is oriented by the perspective of totality, considering the founding and contradictory aspects of the current socio-historical dynamics. The inflections that occurred in the Brazilian labor movement in the early 1990s, when the offensive of capital, characterized by the combination of flexible accumulation and the dictates of neoliberal policy, became established in the country, caused a profound shock in the life and organization of the working class. The major repercussions of this process are evident today in the defensive form taken by trade union struggles, which are notably fragile and fragmented. In the case of the category of social workers, this political retreat is symptomatic in the process of reopening their unions and in the creation of FENAS. This assessment stems from an analysis that considers more strategic the class-based perspective of organization of the anti-corporatist mass unionism of the 1980s, built largely by the category itself and expressed in the extinction of its unions and in the unification with the broader struggles of workers through the transition to unionization by branch of activity. Given this reality, we analyze the political perspectives at work in the Brazilian labor movement, characterizing the organizational arrangements of trade union struggles, and situate within this process the movement to reopen the social workers' unions, marked by the emergence of FENAS. We therefore sought to identify the particularities and the ideological and political perspectives that make up the dilemma this reopening poses for the trade union movement, insofar as it corresponds to a political tendency largely superseded within Brazilian social work.
Abstract:
To the extent that the expansion of cities increasingly pushes and segregates the working class into peripheral areas devoid of services and infrastructure, urban space also gains importance as an arena of class struggle. In this direction, this study analyzes the political organization of the urban social movements and popular organizations currently active in Natal-RN, in their struggle for social rights, with emphasis on the right to the city. With this dimension in view, we draw on the contributions of historical and dialectical materialism, since we believe this framework enables an understanding of the processes of collective organization from a critical perspective of totality, going beyond their immediate appearance. Data were produced through bibliographic, documentary and field research, by means of recorded semi-structured interviews with the leaders of the organizations mapped in our survey, as well as with bodies that advise the movements studied. The results allowed us to characterize the political action of the urban movements of Natal in the struggle for the recognition and guarantee of the right to the city, and to grasp the advances and obstacles in the intervention of the social movements and popular organizations of the city, highlighting the dilemmas and contradictions underlying the processes of organization and mobilization in the contemporary period. We conclude that in the territory of Natal, as in contemporary Brazil, the urban movements and the political action that occupy the public scene are necessarily intertwined with the historical trend under way since the 1990s, when the country entered a period marked by a new bourgeois offensive.
Abstract:
Since the end of the twentieth century, the new forms of document production and the new information technologies presented to Archival Science have led information professionals to rethink the archival concepts and principles postulated in the old manuals of the field. In this context, Canadian archival production stands out: it has turned the country into fertile ground for the contemporary debates surrounding the discipline, representing very well the demands that the new means of document production place on archivists in the information society, rediscovering principles and (re)defining concepts, methods and criteria for the creation, maintenance and use of documents in traditional and electronic media. It was notably in the 1980s that a new paradigm was announced in the field, and from it three currents emerged: Integrated Archival Science, enunciated by the Quebec School, which proposes reintegrating the discipline through the life cycle of documents and a possible rapprochement with Information Science, thanks to the adoption of the term recorded organic information in place of the term archival document; Functional or Postmodern Archival Science, enunciated by Terry Cook, which proposes a renewal and reformulation of the original principles and concepts of the discipline against a postmodern background; and Archival Diplomatics, first enunciated in Italy by Paola Carucci but developed and reformulated in North America by Luciana Duranti, which seeks, through the study of Diplomatics, to establish criteria for the textual criticism of contemporary documents, securing for the diplomatic method a fundamental place in contemporary Archival Science. In view of these aspects, the epistemological universe of these three Canadian archival approaches is analyzed comparatively, as emerging perspectives for the construction of a contemporary discipline capable of handling the new processes of production, organization and use of recorded organic information.
Abstract:
The next generation of computers is expected to be built on architectures with multiple processors and/or multicore processors. This raises challenges related to interconnection resources, operating frequency, on-chip area, power dissipation, performance and programmability. Networks-on-chip are considered the ideal interconnection and communication mechanism for this type of architecture, owing to their scalability, reusability and intrinsic parallelism. In a network-on-chip, communication is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted as in a pipeline between the routers of the network, from the source to the destination of the communication, and simultaneous communications between different source-destination pairs are possible. Building on this fact, this work proposes to turn the entire communication infrastructure of the network-on-chip, with its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, packets are formed by the instructions and data that represent the applications, and the instructions are executed by the routers as the packets are transmitted, exploiting the pipelined and parallel nature of the communication. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea, called IPNoSys (Integrated Processing NoC System), has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation. The architecture also provides mechanisms for input and output, interrupts and operating system support. As a proof of concept, a programming environment and a simulator for the architecture were developed in SystemC, allowing various parameters to be configured and several results to be obtained for its evaluation.
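To make the packet-driven execution model concrete, the sketch below shows, in plain Python, a packet whose instructions are consumed one per hop as it traverses a path of routers. It is only an illustration of the idea described above; the class and function names (Packet, Router, run_pipeline) are assumptions, and the real IPNoSys proof of concept is a SystemC simulator with its own programming model.

```python
# Hypothetical sketch: a packet carrying instructions that are executed
# by each router it traverses, loosely inspired by the IPNoSys idea.
from dataclasses import dataclass, field

@dataclass
class Instruction:
    op: str           # "ADD", "MUL", ...
    operands: list    # literal operand values

@dataclass
class Packet:
    instructions: list                       # instructions still to execute
    results: list = field(default_factory=list)

class Router:
    def __init__(self, name):
        self.name = name

    def process(self, packet):
        # Execute one instruction per hop while forwarding the packet.
        if packet.instructions:
            instr = packet.instructions.pop(0)
            if instr.op == "ADD":
                value = sum(instr.operands)
            elif instr.op == "MUL":
                value = 1
                for v in instr.operands:
                    value *= v
            else:
                raise ValueError(f"unknown op {instr.op}")
            packet.results.append((self.name, instr.op, value))
        return packet

def run_pipeline(routers, packet):
    # The packet flows through the path of routers as in a pipeline,
    # being partially executed at every hop.
    for router in routers:
        packet = router.process(packet)
    return packet

path = [Router(f"R{i}") for i in range(3)]
pkt = Packet(instructions=[Instruction("ADD", [1, 2]),
                           Instruction("MUL", [3, 4]),
                           Instruction("ADD", [5, 6])])
print(run_pipeline(path, pkt).results)
```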
Abstract:
This thesis proposes the architecture of a new multiagent framework for the hybridization of metaheuristics, inspired by the general Particle Swarm Optimization (PSO) framework. The main contribution is an effective approach for solving hard combinatorial optimization problems. PSO was chosen as inspiration because it is inherently multiagent, which allows features of multiagent systems, such as learning and cooperation techniques, to be exploited. In the proposed architecture, particles are autonomous agents with memory and with methods for learning and decision making, which use search strategies to move through the solution space. The concepts of position and velocity originally defined in PSO are redefined for this approach. The proposed architecture was applied to the Traveling Salesman Problem and to the Quadratic Assignment Problem, and computational experiments were performed to test its effectiveness. The experimental results were promising, with satisfactory performance, even though the potential of the proposed architecture has not been fully explored. In further research, the approach will also be applied to multiobjective combinatorial optimization problems, which are closer to real-world problems. In the context of applied research, we intend to work with both undergraduate and technical-level students on applying the proposed architecture to real-world problems.
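As an illustration of how position and velocity can be redefined for a combinatorial problem, the sketch below treats a position as a tour of the Traveling Salesman Problem and a velocity as a list of city swaps sampled from the paths toward the personal and global best tours. This is a minimal, hypothetical sketch of the general idea, not the multiagent framework proposed in the thesis; names such as ParticleAgent and swaps_towards are assumptions.

```python
# Discrete, PSO-like particle agent for the TSP (illustrative sketch only).
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swaps_towards(current, target):
    # Sequence of swaps that transforms `current` into `target`.
    current, swaps = list(current), []
    for i in range(len(current)):
        if current[i] != target[i]:
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            swaps.append((i, j))
    return swaps

class ParticleAgent:
    def __init__(self, cities, dist):
        self.dist = dist
        self.position = random.sample(cities, len(cities))  # a tour
        self.best = list(self.position)                     # personal memory

    def move(self, global_best, keep=0.5):
        # "Velocity": a random subset of the swaps leading to the personal
        # and global best tours.
        velocity = [s for s in swaps_towards(self.position, self.best) +
                    swaps_towards(self.position, global_best)
                    if random.random() < keep]
        for i, j in velocity:
            self.position[i], self.position[j] = self.position[j], self.position[i]
        if tour_length(self.position, self.dist) < tour_length(self.best, self.dist):
            self.best = list(self.position)

# Usage: a tiny 4-city instance with a random distance matrix.
cities = list(range(4))
dist = [[0 if i == j else random.randint(1, 9) for j in cities] for i in cities]
swarm = [ParticleAgent(cities, dist) for _ in range(5)]
global_best = min((p.best for p in swarm), key=lambda t: tour_length(t, dist))
for _ in range(20):
    for p in swarm:
        p.move(global_best)
    global_best = min((p.best for p in swarm), key=lambda t: tour_length(t, dist))
print(global_best, tour_length(global_best, dist))
```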
Abstract:
This work investigates the relationship between Information Architecture in digital environments and Intellectual Property Rights. It is justified by the need to better understand the dynamics emerging among digital Information and Communication Technologies, Law and Intellectual Property Rights. Three areas of knowledge are directly related to the study: Information Science, Law and Computer Science. The methodology adopted in the investigative process follows a qualitative approach; with respect to technical procedures, the research is classified as bibliographic, based on secondary sources. The results showed that current Brazilian legislation does not provide the mechanisms needed to protect, for their holders, the intellectual property rights associated with an Information Architecture project.
Abstract:
The specificity and currency of bibliographic classification systems can be considered a determining factor in the quality of the organization and representation of legal documentation. In the specific case of Brazil, the Brazilian Law Decimal Classification does not provide specific subdivisions for Labor Law procedures. This work therefore carries out a terminological study based on the tables of contents of doctrinal Labor Law books, which are compared with the conceptual structure of the Brazilian Law Decimal Classification. As a result, it presents a proposed extension for labor procedures, as well as a methodological basis for further extensions and updates.
Abstract:
Most of the solutions proposed as candidates for implementing audio and video distribution services have been designed with particular assumptions about infrastructure, the format of the video streams to be transmitted, or the types of clients the service will serve. Applications that use video distribution services usually have to deal with large oscillations in demand caused by users joining and leaving the service; the huge variation in the audience levels of television programs is a clear example. This behavior imposes an important requirement on this class of distributed systems: the ability to reconfigure themselves in response to variations in demand. This dissertation presents a study that used mobile agents to implement the servers of a video distribution service called DynaVideo. One of the main characteristics of this service is its ability to adjust its configuration as demand varies. Since DynaVideo servers can replicate themselves and are implemented as mobile code, their placement can be optimized to meet a given demand and, as a consequence, the service configuration can be adjusted to minimize the resources needed to distribute video to its users. The main contribution of this dissertation is a proof of the viability of implementing servers as Java mobile agents based on the Aglet software development environment.
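The sketch below illustrates only the reconfiguration logic suggested by the abstract: a video server agent that replicates itself onto another host when local demand exceeds its capacity and retires when demand drops. The actual DynaVideo servers are Java mobile agents built on the Aglet environment; this Python sketch, with hypothetical names such as VideoServerAgent and Platform, is an assumption-laden stand-in for that control flow.

```python
# Demand-driven replication of a video server agent (illustrative sketch).
class VideoServerAgent:
    def __init__(self, host, capacity=100):
        self.host = host
        self.capacity = capacity      # maximum concurrent clients per server
        self.clients = 0

    def on_client_join(self, platform):
        self.clients += 1
        if self.clients > self.capacity:
            # Overloaded: replicate onto the least-loaded host and hand the
            # extra client to the replica (in DynaVideo this would correspond
            # to dispatching mobile code to another node).
            clone = VideoServerAgent(platform.least_loaded_host(), self.capacity)
            platform.deploy(clone)
            self.clients -= 1
            clone.clients += 1
            return clone
        return None

    def on_client_leave(self, platform):
        self.clients -= 1
        if self.clients == 0 and len(platform.servers) > 1:
            platform.retire(self)     # shrink the service when demand drops

class Platform:
    # Minimal stand-in for the hosting environment / agent platform.
    def __init__(self, hosts):
        self.hosts = hosts
        self.servers = [VideoServerAgent(hosts[0])]

    def least_loaded_host(self):
        return min(self.hosts,
                   key=lambda h: sum(s.clients for s in self.servers if s.host == h))

    def deploy(self, server):
        self.servers.append(server)

    def retire(self, server):
        self.servers.remove(server)

# Usage: 130 clients join; the service grows from one server to two.
platform = Platform(hosts=["h1", "h2"])
for _ in range(130):
    target = min(platform.servers, key=lambda s: s.clients)
    target.on_client_join(platform)
print(len(platform.servers), [s.clients for s in platform.servers])
```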
Abstract:
The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been able to meet these requirements. General-purpose processors are inherently flexible, since they can perform several tasks, but they cannot reach high performance when compared to application-specific devices; application-specific devices, in turn, achieve high performance because they perform only a few tasks, at the cost of flexibility. Reconfigurable architectures emerged as an alternative to these traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the behavior of the device according to the application, making it possible to balance flexibility and performance while meeting the application's constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications in order to accelerate their execution on the reconfigurable logic. Instruction-level parallelism is extracted at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, the work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE organizes the reconfigurable units into reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Performance was characterized with a set of benchmarks, which demonstrated a speedup of 11x in the execution of some applications.
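Since the instruction-level parallelism is extracted at compile time, a simple way to picture that step is a dependence-driven grouping of a basic block's operations into slots that could issue together. The Python sketch below is an illustrative simplification under that assumption; it is not the optimization phase added to GCC, and the schedule_parallel_slots function and its tuple format are invented for the example.

```python
# Group independent operations of a basic block into parallel issue slots
# (illustrative list scheduling on true data dependences only).
def schedule_parallel_slots(ops):
    """ops: list of (dest, op, src1, src2) tuples in program order."""
    slots, produced_in_slot = [], []
    for dest, op, a, b in ops:
        # Earliest slot is right after the latest slot producing an input.
        earliest = 0
        for i, written in enumerate(produced_in_slot):
            if a in written or b in written:
                earliest = i + 1
        while len(slots) <= earliest:
            slots.append([])
            produced_in_slot.append(set())
        slots[earliest].append((dest, op, a, b))
        produced_in_slot[earliest].add(dest)
    return slots

basic_block = [
    ("t1", "add", "x", "y"),
    ("t2", "mul", "a", "b"),    # independent of t1 -> same slot
    ("t3", "add", "t1", "t2"),  # depends on both -> next slot
]
for i, slot in enumerate(schedule_parallel_slots(basic_block)):
    print(f"slot {i}: {slot}")
```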
Abstract:
Middleware platforms have been widely used as the underlying infrastructure for the development of distributed applications. They provide distribution and heterogeneity transparency, along with a set of services that ease the construction of distributed applications. Nowadays, middleware must accommodate an increasing variety of requirements to satisfy distinct application domains. This broad range of requirements increases the complexity of the middleware, due to the introduction of many crosscutting concerns in the architecture, which are not properly modularized by traditional programming techniques and end up tangled and scattered throughout the middleware code. The presence of these crosscutting concerns limits middleware scalability, and the aspect-oriented paradigm has been used successfully to improve the modularity, extensibility and customization capabilities of middleware. This work presents AO-OiL, an aspect-oriented (AO) middleware architecture based on a reference architecture for AO middleware. The middleware follows the philosophy that middleware functionality must be driven by the application requirements. AO-OiL consists of an AO refactoring of the OiL (Orb in Lua) middleware that separates basic and crosscutting concerns. The proposed architecture was implemented in Lua and RE-AspectLua. To evaluate the impact of the refactoring on the middleware architecture, the work presents a comparative performance analysis between AO-OiL and OiL.
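The kind of modularization the aspect-oriented refactoring achieves can be pictured with a small interceptor: the crosscutting concern (here, logging and timing) is kept in one module instead of being tangled in every operation. AO-OiL does this in Lua with RE-AspectLua aspects over the OiL invocation path; the Python decorator below, with hypothetical names such as logging_aspect and invoke, is only a sketch of the principle.

```python
# Separating a crosscutting concern (logging/timing) from the basic
# invocation concern with a decorator acting as the "aspect".
import functools
import time

def logging_aspect(func):
    # The crosscutting concern lives here, in one place, instead of being
    # scattered inside every middleware operation.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"[aspect] {func.__name__}{args} -> {result!r} ({elapsed:.2f} ms)")
        return result
    return wrapper

@logging_aspect
def invoke(servant, operation, *params):
    # Basic middleware concern: dispatch the operation to the servant object.
    return getattr(servant, operation)(*params)

class Calculator:
    def add(self, a, b):
        return a + b

invoke(Calculator(), "add", 2, 3)
```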
Abstract:
Multi-agent system designers need to determine the quality of their systems in the earliest phases of the development process. The architectures of the agents are part of the design of these systems and therefore also need to have their quality evaluated. Motivated by the important role that emotions play in everyday life, researchers in embodied agents have aimed to create agents capable of affective, natural interaction with users that produces beneficial or desirable results. As a consequence, several studies proposing agent architectures with emotions have appeared without corresponding methods for evaluating those architectures. The objective of this study is to propose a methodology for evaluating emotional agent architectures that assesses both the quality attributes of the architectural design and, through human-computer interaction evaluation, the effects on the subjective experience of users of applications that implement them. The methodology is based on a model of well-defined metrics. For the quality of the architectural design, the attributes assessed are extensibility, modularity and complexity. For the effects on the users' subjective experience, which requires implementing the architecture in an application (computer games are suggested as the domain), the metrics are enjoyment, felt support, warmth, caring, trust, cooperation, intelligence, interestingness, naturalness of emotional reactions, believability, reduction of frustration and likeability, together with the average time and the average number of attempts. The methodology was applied to five emotional agent architectures: BDIE, DETT, Camurri-Coglio, EBDI and Emotional-BDI. Two of them, BDIE and EBDI, were implemented in a version of the Minesweeper game and evaluated with respect to human-computer interaction. In the results, DETT stood out with the best architectural design. Users who played the version of the game with emotional agents performed better than those who played without them. In the assessment of the users' subjective experience, the differences between the architectures were insignificant.
Abstract:
Traceability between the models produced by the requirements and architecture activities is a strategy that aims to prevent loss of information, reducing the gap between these two initial activities of the software life cycle. In the context of Software Product Lines (SPL), it is important to have this support, establishing the correspondence between the two activities while managing variability. To address this issue, this work presents a bidirectional mapping process that defines transformation rules between the elements of a goal-oriented requirements model (described in PL-AOVgraph) and the elements of an architectural description (defined in PL-AspectualACME). The mapping rules are evaluated through a case study, the GingaForAll SPL. To automate the transformation, the MaRiPLA tool (Mapping Requirements to Product Line Architecture) was developed using Model-Driven Development (MDD) techniques, including the Atlas Transformation Language (ATL) with Ecore metamodels, the Xtext DSL definition framework and the Acceleo code generation tool, in the Eclipse environment. Finally, the generated models are evaluated with respect to quality attributes such as variability, derivability, reusability, correctness, traceability, completeness, evolvability and maintainability, extracted from the CAFÉ Quality Model.
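To give a flavor of what such a transformation rule looks like, the sketch below maps a requirements-level goal to an architectural element, turning crosscutting goals into aspectual components and carrying the variability information across. MaRiPLA expresses rules of this kind in ATL over Ecore metamodels of PL-AOVgraph and PL-AspectualACME; the element names and attributes used here are illustrative assumptions, not the real metamodels.

```python
# One toy transformation rule: requirements goal -> architectural element.
def map_goal_to_architecture(goal):
    """goal: dict with 'name', 'crosscutting' and 'variability' keys."""
    return {
        "kind": "AspectualComponent" if goal["crosscutting"] else "Component",
        "name": goal["name"].replace(" ", ""),
        # Variability information is preserved so it can still be managed
        # on the architectural side of the mapping.
        "variability": goal.get("variability", "mandatory"),
    }

requirements_model = [
    {"name": "Display Media", "crosscutting": False, "variability": "mandatory"},
    {"name": "Log Access", "crosscutting": True, "variability": "optional"},
]
architecture = [map_goal_to_architecture(g) for g in requirements_model]
print(architecture)
```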
Abstract:
In everyday life we constantly perform two frequent and important actions: classifying (sorting into classes) and making decisions. When we face problems with a relatively high degree of complexity, we tend to seek other opinions, usually from people who have some knowledge of, or are even experts in, the problem domain, in order to help us in the decision-making process. Both in classification and in decision making we are guided by the characteristics of the specific problem, and the characterization of a set of objects is part of decision making in general. In Machine Learning, classification is performed by a learning algorithm and the characterization is applied to databases. Classification algorithms can be employed individually or in committees (ensembles) of classifiers, and choosing the best methods for building a committee is a very arduous task. This work investigates meta-learning techniques for selecting the best configuration parameters of homogeneous committees across several classification problems. These parameters are the base classifier, the architecture (method of committee generation) and the size of this architecture. Nine types of inducers were investigated as candidate base classifiers, along with two methods of architecture generation and nine committee sizes. Dimensionality reduction techniques were applied to the meta-databases in search of improvements. Five classification methods are investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
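The sketch below illustrates the meta-learning loop described above: each dataset is characterized by simple meta-features, the best (base classifier, committee size) pair found by search becomes its label in a meta-database, and a meta-learner is trained to recommend a configuration for unseen datasets. The meta-features, the tiny search grid and the scikit-learn committees used here are assumptions for illustration only, far smaller than the nine inducers, two generation methods and nine sizes investigated in the thesis.

```python
# Meta-learning for committee configuration (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

base_classifiers = {"tree": DecisionTreeClassifier(), "nb": GaussianNB()}
sizes = [5, 15]

def meta_features(X, y):
    # Very small set of dataset characteristics (the "characterization").
    return [X.shape[0], X.shape[1], len(set(y))]

def best_configuration(X, y):
    # Exhaustive search over the (base classifier, size) grid for one dataset.
    scored = {}
    for name, clf in base_classifiers.items():
        for k in sizes:
            committee = BaggingClassifier(clf, n_estimators=k)
            scored[(name, k)] = cross_val_score(committee, X, y, cv=3).mean()
    return max(scored, key=scored.get)

# Build the meta-database from a few synthetic classification problems.
meta_X, meta_y = [], []
for seed in range(5):
    X, y = make_classification(n_samples=150 + 40 * seed, n_features=8 + seed,
                               n_informative=4, random_state=seed)
    meta_X.append(meta_features(X, y))
    meta_y.append("{}-{}".format(*best_configuration(X, y)))

# The meta-learner maps dataset characteristics to a recommended configuration.
meta_learner = DecisionTreeClassifier().fit(meta_X, meta_y)
X_new, y_new = make_classification(n_samples=300, n_features=10,
                                   n_informative=4, random_state=99)
print("recommended committee:", meta_learner.predict([meta_features(X_new, y_new)]))
```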
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs) as an alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-chip favor component reuse, parallelism and scalability, enhancing reusability in projects of dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is an unconventional one, since it allows operations to be executed while the communication is performed. This study evaluates the execution of data-flow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Data-flow-based applications are characterized by a continuous stream of data on which operations are executed. We expect this type of application to perform well on IPNoSys, because its programming model is similar to the execution model of the network. By observing the behavior of these applications when running on IPNoSys, changes were made to the execution model of the IPNoSys network to allow the exploitation of instruction-level parallelism. For this purpose, implementations of data-flow applications were analyzed and compared.
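As a reminder of what a data-flow/stream application looks like, and why its programming model resembles packet-driven execution, the sketch below pushes a continuous stream of samples through a fixed chain of operations, one operation per stage. The pipeline (scale, offset, clamp) and the generator-based style are illustrative assumptions, not an application evaluated in the study.

```python
# A stream of samples flows through a fixed graph of operations; each stage
# applies one operation as the data "travels", analogous to one instruction
# executed per router hop in a packet-driven execution model.
def source(samples):
    for s in samples:          # continuous input stream
        yield s

def scale(stream, factor):
    for s in stream:
        yield s * factor

def offset(stream, delta):
    for s in stream:
        yield s + delta

def clamp(stream, low, high):
    for s in stream:
        yield max(low, min(high, s))

pipeline = clamp(offset(scale(source(range(10)), factor=3), delta=-5), low=0, high=20)
print(list(pipeline))
```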