81 results for "Projeto de arquitetura"
Abstract:
In the mid-1980s, the magazine Projeto published the Actual Brazilian Architecture catalogue, presenting texts by Hugo Segawa and Ruth Verde Zein together with a corpus of works by architects active in the 1960s and 1970s. Understanding Brazilian architectural production after 1964 became, in those years of the 1980s, a significant mission for reactivating the Brazilian architectural debate weakened by the military dictatorship. In his doctoral thesis, Spadoni (2003) deals with the different tendencies that characterize Brazilian architectural production of the 1970s. Marked by inventiveness, this production was in tune with modern thinking, and in the transition between the 1970s and the 1980s it synchronized with the international debate on post-modern architecture. Building on Spadoni's thesis, this work deals with the modern experience observed in the single-family houses built in the 1970s in João Pessoa. Some modern experiences were not evident from the outside: to observe them it was necessary to look for them in the spatial arrangement and the constructive know-how, because in appearance some houses do not make explicit use of the modern language. Other observed experiences allude to the repertoire of the Brazilian period of the 1940s-1960s, to the modern architecture of São Paulo in the 1960s, and to experiences in which the climate of the Northeastern region strongly influenced the architectural conception. In a small number of houses we can also find a particular experience: one that exposes the constructive process, leaves the materials apparent, and applies to the residential type the experience of industrial prefabricated buildings.
Abstract:
The purpose of this dissertation is the architectural design of the outpatient complex of the Federal University of Pará in Belém: a health care facility whose focus is sustainability, energy efficiency and humanization. The design process comprised the application of architectural concepts, the study of theoretical and empirical references, programming, analysis of the site and its conditions, and the preliminary studies, and resulted in a preliminary architectural design. The empirical research is based on the main building of the Hospital Universitário João de Barros Barreto in Belém, the Hospital Sarah Kubitschek in Fortaleza (architect João Filgueiras de Lima, Lelé) and the Hospital e Maternidade São Luiz in São Paulo (architect Siegbert Zanettini). Part of the programming follows the "Problem Seeking" method of Peña and Parshall (2001). Throughout the development process, criteria of sustainability, energy efficiency and humanization were incorporated; regarding sustainability, the dissertation focuses on the use of rainwater for non-potable purposes.
Abstract:
Confirming a Brazilian tendency in the field, multifamily vertical condominiums in Natal, defined as buildings with three or more floors, have become an increasingly common housing solution. In this type of project, the connection between the design architects and the users/buyers is diluted: the former conceive the real estate product as a creation for the market rather than for a specific individual client. This situation, together with the technical and financial constraints of the project, leads to the adoption of standard solutions meant to serve clients with different profiles. In addition, the legal and urban parameters established by the City Master Plan strongly influence the final solution adopted in these buildings. Within this general subject, this work focuses on a case study of the Ed. Ville de Montpellier, based on Post-Occupancy Evaluation (POE), considered an efficient tool for analyzing buildings in use; it includes technical surveys of the building, questionnaires applied to the residents and informal interviews. The data show that, over time, some items that initially motivated the purchase of the property (such as the shared social area) become less valued, and that residents quickly alter the pre-built space, seeking to adapt the dwelling in a more personal and comfortable manner. The findings call attention to design-related conceptual aspects and to the interdependence between design and construction, and allow some recommendations to be made for the design of multifamily residential buildings within the studied context.
Abstract:
Natural ventilation is the most important passive strategy for providing thermal comfort in hot and humid climates, and a significant low-energy strategy. However, a naturally ventilated building demands more attention in the architectural design than a conventional air-conditioned building, and the results are less predictable. This thesis therefore focuses on software tools and methods for predicting natural ventilation performance from the point of view of the architect, who has limited resources and limited knowledge of fluid mechanics. A typical prefabricated building was modelled because of its simplified geometry, low cost and frequency on the local campus. First, the study used computational fluid dynamics (CFD) software to simulate the air flow outside and inside the building; a series of modelling approaches was developed to make the simulations feasible, at some cost to the fidelity of the results. Second, the CFD results were used as input to an energy simulation tool to model the building's thermal performance under different air-change rates. Third, the resulting temperatures were assessed in terms of thermal comfort, with complementary simulations carried out to refine the analyses. The results show the potential of these tools; however, the discussion of the simplifications adopted, the limitations of the tools and the level of knowledge of the average architect is the major contribution of this study.
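The third step, assessing temperatures in terms of thermal comfort, can be sketched with the adaptive comfort relation commonly used for naturally ventilated buildings (comfort temperature = 0.31 × mean outdoor temperature + 17.8, the ASHRAE 55 form). The abstract does not name the comfort model actually used, so this equation and the acceptability band are assumptions for illustration only.

```python
def adaptive_comfort_limits(t_out_mean, band=3.5):
    """Adaptive comfort temperature and an acceptability band around it
    (ASHRAE 55 form; band width is an illustrative assumption)."""
    t_comf = 0.31 * t_out_mean + 17.8
    return t_comf - band, t_comf + band

def hours_in_comfort(indoor_temps, t_out_mean):
    """Count hourly indoor temperatures that fall inside the comfort band."""
    low, high = adaptive_comfort_limits(t_out_mean)
    return sum(1 for t in indoor_temps if low <= t <= high)
```

A simulated year of hourly indoor temperatures, produced by the energy tool from the CFD-derived air-change rates, could be summarized this way as "hours in comfort".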
Abstract:
A conceptual discussion of architectural type and its role in theory and practice supports the construction of an analytical tool for recognizing the typological evolution of hospital architecture in Western societies. The same tool is applied to analyze the typological evolution of hospital architecture in Natal, Brazil, through a sample of eighteen hospitals built in the city since the beginning of the 20th century. The conclusion is that the typological evolution in Natal closely follows the Western one, except for a few singularities that can be explained by local social and economic development.
Abstract:
A hierarchical fuzzy control scheme is applied to improve vibration suppression in an electromechanical system based on the lever principle. The hierarchical intelligent controller consists of a hierarchical fuzzy supervisor, a fuzzy controller and a robust controller. The supervisor combines the controllers' output signals to generate the control signal applied to the plant. The objective is to improve the performance of the electromechanical system, on the premise that the supervisor can take advantage of controllers based on different techniques. The robust controller is designed from a linear mathematical model, while genetic algorithms, based on a non-linear mathematical model, are used to tune the fuzzy controller and the supervisor. Digital simulations were employed to attest the efficiency of the hierarchical fuzzy control scheme, and comparisons between the optimized and the non-optimized hierarchical controllers demonstrate the efficiency of the genetic algorithms and the advantages of their use.
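The supervisor's role of combining the two controllers' outputs can be sketched as a fuzzy-weighted blend; the membership breakpoints and the single rule used here are illustrative assumptions, not the thesis's GA-tuned supervisor.

```python
def ramp_membership(x, lo, hi):
    """Degree to which x is 'large': 0 below lo, 1 above hi, linear between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def supervisor_blend(error, u_fuzzy, u_robust):
    """Supervisor combines the two controllers' outputs into one control
    signal; here the weight on the fuzzy controller simply grows with the
    magnitude of the tracking error (illustrative rule)."""
    w = ramp_membership(abs(error), 0.1, 1.0)
    return w * u_fuzzy + (1.0 - w) * u_robust
```

With a small error the robust controller dominates; as the error grows, the blend shifts smoothly toward the fuzzy controller.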
Abstract:
In this work we propose a software architecture for robotic boats intended to operate fully autonomously in diverse aquatic environments, transmitting telemetry to a base station while accomplishing their missions. The proposal is aimed at the N-Boat project of the NatalNet/DCA laboratory, whose goal is to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security and surveillance modules, which together make up the boat and base-station systems. For validation, a simulator was developed in C and implemented using the OpenGL graphics API; the main results were obtained in the implementation of the memory, actuation and strategy modules, more specifically data sharing, control of sails and rudder, and the planning of short routes based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of the actual use of the developed software architecture and its application in the area of autonomous mobile robotics.
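The rudder-control part of the actuation module can be sketched as a proportional steering law with heading-angle wrapping; the gain and rudder limit are hypothetical values for illustration, not the N-Boat's parameters, and Python is used here for brevity even though the simulator itself was written in C.

```python
def heading_error(target, current):
    """Smallest signed angle in degrees from the current to the target heading."""
    return (target - current + 180.0) % 360.0 - 180.0

def rudder_command(target, current, gain=0.5, limit=35.0):
    """Proportional rudder command, saturated at the mechanical limit
    (gain and limit are illustrative assumptions)."""
    cmd = gain * heading_error(target, current)
    return max(-limit, min(limit, cmd))
```

The wrapping keeps the boat from turning the long way around when the target heading crosses north (e.g. steering from 350° to 10° yields a +20° error, not -340°).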
Abstract:
The next generation of computers is expected to be built on architectures with multiple processors and/or multicore processors. In this context there are challenges related to interconnection, operating frequency, on-chip area, power dissipation, performance and programmability. The interconnection and communication mechanism considered ideal for this type of architecture is the network-on-chip, owing to its scalability, reusability and intrinsic parallelism. Communication in a network-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted as in a pipeline through the routers of the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. From this observation, it is proposed to transform the entire communication infrastructure of the network-on-chip, using its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, the packets are formed by the instructions and data that represent the applications, and the instructions are executed by the routers as the packets are transmitted, exploiting the pipelined and parallel nature of the communication. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea is called IPNoSys (Integrated Processing NoC System), which has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation. The architecture also provides mechanisms for input and output, interrupts and operating system support.
As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing the configuration of various parameters and the collection of several results for its evaluation.
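The packet-as-program idea can be illustrated with a toy model: each router on the path pops one instruction from the packet, executes it against the value travelling at the packet head, and forwards the updated packet. The packet format and operation names here are drastically simplified assumptions; real IPNoSys packets carry much richer control information.

```python
def run_packet(packet, hops):
    """Toy IPNoSys-style execution: the packet head holds the current operand;
    each hop consumes and executes one (opcode, operand) instruction."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    value = packet.pop(0)  # initial operand travels at the packet head
    for _ in range(hops):
        if not packet:     # all instructions already executed en route
            break
        opcode, operand = packet.pop(0)
        value = ops[opcode](value, operand)
    return value
```

A packet carrying (3 + 4) × 2 is fully evaluated after traversing two routers, without any conventional processor touching it.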
Abstract:
Aspect-oriented approaches associated with different activities of the software development process are, in general, independent, and their models and artifacts are not aligned and inserted into a coherent process. In model-driven development, the various models and the correspondences between them are rigorously specified. By integrating aspect-oriented software development (AOSD) and model-driven development (MDD), it becomes possible to propagate models automatically from one activity to another, avoiding the loss of information and of important decisions established in each activity. This work presents MARISA-MDD, a model-based strategy that integrates aspect-oriented requirements, architecture and detailed design, using the languages AOV-graph, AspectualACME and aSideML, respectively. For each activity, MARISA-MDD defines representative models (and corresponding metamodels) and a number of transformations between the models of each language. These transformations have been specified and implemented in ATL (Atlas Transformation Language) in the Eclipse environment. MARISA-MDD allows automatic propagation between AOV-graph, AspectualACME and aSideML models. To validate the proposed approach, two case studies, Health Watcher and Mobile Media, were run in the MARISA-MDD environment for the automatic generation of AspectualACME and aSideML models from the AOV-graph model.
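The flavor of a model-to-model transformation like those MARISA-MDD specifies in ATL can be sketched in plain Python: a requirements-level model is mapped mechanically to an architecture-level model. The dictionary "metamodels" below are invented stand-ins, not the actual AOV-graph or AspectualACME metamodels.

```python
def requirements_to_architecture(aov_graph):
    """Toy model-to-model transformation: each goal in the (hypothetical)
    requirements model becomes a component, and each crosscutting relation
    becomes an aspectual connector in the (hypothetical) architecture model."""
    arch = {"components": [], "aspectual_connectors": []}
    for goal in aov_graph["goals"]:
        arch["components"].append({"name": goal})
    for rel in aov_graph["crosscutting"]:
        arch["aspectual_connectors"].append(
            {"aspect": rel["aspect"], "base": rel["target"]})
    return arch
```

The point of MDD is that such mappings are written once, against the metamodels, so every decision captured in the source model is carried forward automatically.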
Abstract:
Several electronic devices nowadays support digital video: cellphones, digital cameras, video cameras and digital televisions, for example. However, raw video comprises a huge amount of data, millions of bits, when represented as captured; storing it in this primary form would require enormous disk space, and transmitting it, enormous bandwidth. Video compression is therefore essential to make the storage and transmission of this information possible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data necessary to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution videos according to the H.264/AVC standard, the most advanced video coding standard, whose several new features allow it to achieve high compression rates. The architecture presented in this work was developed to provide high data reuse; the data reuse scheme adopted reduces the bandwidth required to perform motion estimation. Since motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, this module is essential for the final video coder performance. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
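The core operation such a hardware module accelerates is block matching: for each block of the current frame, find the displacement within a search window of the reference frame that minimizes the sum of absolute differences (SAD). A minimal software sketch of exhaustive full search follows; the abstract does not state which search algorithm or block size the architecture implements, so these are generic.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur_block, top, left, radius):
    """Exhaustive block matching: return the motion vector (dy, dx) within
    +/- radius that minimizes SAD against the reference frame, and its cost."""
    n = len(cur_block)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block falls outside the frame
            cand = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```

The overlap between the candidate blocks examined for neighboring positions is exactly what the data reuse scheme exploits: pixels already fetched for one SAD computation serve many others.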
Abstract:
Multi-agent system designers need to determine the quality of their systems in the earliest phases of the development process. The architectures of the agents are part of the design of these systems and therefore also need to have their quality evaluated. Motivated by the important role that emotions play in our daily lives, researchers on embodied agents have aimed to create agents capable of affective, natural interaction with users that produces beneficial or desirable results. To this end, several studies proposing agent architectures with emotions have appeared, without being accompanied by appropriate methods for assessing these architectures. The objective of this study is to propose a methodology for evaluating architectures of emotional agents that assesses both the quality attributes of the architectural design and, through human-computer interaction evaluation, the effects on the subjective experience of users of applications that implement the architecture. The methodology is based on a model of well-defined metrics. In assessing the quality of the architectural design, the attributes evaluated are extensibility, modularity and complexity. In assessing the effects on the users' subjective experience, which involves implementing the architecture in an application (we suggest the domain of computer games), the metrics are enjoyment, felt support, warmth, caring, trust, cooperation, intelligence, interestingness, naturalness of emotional reactions, believability, reduction of frustration and likeability, together with the average time and the average number of attempts. We applied this approach to evaluate five architectures of emotional agents: BDIE, DETT, Camurri-Coglio, EBDI and Emotional-BDI. Two of the architectures, BDIE and EBDI, were implemented in a version of the game Minesweeper and evaluated with respect to human-computer interaction. In the results, DETT stood out with the best architectural design.
Users who played the version of the game with emotional agents performed better than those who played without agents. In assessing the subjective experience of users, the differences between the architectures were insignificant.
Abstract:
The increasing complexity of integrated circuits has boosted the development of communication architectures such as networks-on-chip (NoCs) as an alternative interconnection architecture for systems-on-chip (SoCs). Networks-on-chip favor component reuse, parallelism and scalability, enhancing reusability in projects of dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is unconventional, since it executes operations while the communication process is performed. This study aims to evaluate the execution of dataflow-based applications on IPNoSys, focusing on their adaptation to its design constraints. Dataflow-based applications are characterized by a continuous stream of data on which operations are executed, and we expect this type of application to perform well on IPNoSys because its programming model is similar to the execution model of the network. By observing the behavior of these applications running on IPNoSys, changes were made to the execution model of the network, allowing the implementation of instruction-level parallelism. For these purposes, the implementations of the dataflow applications were analyzed and compared.
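The dataflow style in question can be sketched as a stream flowing through a pipeline of operation stages. The model below is purely illustrative and runs sequentially; on IPNoSys the stages would be distributed along the routers of the packet's path, so different stream items occupy different stages at the same time.

```python
def run_dataflow(stream, stages):
    """Toy dataflow pipeline: every item of a continuous stream passes
    through each operation stage in order (sequential model of what the
    network executes in a distributed, pipelined fashion)."""
    out = []
    for item in stream:
        for stage in stages:  # one stage per hop, conceptually
            item = stage(item)
        out.append(item)
    return out
```

Because each stage is an independent operation on independent items, the mapping exposes exactly the kind of instruction-level parallelism the modified execution model exploits.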
Abstract:
Reconfigurable computing is an intermediate solution for complex problems, making it possible to combine the speed of hardware with the flexibility of software. Among the goals of a reconfigurable architecture, increasing performance stands out. Using reconfigurable architectures to increase the performance of systems is a well-known technique, especially because certain algorithms that run slowly on current processors can be implemented directly in hardware. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of the reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as the formats, operands and operations of the instructions. The main objective of this work is the development of a RISP processor, combining the configuration of the processor's instruction sets at development time with their reconfiguration at execution time. The design and VHDL implementation of this RISP processor are intended to prove the applicability and efficiency of two concepts: the use of more than one fixed instruction set, with only one set active at any given time; and the possibility of creating and combining new instructions, so that the processor comes to recognize and use them at run time as if they belonged to the fixed instruction set. Instructions are created and combined through a reconfiguration unit incorporated into the processor, which allows the user to send custom instructions to the processor and later use them as if they were fixed instructions.
This work also presents simulations of applications involving fixed and custom instructions, and results of comparisons between these applications with respect to power consumption and execution time, which confirm that the goals for which the processor was developed were attained.
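The second concept, installing a custom instruction at run time so that it dispatches exactly like a fixed one, can be sketched with a toy dispatch table. Names and the two-operand format are hypothetical; the real processor does this in VHDL through its reconfiguration unit, not in software.

```python
class ToyRISP:
    """Toy model of a RISP: a fixed instruction set plus a reconfiguration
    unit that registers custom instructions at run time."""

    def __init__(self):
        # The "fixed" instruction set active at power-on.
        self.isa = {"ADD": lambda a, b: a + b, "SUB": lambda a, b: a - b}

    def reconfigure(self, name, op):
        """Reconfiguration unit: install a custom instruction."""
        self.isa[name] = op

    def execute(self, name, a, b):
        """Dispatch: fixed and custom instructions are indistinguishable."""
        return self.isa[name](a, b)
```

After `reconfigure`, the new opcode is looked up through the same path as ADD or SUB, which is the essence of the claim that custom instructions behave "as if they belonged to the fixed instruction set".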
Abstract:
Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems, and embedded systems follow the same idea by embedding computational intelligence tools in machines. There are several works in the areas of embedded systems and intelligent systems, but few papers join both areas. The aim of this study was to implement adaptive neuro-fuzzy hardware with online training, embedded on a Field Programmable Gate Array (FPGA). The system can adapt during the execution of a given application, aiming at online performance improvement. The proposed system architecture is modular, allowing different configurations of neuro-fuzzy network topologies with online training. The proposed system was applied to mathematical function interpolation, pattern classification and self-compensation of industrial sensors, achieving satisfactory performance in all tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel and sequential ways: sequential training saves FPGA area but increases the complexity of the architecture's operation, while parallel training achieves high performance and reduced processing time, with the pipeline technique used to further increase the proposed architecture's performance. The development of the study was based on available tools for FPGA circuits.
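Online training in this setting means updating the network sample by sample as inputs arrive, rather than on a stored batch. A minimal software sketch of one such update, for a zero-order TSK-style fuzzy system with Gaussian memberships and an LMS step on the rule consequents, is shown below; the abstract does not specify the network topology or learning rule, so these choices are assumptions for illustration.

```python
import math

def firing_strengths(x, centers, sigma=1.0):
    """Normalized Gaussian membership degree of x in each fuzzy rule."""
    w = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    total = sum(w)
    return [wi / total for wi in w]

def predict(x, centers, consequents):
    """Zero-order TSK output: firing-strength-weighted sum of consequents."""
    w = firing_strengths(x, centers)
    return sum(wi * ci for wi, ci in zip(w, consequents))

def online_update(x, target, centers, consequents, lr=0.5):
    """One online (per-sample) LMS step on the rule consequents, the kind
    of update the hardware can perform for each arriving input."""
    w = firing_strengths(x, centers)
    err = target - predict(x, centers, consequents)
    return [c + lr * err * wi for c, wi in zip(consequents, w)]
```

In the parallel scheme all rule updates like these happen in the same cycle at the cost of FPGA area; the sequential scheme iterates over the rules with shared hardware, trading time for area.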