999 results for Small and medium-sized enterprises - Case studies


Relevance:

100.00%

Publisher:

Abstract:

From the multiple representations of the city of Duque de Caxias that have been reproduced over the years, social imaginaries and identities are strengthened and transformed. However, self-representations produced by cultural agents of the city, organized in a network greatly strengthened by the use of the internet, seek to deconstruct stereotypes and to claim the city they want and believe they have a right to. In this context, the first part of the work investigates how the city of Duque de Caxias was and is represented by various actors, including its own residents. The connections between the cultural production of this network and the construction and resignification of the city's symbols and imaginaries are highlighted. Case studies are also carried out on the Mate com Angu film club, the comics collective Capa Comics, and the blog Lurdinha. The second part of the work deals with the affective relations between the population of Duque de Caxias and the territories they inhabit, largely transformed by cultural productions that emphasize belonging to the city. Encouraging better use of public spaces and creating leisure facilities prove essential in this process. The results of this research offer a reflection on the potential of the internet to strengthen networked relations, to enable self-representations, and also to foster mobilities. The transformations in perceptions of the city, the understanding of belonging to it, and the appropriation of its spaces are also highlighted in this dissertation.

Relevance:

100.00%

Publisher:

Abstract:

This thesis aims to describe and demonstrate a concept developed to facilitate the use of thermal simulation tools during the building design process. Despite the impact of architectural elements on the performance of buildings, some influential decisions are frequently based solely on qualitative information. Even though such design support is adequate for most decisions, the designer will eventually have doubts about the performance of some design decisions. These situations require some kind of additional knowledge to be properly addressed. The concept of designerly ways of simulating focuses on the formulation and solution of design dilemmas, which are doubts about the design that cannot be fully understood or solved without quantitative information. The concept intends to combine the analytical power of computer simulation tools with the capacity for synthesis of architects. Three types of simulation tools are considered: solar analysis, thermal/energy simulation, and CFD. Design dilemmas are formulated and framed according to the architect's reflection process about performance aspects. Throughout the thesis, the problem is investigated in three fields: professional, technical, and theoretical. This approach to distinct parts of the problem aimed to i) characterize different professional categories with regard to their design practice and use of tools, ii) review previous research on the use of simulation tools, and iii) draw analogies between the proposed concept and concepts developed or described in earlier works on design theory. The proposed concept was tested on eight design dilemmas extracted from three case studies in the Netherlands. The three investigated processes are houses designed by Dutch architectural firms. Relevant information and criteria for each case study were obtained through interviews and conversations with the architects involved. The practical application, despite its success in the research context, revealed some limitations to the applicability of the concept, concerning the architects' need for technical knowledge and the current stage of evolution of simulation tools.

Relevance:

100.00%

Publisher:

Abstract:

This paper examines, through case studies, the organization of the production process of architectural projects in architecture offices in the city of Natal, specifically in relation to building projects. The specifics of the design process in architecture, and the production of the project in a professional field in Natal, are studied in light of theories of design and of its production process. The survey, in its different phases, was conducted between March 2010 and September 2012 and aimed to identify, understand, and comparatively analyze, by mapping the design process, the organization of the production of building projects in two offices in Natal, also examining the relationships of their agents during the process. The project was based on desk and exploratory research, adopting data collection tools such as forms, questionnaires, and interviews. With the specific aim of mapping the design process, we adopted a technique that allows information to be obtained directly from the employees involved in the production process. The technique consisted of recording information by completing, daily, during or at the end of the workday, an individual virtual agenda in which all collaborating agents described the tasks performed. The data collected allowed for the identification of the organizational structure of each office, its hierarchy, the responsibilities of the agents, and the tasks performed by them during the two months of monitoring at each office. The research findings were based on analyses of the data collected in the two offices and on comparative studies of the results of these analyses. The end result was a diagnostic evaluation of the level of organization under the perspective analyzed, together with proposed solutions aimed at improving both the organization of the process and the relationships between the agents.

Relevance:

100.00%

Publisher:

Abstract:

Steam injection is a method usually applied to very viscous oils: it consists of injecting heat to reduce the oil's viscosity and, therefore, increase its mobility, improving oil production. Designing a steam injection project requires reservoir simulation in order to define the various parameters needed for efficient thermal reservoir management and, with this, to improve the recovery factor of the reservoir. The purpose of this work is to show the influence of wellbore/reservoir coupling on the thermal simulation of reservoirs under cyclic steam stimulation. The methodology used to solve the problem involved the development of a wellbore model for integrating the steam flow model in injection wellbores, VapMec, with a black-oil reservoir model for cyclic steam injection into oil reservoirs. Case studies were developed for shallow and deep reservoirs, considering the injection well configurations usual in the oil industry, i.e., conventional tubing without packer, conventional tubing with packer, and insulated tubing with packer. A comparative study of the injection and production parameters was performed, always under the same operational conditions, for the two simulation models, non-coupled and coupled. It was observed that the results are very similar when the well injection rate is specified, whereas significant differences appear when the well pressure is specified. Finally, on the basis of computational experiments, it was concluded that the influence of wellbore/reservoir coupling on thermal simulations using cyclic steam injection as an enhanced oil recovery method is greater when the well pressure is specified, while for a specified well injection rate the steam flow model for the injection well and the reservoir may be simulated in a non-coupled way.
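
As a rough illustration of the physical premise behind steam injection (not part of the thesis itself), the sketch below uses a hypothetical Andrade-type viscosity correlation to show how heating the oil increases its mobility; the constants A and B and the permeability value are illustrative assumptions.

```python
import math

def oil_viscosity(T_kelvin, A=5.0e-6, B=5000.0):
    """Andrade-type correlation: viscosity (Pa.s) falls exponentially as
    temperature rises. A and B are illustrative fitting constants."""
    return A * math.exp(B / T_kelvin)

def oil_mobility(permeability_m2, T_kelvin):
    """Mobility = permeability / viscosity; higher temperature -> higher mobility."""
    return permeability_m2 / oil_viscosity(T_kelvin)

k = 1.0e-12  # roughly 1 darcy, hypothetical
for T in (320.0, 450.0):  # reservoir temperature before/after steam injection (K)
    print(f"T = {T:5.1f} K  viscosity = {oil_viscosity(T):8.3f} Pa.s  "
          f"mobility = {oil_mobility(k, T):.3e} m^2/(Pa.s)")
```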

Relevance:

100.00%

Publisher:

Abstract:

This work analyzes the development experience of the Mato Grande and Sertão do Apodi territories in the state of Rio Grande do Norte, evaluating the actions of the National Program for Strengthening Family Agriculture, specifically its infrastructure line (PRONAF-INFRA), and of the National Program for Sustainable Development of Rural Territories (PRONAT) in these territories. It summarizes the various approaches to rural development and adopts the theoretical assumptions of territorial development, the concepts of constructed territory and market-plan territory, and the policy cycle model to analyze the selected experiences. We thus propose to test the hypothesis that most of the actions implemented would lead to the formation of market-plan territories, in other words, territories perceived only as platforms for the presentation of projects. The literature and documents, combined with case studies, interviews, and direct observation of board meetings, showed that, although the two boards operate under the same laws, rules, and formal regulations, they present clear differences when considered against the theory and concepts used as reference. The Apodi territory is closer to a constructed space, owing to its pursuit of a broader, more autonomous agenda better suited to the reality experienced by local actors, whereas the Mato Grande territory exhibits more markedly the characteristics of a market-plan territory. As a result, the Sertão do Apodi territory has access to a greater number of policies and funding sources, securing a larger and more diverse volume of investment than the Mato Grande territory. Despite these differences, the studies showed that the territorial boards surveyed are still far from becoming the main forum for managing development based on socially constructed planning. Finally, they showed that the territorial development strategy is relevant, but requires a long path and a deep, continuous learning process to be successfully implemented in the rural areas of Northeast Brazil.

Relevance:

100.00%

Publisher:

Abstract:

In this work, a modification of the ANFIS (Adaptive Network Based Fuzzy Inference System) structure is proposed to obtain a systematic method for the identification and control of nonlinear plants with a large operating range, using linear local systems: models and controllers. The method is based on the multiple-model approach. Linear local models are obtained and then combined by the proposed neurofuzzy structure. A metric that allows a satisfactory combination of those models is obtained after training the structure, resulting in the global identification of the plant. A controller is designed for each local model, and the global control signal is obtained by blending the local controllers' signals, which is done by the modified ANFIS. The modification of the ANFIS architecture allows knowledge sharing between the two neurofuzzy structures, so the same metric obtained to combine models can be used to combine controllers. Two case studies are used to validate the new ANFIS structure; knowledge sharing is evaluated in the second one, showing that a single modified ANFIS structure is enough both to combine linear models to identify a nonlinear plant and to combine linear controllers to control it. The proposed method allows any identification and control techniques to be used to obtain the local models and local controllers, and it also reduces the complexity of using ANFIS for identification and control. This work prioritized simpler identification and control techniques in order to simplify the use of the method.
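
The multiple-model idea of blending linear local models, and then local controllers, with one shared validity metric can be illustrated by the minimal sketch below; the Gaussian weighting, the local models, and the controller gains are hypothetical placeholders rather than the actual modified ANFIS structure.

```python
import numpy as np

def gaussian_weights(op_point, centers, sigma=0.5):
    """Normalized Gaussian validity weights over the operating point
    (a simple stand-in for the metric learned by the neurofuzzy structure)."""
    w = np.exp(-0.5 * ((op_point - np.asarray(centers)) / sigma) ** 2)
    return w / w.sum()

# Hypothetical local linear models  y = a*u + b  identified around three operating points
local_models = [(0.8, 0.1), (1.5, -0.3), (2.2, -1.0)]
centers = [1.0, 2.0, 3.0]

def global_output(u, op_point):
    """Blend local model outputs with the validity weights (multiple-model idea)."""
    w = gaussian_weights(op_point, centers)
    outputs = np.array([a * u + b for a, b in local_models])
    return float(w @ outputs)

def global_control(error, op_point, local_gains=(2.0, 1.2, 0.7)):
    """Blend local proportional controllers with the *same* validity weights."""
    w = gaussian_weights(op_point, centers)
    return float(w @ (np.asarray(local_gains) * error))

print(global_output(u=1.0, op_point=1.8))
print(global_control(error=0.5, op_point=1.8))
```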

Relevance:

100.00%

Publisher:

Abstract:

In this work, we propose a new approach to Interactive Digital Television (IDTV), aimed at exploring the concept of immersion. Several architectures have been proposed for IDTV, but they do not coherently address questions related to immersion. The goal of this thesis is to define formally what immersion and interactivity mean for digital TV and how they may be used to improve the user experience in this new television model. The approach raises questions such as the appropriate choice of equipment to support the sense of immersion, which forms of interaction between users can be exploited in the interaction-immersion context, whether the environment in which an immersive and interactive application is used can influence the user experience, and which new forms of interactivity between users, and between users and interactive applications, can be explored with the use of immersion. As one of the goals of this proposal, we point out new solutions to these issues that require further study. We intend to formalize the concepts covering interactivity in the Brazilian digital TV system; in an initial study, this definition is organized into categories or levels of interactivity. From this point, analyses and specifications are made to achieve immersion using DTV. We intend to carry out case studies of immersive interactive applications for digital television in order to validate the proposed architecture. We also address the use of remote devices and a proposal for a middleware architecture that allows their use in conjunction with immersive interactive applications.

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Poverty, as one of the manifestations of the "social question", is a constitutive element of the extremely unequal pattern of capitalist development, in which accumulation and misery coexist. In recent decades, under the aegis of the neoliberal agenda, there has been a growth of anti-poverty policies in Brazil of a targeted and compensatory nature, both through direct cash transfer actions and through the strengthening of services and programs aimed at poor populations, with the structuring of the Unified Social Assistance System, organized hierarchically into Basic and Special Social Protection. The participation of psychologists in the professional teams of the CRAS (Social Assistance Reference Centers) is an important element in the discussion of the insertion of this professional in the field of social policies in Brazil, considering the structural limits imposed by the compensatory character of these policies and the construction of strategies that may result in an effective change in the living conditions of the poorest strata of society. In addition, by entering the social assistance policy, a significant number of psychologists have come to work in small and medium-sized cities, outside the traditional urban centers, constituting a movement of "interiorization" of the profession. The aim of this work is to analyze the professional practice of psychologists in social assistance in the context of anti-poverty policies in municipalities in the interior of Rio Grande do Norte. Semi-structured interviews were conducted with psychologists working in the CRAS of 17 small and medium-sized municipalities in the state. The information was systematized with the aid of the QDA Miner v. 3.2 software. The perspective defended in this work concerns the functionality of psychological practices in the context of current Brazilian anti-poverty policies, insofar as they reinforce the neoliberal ideals of naturalizing the social question and holding individuals responsible for their social condition, besides largely disregarding the particularities and singularities that mark the territories where they act. Nevertheless, it is possible to identify some modes of professional action that run counter to those most frequently found in this field. These modes are revealed in the daily life of the CRAS as different ways of understanding professional know-how, the result of a political stance and of a professional education that seek to break with the traditionalism and conservatism of Psychology and of the field of social assistance.

Relevance:

100.00%

Publisher:

Abstract:

One of the current challenges of Ubiquitous Computing is the development of complex applications, i.e., those that are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware, and one must know the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications through the definition of semantic workflows containing an abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance containing activities that are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform also supports execution adaptation in case of service failures, user mobility, and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications for the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, compares it with the benefits provided, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
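
As a loose illustration of what an execution plan and its adaptation might look like (the service names, the QoS score, and the functions are hypothetical, not the OpenCOPI API), consider the sketch below: each abstract activity is bound to the best available concrete service, and a failed service triggers re-selection.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    activity: str      # abstract activity this service can automate
    quality: float     # QoS score in [0, 1]
    available: bool = True

# Hypothetical registry of concrete services offered by different middleware
registry = [
    Service("TempSensorA", "read_temperature", 0.9),
    Service("TempSensorB", "read_temperature", 0.7),
    Service("PumpCtrl",    "actuate_pump",     0.8),
]

def build_execution_plan(abstract_workflow):
    """Map each abstract activity to the best available concrete service."""
    plan = []
    for activity in abstract_workflow:
        candidates = [s for s in registry if s.activity == activity and s.available]
        if not candidates:
            raise RuntimeError(f"no service available for activity {activity!r}")
        plan.append(max(candidates, key=lambda s: s.quality))
    return plan

def adapt(plan, failed_service):
    """On failure, rebuild the plan without the failed service (simple adaptation)."""
    failed_service.available = False
    return build_execution_plan([s.activity for s in plan])

plan = build_execution_plan(["read_temperature", "actuate_pump"])
plan = adapt(plan, plan[0])   # e.g. TempSensorA fails; TempSensorB is selected instead
print([s.name for s in plan])
```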

Relevance:

100.00%

Publisher:

Abstract:

The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors are inherently flexible, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Application-specific devices, in turn, perform only a few tasks and therefore achieve high performance, although with less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this new paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. Instruction-level parallelism is extracted at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE aims to view the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. To characterize its performance, some benchmarks were used, demonstrating a speedup of 11x on the execution of some applications.
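
To make the reported speedup figure easier to interpret, the minimal sketch below applies Amdahl's law to a hypothetical split between code left on the processor and code offloaded to the reconfigurable logic; the fraction and local speedup values are illustrative and not taken from the thesis.

```python
def overall_speedup(accelerated_fraction, local_speedup):
    """Amdahl's law: overall speedup when only part of the run time
    is offloaded to the reconfigurable logic."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / local_speedup)

# Hypothetical numbers: 95% of the run time is spent in loops mapped to the
# reconfigurable logic, which runs them 20x faster than the processor alone.
print(overall_speedup(0.95, 20.0))   # ~10.3x overall
```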

Relevance:

100.00%

Publisher:

Abstract:

Aspect-oriented approaches associated with different activities of the software development process are, in general, independent, and their models and artifacts are not aligned and embedded in a coherent process. In model-driven development, the various models and the correspondences between them are rigorously specified. By integrating aspect-oriented software development (AOSD) and model-driven development (MDD), it is possible to propagate models automatically from one activity to another, avoiding the loss of information and of important decisions established in each activity. This work presents MARISA-MDD, a model-based strategy that integrates aspect-oriented requirements, architecture, and detailed design, using the languages AOV-graph, AspectualACME, and aSideML, respectively. MARISA-MDD defines, for each activity, representative models (and corresponding metamodels) and a number of transformations between the models of each language. These transformations have been specified and implemented in ATL (ATLAS Transformation Language), in the Eclipse environment. MARISA-MDD allows automatic propagation between AOV-graph, AspectualACME, and aSideML models. To validate the proposed approach, two case studies, Health Watcher and Mobile Media, have been used in the MARISA-MDD environment for the automatic generation of AspectualACME and aSideML models from the AOV-graph model.
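
A model-to-model transformation rule of the kind implemented in ATL can be pictured with the sketch below; the source and target "metamodels" are deliberately simplified stand-ins, not the real AOV-graph or AspectualACME metamodels, and the rule itself is hypothetical.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, heavily simplified "metamodels": just enough structure
# to show a model-to-model transformation rule.
@dataclass
class Concern:            # source element (requirements level)
    name: str
    crosscuts: List[str]  # names of requirements it affects

@dataclass
class AspectualComponent: # target element (architecture level)
    name: str
    ports: List[str]

def concern_to_component(c: Concern) -> AspectualComponent:
    """One transformation rule: each crosscutting concern becomes an
    aspectual component with one port per crosscut requirement."""
    return AspectualComponent(name=c.name + "Aspect",
                              ports=[f"crosscut_{r}" for r in c.crosscuts])

source_model = [Concern("Logging", ["Checkout", "Login"]),
                Concern("Security", ["Login"])]
target_model = [concern_to_component(c) for c in source_model]
print(target_model)
```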

Relevance:

100.00%

Publisher:

Abstract:

Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification, and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process of a system. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. That method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies, such as not fitting into a well-defined coverage criteria classification. We started our work with a case study that applied the method to an industrial B specification, and from it we obtained input to improve the method. In our work, we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
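
The equivalence-class and boundary-value idea behind the test derivation can be illustrated with the minimal sketch below; the precondition 0..100 is a made-up example, not one of the specifications studied in the work.

```python
def partition_tests(lower, upper):
    """Equivalence classes and boundary values for a precondition
    lower <= x <= upper (integers): positive tests satisfy the
    precondition, negative tests violate it."""
    positive = sorted({lower, lower + 1, (lower + upper) // 2, upper - 1, upper})
    negative = [lower - 1, upper + 1]
    return positive, negative

# Hypothetical precondition of a B operation:  xx : 0..100
pos, neg = partition_tests(0, 100)
print("positive (precondition holds):", pos)      # [0, 1, 50, 99, 100]
print("negative (precondition violated):", neg)   # [-1, 101]
```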

Relevance:

100.00%

Publisher:

Abstract:

The work of Cleverton Hentz (2010) presented an approach to deriving tests from the formal description of a program's input. Since some programs, such as compilers, may have their inputs formalized through grammars, it is common to use context-free grammars to specify the set of their valid inputs. In the original work, the author developed a tool, LGen, that automatically generates tests for compilers. In the present work, we identify types of problems in various areas that are described by grammars, for example the specification of software configurations, and that are therefore potential scenarios for using LGen. In addition, we conducted case studies with grammars from different domains, from which it was possible to evaluate the behavior and performance of LGen during sentence generation, considering aspects such as execution time, number of generated sentences, and satisfaction of the coverage criteria available in LGen.
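
A bounded-depth, grammar-based sentence generator in the spirit of such tools can be sketched as follows; the toy expression grammar and the enumeration strategy are illustrative assumptions, not LGen's actual algorithm.

```python
# Minimal grammar-based sentence generation: expand every production up to a
# bounded depth and collect the terminal-only sentences as test inputs.
GRAMMAR = {  # hypothetical toy grammar for arithmetic expressions
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["NUM"], ["(", "expr", ")"]],
}

def generate(symbol, depth):
    """Return the set of sentences derivable from `symbol` within `depth` expansions."""
    if symbol not in GRAMMAR:               # terminal symbol
        return {symbol}
    if depth == 0:
        return set()
    sentences = set()
    for production in GRAMMAR[symbol]:
        parts = [generate(s, depth - 1) for s in production]
        if all(parts):                      # every symbol produced something
            combos = [""]
            for options in parts:
                combos = [c + " " + o if c else o for c in combos for o in options]
            sentences.update(combos)
    return sentences

for sentence in sorted(generate("expr", 4)):
    print(sentence)   # e.g. "( NUM )", "NUM", "NUM + NUM", ...
```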

Relevance:

100.00%

Publisher:

Abstract:

Web services are computational solutions designed according to the principles of Service-Oriented Computing. Web services can be built upon pre-existing services available on the Internet by using composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications in order to increase productivity and to produce specifications that are independent of specific web services. We consider service orchestrations, that is, compositions in which a central process coordinates all the operations of the application. The generation of compositions is based on a rule rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments on simple case studies.
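
The rule-rewriting refinement step can be pictured with the minimal sketch below; the abstract operations, rewrite rules, and tuple-based representation are hypothetical, and the output is not actual WS-BPEL, only an illustration of refining an abstract orchestration into concrete invocations.

```python
# A minimal rule-rewriting illustration: an abstract orchestration is a nested
# tuple of control-flow nodes; rewrite rules replace abstract operations with
# concrete service invocations until no rule applies.
REWRITE_RULES = {
    "GetQuote":   ("invoke", "QuoteService", "getQuote"),
    "PlaceOrder": ("sequence",
                   ("invoke", "OrderService", "create"),
                   ("invoke", "PaymentService", "charge")),
}

def rewrite(node):
    """Recursively apply rewrite rules to an abstract composition tree."""
    if isinstance(node, str):                      # abstract operation
        refined = REWRITE_RULES.get(node, node)
        return rewrite(refined) if refined is not node else node
    if isinstance(node, tuple) and node and node[0] in ("sequence", "flow"):
        return (node[0], *[rewrite(child) for child in node[1:]])
    return node                                    # already concrete (e.g. "invoke")

abstract_spec = ("sequence", "GetQuote", "PlaceOrder")
print(rewrite(abstract_spec))
```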