1000 results for "Estudos de caso múltiplo" (multiple case studies)
Abstract:
This thesis examines the characteristics of the decision process in which creditors choose between the judicial reorganization (recuperação judicial) or the liquidation of a financially distressed firm. The work is divided into four chapters. The second chapter presents, in a systematized way, the theoretical framework and empirical evidence highlighting important results from studies in the fields of corporate reorganization and bankruptcy. The chapter also presents three case studies intended to show the complexity of each case with respect to the concentration of claims, conflicts of interest among creditor classes, and the final decision on the approval or rejection of the judicial reorganization plan. The third chapter analyzes the determinants of the delay in voting on the judicial reorganization plan. The work proposes an empirical study of delays between 2005 and 2014. The results suggest that: (i) higher concentration of debt among creditor classes is associated with shorter delays; (ii) a larger number of banks voting on the reorganization plan is associated with longer delays; (iii) the average voting delay decreases when only one class of creditors takes part in the vote on the plan; (iv) labor and secured creditors delay the vote when the value of the assets securing the debt in the event of liquidation is higher; (v) the average voting delay is longer in cases in which the debtor's industry performs worse, with the postponement requested by the unsecured and secured classes; and (vi) the proposed sale of assets is the main topic discussed in the plan voting meetings in the cases with the longest voting delays. Finally, the fourth chapter presents evidence on creditors' voting and the probability of approval of the judicial reorganization plan. The results suggest that: (i) labor creditors are inclined to approve the reorganization plan even when the plan is rejected by the other classes; (ii) plans with more heterogeneous payment proposals across the three creditor classes are less likely to be accepted; (iii) the probability of plan approval decreases in cases in which more unsecured creditors take part in the reorganization; and (iv) plans proposing the sale of assets are more likely to be approved. Finally, higher concentration of debt in the secured class reduces the probability of plan approval, while the opposite holds for the unsecured class.
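As a minimal illustration of the kind of model behind the fourth chapter's findings, the sketch below fits a logistic regression of plan approval on case characteristics. It is not the thesis dataset or specification: the variable names and the synthetic data are assumptions made only so the example runs.

```python
# Illustrative sketch only: logistic regression of plan approval on case
# characteristics. Variable names and data are hypothetical, not the thesis data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
cases = pd.DataFrame({
    "secured_debt_share": rng.uniform(0, 1, n),    # debt concentration in the secured class
    "unsecured_debt_share": rng.uniform(0, 1, n),  # debt concentration in the unsecured class
    "n_unsecured_creditors": rng.integers(1, 300, n),
    "asset_sale_proposed": rng.integers(0, 2, n),
})
# Synthetic outcome so the example runs end to end.
linear_index = (-1.0 * cases["secured_debt_share"]
                + 1.0 * cases["unsecured_debt_share"]
                - 0.005 * cases["n_unsecured_creditors"]
                + 0.8 * cases["asset_sale_proposed"])
cases["approved"] = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-linear_index))).astype(int)

X = sm.add_constant(cases.drop(columns="approved"))
model = sm.Logit(cases["approved"], X).fit(disp=False)
print(model.summary())
```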
Abstract:
For some years, mobile technologies in healthcare (mHealth) have been regarded as a transformational force for improving health in low- and middle-income countries (LMICs). Although several studies have identified the prevailing issue of inconsistent evidence and new evaluation frameworks have been proposed, few have explored the role of entrepreneurship in creating disruptive change in a traditionally conservative sector. I argue that improving the effectiveness of mHealth entrepreneurs might increase the adoption of mHealth solutions. Thus, this study aims to propose a managerial model for the analysis of mHealth solutions from the entrepreneurial perspective in the context of LMICs. I identified the Khoja–Durrani–Scott (KDS) framework as the theoretical basis for the managerial model, due to its explicit focus on the context of LMICs. In the subsequent exploratory research I first used semi-structured interviews with five specialists in mHealth, local healthcare systems, and investment to identify necessary adaptations to the model. The interview findings indicated that the economic theme in particular had to be clarified and that an additional entrepreneurial theme was needed; an evaluation questionnaire was also proposed. In the second phase, I applied the questionnaire to five start-ups operating in Brazil and Tanzania and conducted semi-structured interviews with the entrepreneurs to gain practical insights for the theoretical development. Three of the five entrepreneurs perceived that the results matched their expectations of the strengths and weaknesses of their start-ups. The main shortcomings of the model related to the ambiguity of some questions. In addition to the findings for the model, the questionnaire scores were analyzed. The analysis suggested that, across the participating mHealth start-ups, the ‘behavioral and socio-technical’ outcomes were the strongest and the ‘policy’ outcomes the weakest themes. The managerial model integrates several perspectives, structured around the entrepreneur. To validate the model, future research may link the development of a start-up with the evolution of its scores in longitudinal case studies or large-scale tests.
Abstract:
The online services industry was characterized by a high volume of mergers and acquisitions between 2005 and 2015. The market leaders, Apple, Google, and Microsoft, incorporated this form of inorganic growth into their corporate strategies. This thesis examines the merger and acquisition activities of these three companies and, accordingly, focuses on two main aspects. First, it aims to fill a gap in the academic literature regarding the connection between these companies' corporate strategies and their merger and acquisition decisions. Second, it aims to estimate possible future developments in the sector. Case studies were developed through a qualitative content analysis of company publications, market analysis reports, and other third-party content. The results show the strategic positioning process of Apple, Google, and Microsoft within the online services market between 2005 and 2015. The recurring mergers and acquisitions are analyzed with respect to the corporate strategies of these companies and their responsiveness to the activities of their competitors. The results show aggressive merger and acquisition activity in strategic groups shared by the three companies, especially in the market for mobile communication devices and communication services.
Abstract:
This thesis aims to describe and demonstrate the concept developed to facilitate the use of thermal simulation tools during the building design process. Despite the impact of architectural elements on building performance, some influential decisions are frequently based solely on qualitative information. Even though such design support is adequate for most decisions, the designer will eventually have doubts about the performance of some design choices. These situations require some kind of additional knowledge to be properly addressed. The concept of designerly ways of simulating focuses on the formulation and solution of design dilemmas, which are doubts about the design that cannot be fully understood or solved without quantitative information. The concept intends to combine the analytical power of computer simulation tools with the capacity for synthesis of architects. Three types of simulation tools are considered: solar analysis, thermal/energy simulation, and CFD. Design dilemmas are formulated and framed according to the architect's process of reflection on performance aspects. Throughout the thesis, the problem is investigated in three fields: professional, technical, and theoretical. Approaching these distinct parts of the problem aimed to (i) characterize different professional categories with regard to their design practice and use of tools, (ii) review previous research on the use of simulation tools, and (iii) draw analogies between the proposed concept and concepts developed or described in earlier work on design theory. The proposed concept was tested on eight design dilemmas extracted from three case studies in the Netherlands; the three investigated processes are houses designed by Dutch architectural firms. Relevant information and criteria for each case study were obtained through interviews and conversations with the architects involved. The practical application, despite its success in the research context, revealed some limitations to the applicability of the concept, concerning the architects' need for technical knowledge and the current stage of evolution of simulation tools.
Abstract:
This work examines, through case studies, the organization of the production process of architectural projects in architecture offices in the city of Natal, specifically with regard to building design projects. The specifics of the design process in architecture, and the production of such projects in professional practice in Natal, are studied in light of theories of design and of its production process. The survey, in its different phases, was conducted between March 2010 and September 2012 and aimed to identify, understand, and comparatively analyze, by mapping the design process, the organization of the production of building design projects in two offices in Natal, also examining the relationships among their agents during the process. The project was based on desk and exploratory research, adopting data-collection tools such as forms, questionnaires, and interviews. With the specific aim of mapping the design process, we adopted a technique that obtains information directly from the collaborating agents involved in the production process: each collaborator recorded the tasks performed in an individual virtual agenda, completed daily, during or at the end of the workday. The data collected allowed us to identify the organizational structure of each office, its hierarchy, the responsibilities of the agents, and the tasks they performed during the two months of monitoring at each office. The research findings were based on the analysis of the data collected in the two offices and on comparative studies of the results of these analyses. The end result was a diagnostic evaluation of the level of organization from this perspective, together with proposed solutions aimed at improving both the organization of the process and the relationships between the agents analyzed.
Abstract:
Steam injection is a recovery method usually applied to very viscous oils: heat is injected to reduce viscosity and thereby increase oil mobility, improving oil production. Designing a steam injection project requires reservoir simulation in order to define the parameters needed for efficient thermal reservoir management and, with this, to improve the recovery factor of the reservoir. The purpose of this work is to show the influence of wellbore/reservoir coupling on the thermal simulation of reservoirs under cyclic steam stimulation. The methodology involved the development of a wellbore model for steam flow in injection wells, VapMec, and a black-oil reservoir model for cyclic steam injection into oil reservoirs. Case studies were developed for shallow and deep reservoirs, considering the injection-well configurations usual in the oil industry, i.e., conventional tubing without packer, conventional tubing with packer, and insulated tubing with packer. A comparative study of the injection and production parameters was performed, always under the same operating conditions, for the two simulation models, non-coupled and coupled. The results are very similar when the well injection rate is specified, whereas significant differences appear when the well pressure is specified. Finally, on the basis of computational experiments, it was concluded that the influence of wellbore/reservoir coupling in thermal simulations of cyclic steam injection as an enhanced oil recovery method is greater when the well pressure is specified, while for a specified well injection rate the steam flow model for the injection well and the reservoir may be simulated in a non-coupled way.
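A minimal sketch of the coupling idea, not the VapMec/black-oil implementation: when the wellhead pressure is specified, the bottomhole pressure must be consistent between the wellbore and reservoir models, so the two are iterated to convergence; when the injection rate is specified, the wellbore model can be solved once in a non-coupled fashion. The toy model forms and parameters below are assumptions for illustration only.

```python
# Illustrative coupling loop (not VapMec or the black-oil simulator used in the work).
# Toy linear proxies stand in for the wellbore and reservoir models.

def wellbore_model(wellhead_pressure_kpa: float, rate_m3d: float) -> float:
    """Toy wellbore model: bottomhole pressure from wellhead pressure and rate
    (hydrostatic gain minus friction loss, both assumed)."""
    return wellhead_pressure_kpa + 900.0 - 0.5 * rate_m3d

def reservoir_model(bottomhole_pressure_kpa: float) -> float:
    """Toy reservoir inflow model: injection rate the formation accepts at a
    given bottomhole pressure (assumed injectivity)."""
    reservoir_pressure_kpa = 8000.0
    injectivity_m3d_per_kpa = 0.08
    return max(0.0, injectivity_m3d_per_kpa * (bottomhole_pressure_kpa - reservoir_pressure_kpa))

def coupled_specified_pressure(wellhead_pressure_kpa: float, tol: float = 1e-6):
    """Specified wellhead pressure: iterate the wellbore and reservoir models
    until the exchanged rate stops changing."""
    rate = 0.0
    bhp = wellbore_model(wellhead_pressure_kpa, rate)
    for _ in range(100):
        bhp = wellbore_model(wellhead_pressure_kpa, rate)
        new_rate = reservoir_model(bhp)
        if abs(new_rate - rate) < tol:
            break
        rate = new_rate
    return bhp, rate

if __name__ == "__main__":
    bhp, rate = coupled_specified_pressure(9000.0)
    print(f"coupled solution: BHP = {bhp:.1f} kPa, rate = {rate:.1f} m3/d")
    # With a specified injection rate, the wellbore model alone gives the BHP,
    # so the two models can be run in a non-coupled way.
    print(f"non-coupled, specified rate 80 m3/d: BHP = {wellbore_model(9000.0, 80.0):.1f} kPa")
```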
Abstract:
This work analyzes the development experience of the Mato Grande and Sertão do Apodi territories in the state of Rio Grande do Norte, evaluating the actions of the National Program for Strengthening Family Agriculture, specifically its infrastructure line (PRONAF-INFRA), and of the National Program for Sustainable Development of Rural Territories (PRONAT) in these territories. It summarizes the various rural development approaches and adopts the theoretical assumptions of territorial development, the concepts of constructed territory and market-plan territory, and the policy cycle model to analyze the selected experiences. We thus propose to test the hypothesis that most of the actions implemented would lead to the formation of market-plan territories, that is, territories perceived only as platforms for the presentation of projects. The literature and documentary review, combined with case studies, interviews, and direct observation of board meetings, showed that, although the two boards operate under the same laws, rules, and formal regulations, they present clear differences when examined against the theory and concepts used as reference. The Sertão do Apodi territory is closer to a constructed territory, given its pursuit of a broader agenda that is more autonomous and better suited to the reality experienced by local actors. The Mato Grande territory, on the other hand, shows the characteristics of a market-plan territory more strongly. As a result, the Sertão do Apodi territory accesses a larger number of policies and funding sources, securing a larger and more diverse volume of investment than the Mato Grande territory. Despite these differences, the study shows that the territorial boards surveyed are still far from becoming the main forum for managing development based on socially constructed planning. Finally, it shows that the territorial development strategy is relevant, but requires a long path and a deep, continuous learning process to be successfully implemented in rural areas of Northeast Brazil.
Abstract:
This work proposes a modification of the ANFIS (Adaptive Network Based Fuzzy Inference System) structure to obtain a systematic method for the identification and control of nonlinear plants with a wide operating range, using local linear systems: models and controllers. The method is based on the multiple-model approach: local linear models are obtained and then combined by the proposed neurofuzzy structure. After training the structure, a metric is obtained that allows a satisfactory combination of these models, resulting in a global identification of the plant. A controller is designed for each local model, and the global control signal is obtained by blending the local controllers' signals, again through the modified ANFIS. The modification of the ANFIS architecture allows knowledge to be shared between the two neurofuzzy structures, so the same metric obtained to combine the models can be used to combine the controllers. Two case studies are used to validate the new ANFIS structure, and the knowledge sharing is evaluated in the second one. It shows that a single modified ANFIS structure is enough to combine linear models to identify a nonlinear plant and to combine linear controllers to control it. The proposed method allows any identification and control technique to be used to obtain the local models and local controllers, and it also reduces the complexity of using ANFIS for identification and control. This work prioritizes simpler identification and control techniques in order to simplify the use of the method.
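A minimal sketch of the multiple-model idea underlying the proposal, not the modified ANFIS itself: normalized fuzzy weights over the operating point (here simple Gaussian memberships) blend local linear model predictions and local controller signals with the same metric. The membership parameters, local models, and controller gains are assumptions for the example.

```python
# Illustrative sketch: one shared set of normalized weights blends local linear
# models and local controllers (the role played by the modified ANFIS).
import numpy as np

centers = np.array([0.2, 0.5, 0.8])   # operating points of the local models
sigma = 0.15                           # width of each Gaussian membership

# Local linear models y = a*u + b and local proportional gains, one per region.
local_a = np.array([1.0, 2.5, 4.0])
local_b = np.array([0.0, -0.3, -1.2])
local_kp = np.array([2.0, 0.8, 0.5])

def weights(operating_point: float) -> np.ndarray:
    """Normalized membership degrees of the operating point in each region."""
    mu = np.exp(-0.5 * ((operating_point - centers) / sigma) ** 2)
    return mu / mu.sum()

def global_model(u: float, operating_point: float) -> float:
    """Global output as the weighted mix of the local linear models."""
    w = weights(operating_point)
    return float(np.dot(w, local_a * u + local_b))

def global_controller(error: float, operating_point: float) -> float:
    """Global control signal as the same weighted mix of local controller outputs."""
    w = weights(operating_point)
    return float(np.dot(w, local_kp * error))

if __name__ == "__main__":
    op = 0.55
    print("weights:", np.round(weights(op), 3))
    print("blended model output:", round(global_model(u=1.0, operating_point=op), 3))
    print("blended control signal:", round(global_controller(error=0.2, operating_point=op), 3))
```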
Abstract:
In this work, we propose a new approach to Interactive Digital Television (IDTV) aimed at exploring the concept of immersiveness. Several architectures have been proposed for IDTV, but they do not coherently address questions related to immersion. The goal of this thesis is to define formally what immersion and interactivity mean for digital TV and how they may be used to improve the user experience in this new television model. The approach raises questions such as the appropriate choice of equipment to support the sense of immersion, which forms of interaction between users can be exploited in the interaction-immersion context, whether the environment in which an immersive, interactive application is used can influence the user experience, and which new forms of interactivity among users, and between users and interactive applications, can be explored through immersion. As one of the goals of this proposal, we point out new solutions to these issues that require further study. We intend to formalize the concepts surrounding interactivity in the Brazilian digital TV system. In an initial study, this definition is organized into categories or levels of interactivity. From this point, analyses and specifications are made to achieve immersion using DTV. We intend to carry out case studies of immersive interactive applications for digital television in order to validate the proposed architecture. We also address the use of remote devices and propose a middleware architecture that allows their use in conjunction with immersive interactive applications.
Abstract:
The development of Occupational Health and Safety Management Systems (OHSMS) plays an increasingly important role in company performance, since such systems make it possible to promote workers' health and satisfaction and to reduce accident risks. However, for an OHSMS to deliver good results, companies must pay attention to the difficulties commonly encountered during its implementation, seeking to resolve them in an anticipated and structured manner. Accordingly, the main objective of this work is to present guidelines, based on the theoretical framework and on the results of the case studies carried out, for the implementation of OHSMSs in automotive battery manufacturers. A qualitative research method was adopted, based on two case studies in automotive battery manufacturers located in the city of Bauru. Data were collected through semi-structured interviews, document analysis, and on-site observation. Finally, guidelines are proposed concerning the following elements: top management, organizational strategy, organizational culture, the Occupational Health and Safety (OHS) department, OHS technicians, human resources, training, multidisciplinary teams, internal communication, resistance to change, performance indicators, management tools for solving OHS problems, project management, rewards and incentives, and system integration.
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications, those that are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware platforms, and it is necessary to know the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context-provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications through the definition of semantic workflows containing the abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance whose activities are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform also supports execution adaptation in case of service failures, user mobility, and degradation of service quality. OpenCOPI is validated through case studies, specifically applications from the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, compares it with the benefits provided, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
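A minimal sketch of the selection-and-adaptation idea described above, not OpenCOPI's actual API: each abstract activity of a workflow is bound to the best-ranked candidate service according to a quality score, and a failed service triggers rebinding to the next candidate. The names, scoring function, and candidate lists are assumptions for illustration.

```python
# Illustrative sketch only: QoS-based selection of concrete services for abstract
# workflow activities, with rebinding when a service fails. Not the OpenCOPI API.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    availability: float   # 0..1
    latency_ms: float

def score(s: Service) -> float:
    """Toy quality metric: favor availability, penalize latency."""
    return s.availability - 0.001 * s.latency_ms

# Candidate providers (possibly from different middleware) per abstract activity.
candidates = {
    "read_pressure_sensor": [
        Service("midA.pressure", 0.99, 120.0),
        Service("midB.pressure", 0.95, 60.0),
    ],
    "notify_operator": [
        Service("midA.sms", 0.90, 800.0),
        Service("midC.email", 0.97, 300.0),
    ],
}

def build_execution_plan(workflow):
    """Bind each abstract activity to its best-scoring candidate service."""
    return {activity: max(candidates[activity], key=score) for activity in workflow}

def adapt(plan, failed_activity):
    """Rebind one activity to the next-best candidate after a service failure."""
    remaining = [s for s in candidates[failed_activity] if s is not plan[failed_activity]]
    plan[failed_activity] = max(remaining, key=score)
    return plan

if __name__ == "__main__":
    plan = build_execution_plan(["read_pressure_sensor", "notify_operator"])
    print({a: s.name for a, s in plan.items()})
    plan = adapt(plan, "notify_operator")   # e.g. the selected e-mail service went down
    print({a: s.name for a, s in plan.items()})
```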
Abstract:
The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks, but they cannot reach high performance when compared to application-specific devices. Conversely, application-specific devices achieve high performance because they perform only a few tasks, but they offer little flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, so it is possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism is extracted at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on the hardware reuse of datapaths, named RoSE. RoSE views the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Performance was characterized with a set of benchmarks, which demonstrated a speedup of 11x on the execution of some applications.
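A minimal sketch of the compile-time step the abstract refers to, extracting instruction-level parallelism from a dataflow: an ASAP levelization groups operations whose dependencies are already satisfied so that they can issue together on the reconfigurable logic. The example dataflow and the scheduling policy are assumptions, not the RoSA/GCC optimization pass.

```python
# Illustrative ASAP scheduling of a small dataflow graph: operations whose
# inputs are ready are grouped into the same parallel level. This only sketches
# the idea of ILP extraction; it is not the optimization pass added to GCC.
def asap_levels(deps):
    """Group operations into levels; all ops in a level can run in parallel."""
    remaining = dict(deps)
    done = set()
    levels = []
    while remaining:
        ready = sorted(op for op, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("cyclic dependency in the dataflow graph")
        levels.append(ready)
        done.update(ready)
        for op in ready:
            del remaining[op]
    return levels

if __name__ == "__main__":
    # y = (a + b) * (c - d); z = y + e   (operation -> operations it depends on)
    dataflow = {
        "t1 = a + b": set(),
        "t2 = c - d": set(),
        "y = t1 * t2": {"t1 = a + b", "t2 = c - d"},
        "z = y + e": {"y = t1 * t2"},
    }
    for level, ops in enumerate(asap_levels(dataflow)):
        print(f"cycle {level}: {ops}")
```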
Abstract:
Aspect-oriented approaches associated with the different activities of the software development process are, in general, independent, and their models and artifacts are not aligned within a coherent process. In model-driven development, the various models and the correspondences between them are rigorously specified. By integrating aspect-oriented software development (AOSD) and model-driven development (MDD), it is possible to propagate models automatically from one activity to another, avoiding the loss of information and of important decisions established in each activity. This work presents MARISA-MDD, a model-based strategy that integrates aspect-oriented requirements, architecture, and detailed design, using the languages AOV-graph, AspectualACME, and aSideML, respectively. MARISA-MDD defines, for each activity, representative models (and corresponding metamodels) and a number of transformations between the models of each language. These transformations have been specified and implemented in ATL (Atlas Transformation Language), in the Eclipse environment. MARISA-MDD allows automatic propagation between AOV-graph, AspectualACME, and aSideML models. To validate the proposed approach, two case studies, Health Watcher and Mobile Media, were used in the MARISA-MDD environment for the automatic generation of AspectualACME and aSideML models from the AOV-graph model.
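A minimal sketch of what such a model-to-model propagation does conceptually: elements of a requirements-level aspect model are mapped to architecture-level elements. The real MARISA-MDD transformations are ATL rules over the AOV-graph, AspectualACME, and aSideML metamodels; the Python dictionaries and element names below are assumptions used purely for illustration.

```python
# Illustrative sketch of propagating aspectual elements from a requirements-level
# model to an architecture-level model. Structures and names are assumed.
requirements_model = {
    "aspects": [
        {"name": "Security", "crosscuts": ["RegisterComplaint", "UpdateComplaint"]},
        {"name": "Persistence", "crosscuts": ["RegisterComplaint"]},
    ]
}

def to_architecture(model):
    """Map each requirements-level aspect to an aspectual component and one
    crosscutting connector per affected base component."""
    components = [{"name": a["name"] + "Manager", "kind": "aspectual"}
                  for a in model["aspects"]]
    connectors = [{"from": a["name"] + "Manager", "to": target, "kind": "crosscutting"}
                  for a in model["aspects"] for target in a["crosscuts"]]
    return {"components": components, "connectors": connectors}

if __name__ == "__main__":
    arch = to_architecture(requirements_model)
    for c in arch["connectors"]:
        print(f'{c["from"]} --{c["kind"]}--> {c["to"]}')
```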
Abstract:
Formal methods and software testing are tools for obtaining and controlling software quality. When used together, they provide mechanisms for software specification, verification, and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to ensure that a system is free of faults; software testing techniques are therefore necessary to complement the verification and validation of a system. Model-based testing techniques allow tests to be generated from other software artifacts, such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies, such as not fitting into a well-defined classification of coverage criteria. We started our work with a case study that applied the method to an industrial example of a B specification; this case study provided input to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information about the operation's behavior in the test case generation process, and to use new coverage criteria. In addition, we implemented a tool to automate the method and applied it to more complex case studies.
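A minimal sketch of the equivalence-class/boundary-value idea the method builds on, not the thesis tool itself: from an interval precondition (for example, a B operation requiring x : 0..10), valid boundary values yield positive test cases and values just outside the range yield negative ones. The precondition and value choices are assumed for illustration.

```python
# Illustrative sketch: deriving positive and negative test inputs from an
# interval precondition (e.g. a B operation requiring x : 0..10).
def interval_test_cases(lower: int, upper: int):
    """Equivalence-class + boundary-value inputs for lower <= x <= upper."""
    midpoint = (lower + upper) // 2
    positive = [lower, lower + 1, midpoint, upper - 1, upper]   # satisfy the precondition
    negative = [lower - 1, upper + 1]                           # violate the precondition
    return {"positive": positive, "negative": negative}

if __name__ == "__main__":
    cases = interval_test_cases(0, 10)
    print("positive (precondition holds):", cases["positive"])
    print("negative (precondition violated):", cases["negative"])
```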