912 results for Meyer–König and Zeller Operators
Abstract:
1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.¹ These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases,² the EDD, into Finnish copyright legislation in 1998. Now, in 2005, more than half a decade after the domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque pan-European instrument has the potential to bring about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular, and currently peculiarly European, new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD and, second, to realise the potential and risks inherent in the new legislation in its economic, cultural and societal dimensions. 2. Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches, either qualitatively or quantitatively, the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework remain unclear and call for careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is increasing rapidly. The accelerating transformation of information into digital form is an existing fact, not merely a shape of things to come. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer with markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free where public lending libraries provide access to the information online. This also makes it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while production and distribution costs can be kept to a minimum thanks to new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy. 3. Particular issues in the digital economy and information networks All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the mobile Internet, peer-to-peer networks, and local and wide area networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables,³ previously protected in part by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, and importantly, numerous databases are compiled in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not immediately obvious example of this is a database consisting of the physical coordinates of a selected group of customers, used for marketing through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to the plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making available to the public seem to fit digital materials ill, or lead to interpretations that are at variance with the analogue domain as regards the lawful and unlawful uses of information. This may well interfere with, or rework, the way in which commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislation of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, disputes arising on account of the implementation and interpretation of the Directive at the European level acquire significance domestically.
Consequently, guidelines on the correct interpretation of the Directive that import practical, business-oriented solutions may well find application at the European level. This underlines the exigency of a thorough analysis of the implications, meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger international sphere, which in early 2005 differs markedly from the European Union stance, with a direct negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a sui generis database regime or its kin, while both the political and academic discourse on the matter abounds. 5. The objectives of the study The background outlined above, with its several open issues, calls for a detailed study of the following questions: - What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? - How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? - What opportunities and threats does the current protection present to creators, users and society as a whole, including the commercial and cultural implications? - The difficult question of the relation between database protection and the protection of factual information as such. 6. Disposition The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part introduces the political background, rationale and subsequent legislative evolution of European database protection, reflected against the international backdrop. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen, existing two-tier model of database protection, reviewing both its copyright and sui generis facets in detail, together with the emergent application of the machinery in real-life societal and particularly commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright-protected databases, is provided. For purposes of further comparison, a chapter on the precursor of the sui generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinise its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue of IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
This thesis studies the grid connections, grid codes and measurement methods of variable-speed wind power drives in Europe. General matters related to the grid codes of wind power plants are covered by the IEC 61000 standard series, and the regulations concerning measurements by the standard IEC 61400-21. The thesis surveys the general grid codes in Europe and compares them with each other, and identifies the most important measurements. In addition, a measurement device suitable for the standard measurements is presented, and the performance of a full-power converter in a wind farm is analysed. It was found that carrying out and analysing the measurements required by the standard is a very challenging process. Differences between grid codes cause difficulties between the wind turbine manufacturer and the grid operators. Consequently, the converter manufacturer must also be aware of the grid codes and standards in force.
Abstract:
This Master's thesis studied how to develop the operation of the network involved in the maintenance of road traffic telematics equipment. The network studied comprises the Valtti unit (Valtakunnallinen liikennetelematiikka ja liikenteenhallinnan tietopalvelut, the national traffic telematics and traffic management information services), the ELY Centres, the road traffic centres, the management and supervision actor, and the maintenance contractors. The purpose was to establish how the efficiency of the network's operation can be improved. This included identifying the actors in the network and defining their roles and responsibilities. In addition, the thesis examined how the flow of information between the actors can be improved and how well the ITIL framework suits the operation of the network. For the thesis, representatives of the network's actors were interviewed about their views on the operation of the network and its development needs.
Abstract:
The aim of this work was to describe the Finnish pharmaceutical market across the entire value chain, starting from the pharmaceutical manufacturers and ending at retail sale or hospital distribution. In addition, the work described the state of outsourced services in public healthcare, particularly as regards logistics services. The market description was used to determine the market potential of a Finnish logistics company in these markets. The second aim of the work was to examine the markets using the tools of futures research and to create scenarios for these markets 10–15 years from the present. Based on the scenarios and the market potential study, the outcome of the work is a set of proposed actions for the client company concerning the attractiveness of the markets under study from the company's point of view. The information used in the study was collected from public reports and surveys, and by conducting a large-scale interview study among private and public actors throughout the Finnish pharmaceutical and hospital supplies value chain. The interview study is the most significant source of information for the work, and the interpretation of weak signals utilised in the work is based on the information obtained through the interviews.
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising: for most of the test problems the GA obtains improved or best-known solutions, and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
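As a hedged illustration of the kind of encoding such a GA can use (the abstract does not specify the representation, so the integer-array encoding, operators and cost coefficients below are assumptions, not the thesis's actual method), a single-allocation solution can be stored as an array in which position i holds the hub serving node i, with self-assigned nodes acting as hubs:

```python
import random

def fitness(alloc, w, d, chi=3.0, alpha=0.75, delta=2.0):
    """Classic single-allocation hub cost: each i-to-j flow is collected to
    hub alloc[i], transferred at a discount (alpha) to hub alloc[j], then
    distributed. w = flow matrix, d = distance matrix (assumed inputs)."""
    n = len(alloc)
    return sum(w[i][j] * (chi * d[i][alloc[i]]
                          + alpha * d[alloc[i]][alloc[j]]
                          + delta * d[alloc[j]][j])
               for i in range(n) for j in range(n))

def repair(alloc):
    """Keep the encoding valid after variation: every node referenced as a
    hub must itself be assigned to itself."""
    alloc = list(alloc)
    for h in set(alloc):
        alloc[h] = h
    return alloc

def crossover(p1, p2):
    """One plausible operator: splice the parents' allocation arrays."""
    cut = random.randint(1, len(p1) - 1)
    return repair(p1[:cut] + p2[cut:])

def mutate(alloc, rate=0.05):
    """Reassign a few nodes to one of the currently open hubs."""
    hubs = sorted(set(alloc))
    return repair([random.choice(hubs) if random.random() < rate else h
                   for h in alloc])
```

A capacitated variant would additionally penalize or repair assignments whose total inbound flow exceeds a hub's capacity.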
Abstract:
Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct - the minimal discerning set of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research has proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction that considers suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm is proposed. Although genetic algorithms have been used to calculate reducts in previous work, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimates the quality of each bireduct by the values of two objective functions as the evolution progresses, so that a set of bireducts with optimized values of these objectives is obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method, and two datasets were used for a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to obtain good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on prediction accuracy was also analysed. The prediction accuracies of the proposed method are shown to be comparable with the best results in the machine learning literature, and in some cases outperform them.
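To make the encoding concrete, here is a minimal sketch (all names are assumptions; the thesis's actual operators and objective definitions may differ) of a bireduct individual as a pair of bit masks, its two objective values, and the validity test requiring that the chosen attributes discern every pair of retained objects with different decisions:

```python
import random

def consistent(data, labels, attr_mask, obj_mask):
    """Validity test for a bireduct candidate: the selected attributes must
    discern every pair of selected objects with different decision labels."""
    objs = [i for i, keep in enumerate(obj_mask) if keep]
    attrs = [a for a, keep in enumerate(attr_mask) if keep]
    for i in objs:
        for j in objs:
            if labels[i] != labels[j] and \
                    all(data[i][a] == data[j][a] for a in attrs):
                return False
    return True

def objectives(attr_mask, obj_mask):
    """The two goals a multi-objective GA trades off: few attributes
    (horizontal reduction) and many covered objects (vertical reduction)."""
    return sum(attr_mask), -sum(obj_mask)   # both minimized

def uniform_crossover(a, b):
    """Uniform crossover, applied independently to either mask."""
    return [random.choice(pair) for pair in zip(a, b)]

def bit_flip(mask, rate=0.02):
    """Bit-flip mutation on a mask."""
    return [bit ^ (random.random() < rate) for bit in mask]
```

A Pareto-based selection over these two objectives then yields the set of bireducts with optimized trade-offs that the abstract describes.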
Abstract:
The integrative role that the Court of Justice of the European Communities (CJCE) has played in European construction is well known and well documented. Less well known are the reasons that motivated it, and still motivate it. While some have already examined this question, one aspect has nevertheless been completely neglected: the influence that the prevailing context may have had in this respect on Community case law, and more precisely on the orientation the Court has chosen to give it. In this framework, the Court's audiences play a decisive role. To ensure that its decisions are properly applied, the Court must take into account the expectations of the Member States, the European institutions, the legal community (national courts, advocates general, legal scholarship and practitioners) and European nationals (citizens and economic operators). Thus, to the question of why the CJCE decides (or not) to intervene, in the field of the free movement of goods, in favour of European economic integration, I advance the following hypothesis: the Court's intervention depends on one central variable, its audiences, whose expectations (and respective weight) are themselves determined by the prevailing context. The objective is to bring out the more ideological aspect of the Court's decision-making, largely overlooked by legal scholarship, and to demonstrate that the fluctuating character of Community case law in this field, and in particular in the interpretation of Article 28 of the EC Treaty, is explained by the Court's taking into account the expectations of its audiences, which have for the most part adhered to neoliberal ideology. In order to better grasp the variable weight of each of the Court's audiences, the first part assesses the context of European construction from 1990 to 2006, and in particular its neoliberal turn. The study of the audiences and their impact on the case law forms the second part of the thesis. I will thus show that Community case law is case law "under influence", essentially in the service of the completion and then the deepening of the European internal market.
Abstract:
Combinational digital circuits can be evolved automatically using Genetic Algorithms (GAs). Until recently this technique used linear chromosomes and one-dimensional crossover and mutation operators. In this paper, a new method for representing combinational digital circuits as two-dimensional (2D) chromosomes, together with suitable 2D crossover and mutation techniques, is proposed. With this method, the convergence speed of the GA can be increased significantly compared to conventional methods. Moreover, the 2D representation and crossover operation provide the designer with better visualization of the evolved circuits. In addition, a technique to display the evolved circuits automatically has been developed with the help of MATLAB.
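The abstract does not spell the encoding out, but a 2D chromosome of this kind is commonly a grid of gates; the sketch below (gate set, feed-forward wiring and block-swap operator are all assumptions) shows how a two-dimensional layout makes a rectangular-block crossover natural:

```python
import random

GATES = ["AND", "OR", "XOR", "NOT", "WIRE"]  # assumed gate set

def random_cell(rows):
    """A cell: a gate type plus the two rows of the previous column it reads."""
    return (random.choice(GATES), random.randrange(rows), random.randrange(rows))

def random_chromosome(rows, cols):
    """2D chromosome: a rows x cols feed-forward grid of gates."""
    return [[random_cell(rows) for _ in range(cols)] for _ in range(rows)]

def crossover_2d(a, b):
    """Swap an aligned rectangular block between the parents - a natural 2D
    analogue of one-point crossover that keeps spatially close gates together."""
    rows, cols = len(a), len(a[0])
    r1, r2 = sorted(random.sample(range(rows + 1), 2))
    c1, c2 = sorted(random.sample(range(cols + 1), 2))
    child = [row[:] for row in a]
    for r in range(r1, r2):
        child[r][c1:c2] = b[r][c1:c2]
    return child

def mutate_2d(chrom, rate=0.05):
    """Point mutation: re-randomize individual cells."""
    rows = len(chrom)
    return [[random_cell(rows) if random.random() < rate else cell
             for cell in row] for row in chrom]
```

Fitness would then simulate the grid against the target truth table; the same grid layout maps directly onto a drawing of the circuit, which is what makes automatic visualization straightforward.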
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks - distributed systems composed of tiny computational devices with scarce resources - have emerged. The further development and heterogeneous connection of such systems impose new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features that make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
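As a rough, self-contained sketch of the evaluation loop described above (the thesis evolves genuine program representations; the `run_on_node` stand-in here merely runs a fixed max-id gossip so the example executes end to end, and every name is an assumption, not the thesis API):

```python
import random

def run_on_node(program, node, nodes):
    """Stand-in for interpreting an evolved program: gossip the largest id
    seen so far and consider oneself elected only while holding that id."""
    peer = random.choice(nodes)
    node["best"] = max(node["best"], peer["best"])
    node["elected"] = node["best"] == node["id"]

def evaluate(program, num_simulations=10, steps=50, net_size=16):
    """Objective function in the spirit described above: execute the same
    candidate program locally on every node of several randomized networks
    and score the distance of the emergent global state from the goal
    behavior (leader election: exactly one elected node)."""
    error = 0.0
    for _ in range(num_simulations):
        ids = random.sample(range(10 * net_size), net_size)
        nodes = [{"id": i, "best": i, "elected": False} for i in ids]
        for _ in range(steps):
            for node in nodes:
                run_on_node(program, node, nodes)
        error += abs(sum(n["elected"] for n in nodes) - 1)
    return error / num_simulations  # 0.0 means the goal behavior is met
```

Averaging over several randomized topologies and id assignments, as the abstract describes, is what keeps the evolved programs from overfitting a single network instance.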
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via the concept of fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of B-spline-based neurofuzzy systems, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots acting as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network output is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. The extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
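A minimal numerical sketch of the two steps at the heart of this construction, mapping an input point to barycentric coordinates over one triangle and evaluating a triangular Bezier-Bernstein polynomial there (knot placement, degree and coefficient indexing below are illustrative assumptions, not the paper's algorithm):

```python
from math import factorial

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c);
    inside the triangle all three are non-negative and sum to one."""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    u = ((b[0]-p[0])*(c[1]-p[1]) - (c[0]-p[0])*(b[1]-p[1])) / det
    v = ((c[0]-p[0])*(a[1]-p[1]) - (a[0]-p[0])*(c[1]-p[1])) / det
    return u, v, 1.0 - u - v

def bezier_bernstein(coeffs, u, v, w, degree):
    """Evaluate a triangular Bezier-Bernstein polynomial: the multinomial
    basis terms are non-negative and sum to one inside the triangle, which
    is what supports their reading as fuzzy memberships."""
    total = 0.0
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            k = degree - i - j
            basis = (factorial(degree)
                     // (factorial(i) * factorial(j) * factorial(k))
                     ) * u**i * v**j * w**k
            total += coeffs[(i, j, k)] * basis
    return total

# Degree-1 example: corner coefficients reproduce the plane z = x + 2y,
# so the point (0.25, 0.25) evaluates to 0.75.
coeffs = {(1, 0, 0): 0.0, (0, 1, 0): 1.0, (0, 0, 1): 2.0}
u, v, w = barycentric((0.25, 0.25), (0, 0), (1, 0), (0, 1))
print(bezier_bernstein(coeffs, u, v, w, degree=1))
```

In the full network the input space is tiled by Delaunay triangles, the enclosing triangle is located first, and the coefficients are the trainable weights.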
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time when the interfacial force is linear. However, this linear system is large and dense, and it is therefore challenging to streamline its solution. Moreover, while the same linear system, or one of similar structure, could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, for which we obtain a rigorous estimate. This matrix is expeditiously computed using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and with simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve.
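For concreteness, here is a small 1D sketch of the spreading and interpolation operators whose lagging (evaluating them at the old interface position) makes the update linear in the future configuration; the 4-point kernel is Peskin's standard choice, while the grid layout and names are assumptions of this sketch:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's standard 4-point regularized delta kernel (support |r| < 2,
    with r measured in grid cells)."""
    r = np.abs(np.asarray(r, dtype=float))
    phi = np.zeros_like(r)
    m1 = r < 1
    m2 = (r >= 1) & (r < 2)
    phi[m1] = (3 - 2*r[m1] + np.sqrt(1 + 4*r[m1] - 4*r[m1]**2)) / 8
    phi[m2] = (5 - 2*r[m2] - np.sqrt(-7 + 12*r[m2] - 4*r[m2]**2)) / 8
    return phi

def spread(X, F, x, h):
    """Spread Lagrangian forces F at interface points X onto the grid x
    (the operator S): f(x) = sum_k F_k * delta_h(x - X_k)."""
    return sum(f * peskin_delta((x - Xk) / h) / h for Xk, f in zip(X, F))

def interpolate(X, u, x, h):
    """Interpolate grid velocity u back to the interface points (S*); in 1D
    the h and 1/h factors cancel. Freezing X in both S and S* at the old
    time level is the lagging that yields a linear system for the update."""
    return np.array([np.sum(u * peskin_delta((x - Xk) / h)) for Xk in X])
```

The dense system the paper tackles arises because composing interpolation, the fluid solve, and spreading couples every interface point to every other one.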
Abstract:
This dissertation studies the innovative technological capabilities available in merger and acquisition processes and the relationship between these processes and the accumulation of technological capabilities required to achieve convergence of technology and services. The study examined fourteen companies in the telecommunications industry from 2002 to 2007. From the 1990s onward there was, on the one hand, a profusion of studies on technological capabilities as a source of competitive advantage; on the other hand, there are studies on mergers and acquisitions aimed at evaluating the motivations derived from technological factors, the stimulus to competition and the opening of the market. However, few long-period empirical studies examine the correlation of these events in the telecommunications industry from the standpoint of firm-level technological capability and from the strategic perspective of the enterprise based on dynamic capabilities. An analytical framework already available in the literature was used to describe the contribution of merger and acquisition processes to the accumulation of innovative technological capabilities in the companies studied; the framework was, however, adapted specifically to the telecommunications industry. This dissertation also studies the importance of strategic mergers and acquisitions as an organizational form for complementing technological capability from external sources. The empirical evidence was collected from information and databases published by the companies examined in this dissertation. Regarding the results, it was found that: 1. In terms of technological capabilities brought into strategic mergers and acquisitions, the equipment manufacturers entered with 71% of 55 technological capabilities and the service operator companies with 61% of 71 technological capabilities. 2. In terms of the implications of mergers and acquisitions for the configuration of the resulting technological capabilities, the equipment manufacturers increased the ratio of convergence of technology by 31% and the service operators increased the ratio by 4% through changes in organizational structure. 3. Regarding the accumulation of technological capability to achieve convergence of technology and services, an increase in these capabilities after the merger and acquisition processes was verified in the companies studied. Considering the limitations of this study, the evidence found in this dissertation suggests that companies use strategic merger and acquisition processes to seek external complementation of their knowledge base in order to compete in the globalized market. The results demonstrate that this movement has implied an alteration and accumulation of the organizations' capabilities in innovative technological activities regarding the convergence of technology and services.
Abstract:
The convergence between services typically provided by fixed telecommunications companies and services typically provided by mobile operators is a phenomenon that has been occurring worldwide for more than ten years. The convergence between the fixed and mobile telephony industries, as an evolutionary phenomenon, can have three main driving forces: simple technological evolution, based, for example, on the digitalization of telecommunications; the combination of a market orientation, fostered by the high level of competition, with the pursuit of technological innovation; or a simple convergence between the assets of the different industries, seeking a more efficient application of capital. Worldwide, numerous convergent telecommunications services are already known, although convergence in telecommunications in Brazil can still be considered incipient. Considering the significant aggregate revenue of the Brazilian fixed and mobile telecommunications market and the high level of competition, the study and application of convergent solutions may generate a competitive edge. In this sense, the present work proposes to evaluate the main telecommunications companies in Brazil and their respective convergent resources or services according to the VRIO framework developed by Jay Barney, grounded in the resource-based view of strategy (RBV - Resource Based View).
Abstract:
This work describes some of the solutions currently adopted by the federal courts for recording hearings, and points to the opportunity that the interactive environment of Brazilian Digital TV offers for a proposed model of an electronic office document that can serve as a carrier for the result of a hearing recording (text, sound and image) and contribute to a paradigm shift in the current procedural systems (software). The objective is to establish a standard grounded in public policy (the Brazilian Electronic Government and the Brazilian Digital Terrestrial TV System), in which there are no commercial restrictions on the use of new information and communication technologies, at least insofar as needed to favour social inclusion without loss of efficiency. The work comprises two types of content: a textual part and a digital part. The textual part contains the results of a survey conducted with the federal courts and presents the main points of the Brazilian Electronic Government and the Brazilian Digital Terrestrial TV System; it also describes the structure assembled for the preparation and execution of the digital part. The digital part, in turn, gathers the material used to present prototypes (videos and sample applications) that demonstrate the interactive possibilities of Brazilian Digital TV and the benefits that litigants and legal practitioners would gain from the proposal.
Requalification of a public asset in technological obsolescence: the Tronco Centro railway of Pernambuco
Abstract:
The study aimed to survey the possibilities for requalifying the Linha Tronco Centro de Pernambuco (LTCPE), a century-old railway asset in metre gauge, 608 km long, running along the backbone of the State of Pernambuco. Its relevance is justified by the concessionaire's request to the federal government to return the railway assets. The operator points out that the railway line will be replaced by another line, the Nova Transnordestina railway, a high-performance broad-gauge line connecting the states of Pernambuco, Piauí and Ceará. The two railways run in parallel along the entire length of the LTCPE. The intention was therefore to identify uses for this branch line other than freight transport, since its requalification should not place it in competition with the Transnordestina. The research was developed on the basis of an understanding of the line's historical context, an analysis of the national picture of the sector, and an analysis of the various socioeconomic environments in which the branch line is embedded. To obtain the results, questionnaires were administered to professionals from the public planning and logistics services, operators and others linked to consultancy and engineering. Based on the set of stakeholders identified, an institutional matrix was built presenting the critical path of action and the interrelations among the public stakeholders. In the same vein, a SWOT matrix (Strengths, Weaknesses, Opportunities and Threats) was drawn up to articulate the set of data and actions, indicating activities and financial investments that will support the technical studies for the requalification. As a result, the study identified the stretches that can be requalified, their intended use, and new alternatives for using the remaining railway assets.