990 results for Knowledge Database


Relevance: 20.00%

Publisher:

Abstract:

Knowledge has become a decisive factor in firm performance. Companies actively acquire new knowledge from their external environment and store it in their databases. New knowledge is a basic element of innovations and new ideas. New ideas must also be commercialized so that they can be used to gain competitive advantage. The absorptive capacity model combines the knowledge-processing capabilities that affect a firm's ability to exploit knowledge effectively. Before knowledge can be used to create new products and services, it must be shared within the firm and transformed to serve the firm's operations. Previous research has strongly linked innovations to a firm's ability to renew itself. This master's thesis examines the effect of social integration mechanisms on the transformation of potential absorptive capacity into realized absorptive capacity. The effect of cooperation and trust between individuals and departments on the internalization of knowledge was studied. The thesis is based on a renewal-capability survey conducted in a multinational company in spring 2006, and it focuses on a firm's ability to renew itself through new knowledge and innovations. The thesis is a quantitative case study. According to its findings, social integration mechanisms are important in exploiting new knowledge, and the explicitness of knowledge is observed to affect its transformation into a resource useful to the firm.

Relevance: 20.00%

Publisher:

Abstract:

The aim of this master's thesis is to examine how companies balance knowledge sharing and protection in collaborative innovation projects, and how contracts, intellectual property rights, and trust can affect this balance. In collaboration, companies must share necessary knowledge with their partner, but at the same time they must take care not to lose knowledge belonging to their core competence, and with it their competitive advantage. Companies have several means of preventing knowledge leakage. The thesis focuses on the use of patents, contracts, and trade secrets as knowledge-protection mechanisms. These protection mechanisms affect the trust between partners, and thus also their willingness to share knowledge with each other. If partners do not share enough knowledge, the collaboration may fail. The roles and interaction of contracts, intellectual property rights, and trust are examined in bilateral collaboration projects. The thesis presents four case examples compiled from interviews at a Finnish forest-industry company.

Relevance: 20.00%

Publisher:

Abstract:

The individual work of the student takes on particular relevance within the European Higher Education Area. At the same time, this context presents professors with a complex challenge, both in their educational and formative work with students and in the organization of teaching plans. Among the different activities that the ECTS system comprises, the preparation and taking of examinations stands out. This means integrating into the learning process the individual, autonomous work that students must carry out to acquire knowledge and pass the assessment tests. To achieve this objective, a database of multiple-choice questions with three possible answers (similar to those that make up part of the Pharmaceutical Technology examinations) has been developed. Its design facilitates use by professors and students interested in this area and allows both the formal interactive assessment of compulsory and optional subjects and its application in a recreational setting that makes its use more attractive to students. The database has been placed within everyone's reach through the web page of the Pharmaceutical Technology Teaching Innovation Group of the UB.
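The question bank described above can be represented in a very simple form. The record layout and grading helper below are a hypothetical sketch, not the actual schema of the UB group's database:

```python
# Hypothetical record layout for a three-choice question bank.
# Field names and the sample question are invented for illustration.
questions = [
    {
        "subject": "Pharmaceutical Technology",
        "text": "Which process reduces the particle size of a powder?",
        "choices": ["Granulation", "Milling", "Coating"],
        "answer": 1,  # index of the correct choice
    },
]

def grade(question, chosen_index):
    """Return True when the selected choice is the correct one."""
    return chosen_index == question["answer"]

print(grade(questions[0], 1))  # → True
```

The same records could drive both a formal interactive test and a recreational quiz, since the grading logic is independent of how the questions are presented.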

Relevance: 20.00%

Publisher:

Abstract:

The use of forensic data by police services and investigating agencies in an intelligence perspective is still fragmentary and to some extent ignored. In order to increase the efficiency of criminal investigation in targeting illegal drug trafficking organisations, and to provide valuable information about their methods, objective drug analysis results must be included and interpreted already during the investigation phase. The value of visual, physical and chemical data of seized ecstasy tablets as a support for criminal investigation, on both a strategic and a tactical level, has been investigated. In a first phase, different characteristics of ecstasy tablets were studied in order to define their relevance, variation, correlation and discriminating power in an intelligence perspective. Over five years, more than 1200 cases of ecstasy seizures (concerning about 150,000 seized tablets) coming from different regions of Switzerland (City and Canton of Zurich, Cantons Ticino, Neuchâtel and Geneva) were systematically recorded. This turned out to be a statistically representative database including large and small cases. In the second phase, various comparison and clustering methods were tested and evaluated against the type and relevance of tablet characteristics, thus increasing knowledge about synthetic drugs, their manufacturing and trafficking. Finally, analytical methodologies were investigated and formalised, applying traditional intelligence methods. In this context, classical tools used in criminal analysis (such as the I2 Analyst Notebook and I2 Ibase) were tested and adapted to address the specific needs of forensic drug intelligence. The interpretation of the links established between seizures provides valuable information about criminal organisations and their trafficking methods. In the final part of this thesis, practical examples illustrate the use and value of such information.
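The kind of profile comparison and linkage described above can be sketched in a few lines. The feature names, tolerance scales and linkage threshold below are invented for illustration and are not taken from the thesis:

```python
from math import sqrt

# Hypothetical physical/chemical profiles of three seizures.
profiles = {
    "seizure_A": {"diameter_mm": 8.1, "weight_mg": 251.0, "mdma_pct": 32.0},
    "seizure_B": {"diameter_mm": 8.1, "weight_mg": 249.5, "mdma_pct": 31.5},
    "seizure_C": {"diameter_mm": 9.0, "weight_mg": 310.0, "mdma_pct": 18.0},
}

FEATURES = ["diameter_mm", "weight_mg", "mdma_pct"]

# Scale each feature by a rough measurement tolerance so that no single
# characteristic dominates the comparison (values are illustrative).
SCALES = {"diameter_mm": 0.2, "weight_mg": 5.0, "mdma_pct": 2.0}

def distance(p, q):
    """Scaled Euclidean distance between two tablet profiles."""
    return sqrt(sum(((p[f] - q[f]) / SCALES[f]) ** 2 for f in FEATURES))

def link_seizures(profiles, threshold=3.0):
    """Link two seizures when their profile distance falls below the threshold."""
    names = list(profiles)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if distance(profiles[a], profiles[b]) < threshold:
                links.append((a, b))
    return links

print(link_seizures(profiles))  # → [('seizure_A', 'seizure_B')]
```

In an intelligence setting, such links between otherwise unrelated seizures are what suggest a common tablet batch or production source.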

Relevance: 20.00%

Publisher:

Abstract:

Automated genome sequencing and annotation, as well as large-scale gene expression measurement methods, generate a massive amount of data for model organisms. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data will greatly improve both the search speed and the quality of the results by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique gene names and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used.
These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms, such as human and mouse. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
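The two-step indirection described above (expression data keyed by permanent target identifiers, targets periodically remapped to the current gene catalogue) can be sketched as follows. All identifiers and the dictionary layout are invented for illustration and do not reflect CleanEx's actual schema:

```python
# Weekly rebuilt mapping: permanent target identifier -> official gene name.
# Identifiers below are invented for illustration.
target_to_gene = {
    "TARGET:0001": "TP53",
    "TARGET:0002": "BRCA1",
    "TARGET:0003": "TP53",  # two different probes may map to the same gene
}

# Expression measurements are keyed by target, not by gene, so they stay
# valid when the gene catalogue changes.
experiments = [
    {"dataset": "chipA", "target": "TARGET:0001", "value": 5.2},
    {"dataset": "chipB", "target": "TARGET:0003", "value": 4.8},
    {"dataset": "chipA", "target": "TARGET:0002", "value": 1.1},
]

def build_gene_index(experiments, target_to_gene):
    """Group expression entries under official gene names, so that data
    produced with different technologies can be compared side by side."""
    index = {}
    for exp in experiments:
        gene = target_to_gene.get(exp["target"])
        if gene is not None:
            index.setdefault(gene, []).append((exp["dataset"], exp["value"]))
    return index

index = build_gene_index(experiments, target_to_gene)
print(index["TP53"])  # → [('chipA', 5.2), ('chipB', 4.8)]
```

Rebuilding only the target-to-gene mapping, rather than the expression records themselves, is what keeps the nomenclature consistent as the catalogues evolve.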

Relevance: 20.00%

Publisher:

Abstract:

Even though research on innovation in services has expanded remarkably, especially during the past two decades, there is still a need to increase understanding of the special characteristics of service innovation. In addition to studying innovation in service companies and industries, research has recently also focused more on services in innovation, as several previous studies have demonstrated the significance of so-called knowledge-intensive business services (KIBS) for the competitive edge of their clients, other companies, regions and even nations. This study focuses on technology-based KIBS firms, and on the technology and engineering consulting (TEC) sector in particular. These firms have multiple roles in innovation systems, and thus there is also a need for in-depth studies that increase knowledge about the types and dimensions of service innovations as well as the underlying mechanisms and procedures that make the innovations successful. The main aim of this study is to generate new knowledge in the fragmented research field of service innovation management by recognizing the different types of innovations in TEC services and some of the enablers of and barriers to innovation capacity in the field, especially from the knowledge management perspective. The study also aims to shed light on some of the existing routines and new constructions needed for enhancing service innovation and knowledge processing activities in KIBS companies of the TEC sector. The main data in this research include literature reviews and public data sources, together with a qualitative research approach with exploratory case studies conducted through interviews at technology consulting companies in Singapore in 2006. These complement the qualitative interview data gathered previously in Finland during a larger research project in 2004-2005. The data is also supplemented by a survey conducted in Singapore.
The respondents to the survey by Tan (2007) were technology consulting companies operating in the Singapore region. The purpose of the quantitative part of the study was to validate and further examine specific aspects, such as the influence of knowledge management activities on innovativeness and the different types of service innovations in which the technology consultancies are involved. Singapore is known as a South-east Asian knowledge hub and is thus a significant research area where several multinational knowledge-intensive service firms operate. Typically, the service innovations identified in the studied TEC firms comprised several dimensions of innovation. In addition to technological aspects, innovations were, for instance, related to new client interfaces and service delivery processes. The main enablers of and barriers to innovation seem to be partly similar in Singaporean firms as in the earlier study of Finnish TEC firms. The empirical studies also brought forth the significance of various sources of knowledge and knowledge processing activities as the main driving forces of service innovation in technology-related KIBS firms. A framework was also developed to study the effect of knowledge processing capabilities, as well as some moderators, on the innovativeness of TEC firms. Especially efficient knowledge acquisition and environmental dynamism seem to influence the innovativeness of TEC firms positively. The results of the study also contribute to the present service innovation literature by focusing more on 'innovation within KIBS' rather than 'innovation through KIBS', which has been the typical viewpoint stressed in the previous literature. Additionally, the study provides several possibilities for further research.

Relevance: 20.00%

Publisher:

Abstract:

The driving forces of technology and globalization continuously transform the business landscape in a way that undermines the existing strategies and innovations of organizations. The challenge for organizations is to establish conditions in which they are able to create new knowledge for innovative business ideas in interaction with other organizations and individuals. Innovation processes continuously need new external stimulation and seek new ideas, information and knowledge located more and more outside traditional organizational boundaries. In several studies, the early phases of the innovation process have been considered the most critical ones; during these phases, the innovation process can emerge or conclude. External knowledge acquisition and utilization have been found to be important at this stage of the innovation process, providing information about the development of future markets and the needs for new innovative business ideas. To make this possible, new methods and approaches to managing proactive knowledge creation and sharing activities are needed. In this study, knowledge creation and sharing in the early phases of the innovation process is studied, and the understanding of knowledge management in the innovation process in an open and collaborative context is advanced. Furthermore, the innovation management methods in this study are combined in a novel way to establish an open innovation process and are tested in real-life cases. For these purposes, two complementary and sequentially applied group work methods, the heuristic scenario method and the idea generation process, are examined by focusing the research on the support of the open knowledge creation and sharing process. The research objective of this thesis concerns two doctrines: innovation management, including knowledge management, and futures research concerning the scenario paradigm.
This thesis also applies a group decision support system (GDSS) in the idea generation process to utilize the knowledge converged during the scenario process.

Relevance: 20.00%

Publisher:

Abstract:

This thesis supplements the systematic approach to competitive intelligence and competitor analysis by introducing an information-processing perspective on the management of the competitive environment and the competitors therein. The cognitive questions connected to the intelligence process, and the means that organizational actors use in sharing information, are discussed. The ultimate aim has been to deepen knowledge of the different intraorganizational processes that a corporate organization uses to manage and exploit the vast amount of competitor information it receives from the environment. Competitor information and competitive knowledge management is examined as a process in which organizational actors identify and perceive the competitive environment by using cognitive simplification, make interpretations resulting in learning, and finally utilize competitor information and competitive knowledge in their work processes. The sharing of competitor information and competitive knowledge is facilitated by intraorganizational networks that evolve as a means of developing a shared, organizational-level knowledge structure and of ensuring that the right information is in the right place at the right time. This thesis approaches competitor information and competitive knowledge management both theoretically and empirically. Based on the conceptual framework developed by theoretical elaboration, further understanding of the studied phenomena is sought through an empirical study. The empirical research was carried out in a multinationally operating forest-industry company. This thesis makes some preliminary suggestions for improving the competitive intelligence process.
It is concluded that managing competitor information and competitive knowledge is not simply a question of managing information flow or improving the sophistication of competitor analysis; the crucial question to be solved is rather how to improve the cognitive capabilities connected to identifying and interpreting the competitive environment, and how to increase learning. It is claimed that competitive intelligence cannot be treated like an organizational function or assigned solely to a specialized intelligence unit.

Relevance: 20.00%

Publisher:

Abstract:

Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined, and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network and built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
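A fuzzy front end of the kind the first tool builds on can be sketched as follows. The single input (fuel chlorine content), the membership breakpoints and the toy risk score are all invented for illustration; the actual model is trained on the corrosion database of fuel and bed material analyses:

```python
# Minimal sketch of fuzzifying one fuel-analysis input and producing a
# toy corrosion-risk score. All numeric parameters are illustrative.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_chlorine(cl_pct):
    """Degree of membership of fuel chlorine content (wt-%) in three fuzzy sets."""
    return {
        "low": triangular(cl_pct, -0.1, 0.0, 0.2),
        "medium": triangular(cl_pct, 0.1, 0.3, 0.5),
        "high": triangular(cl_pct, 0.4, 0.8, 10.0),
    }

RISK_WEIGHTS = {"low": 0.1, "medium": 0.5, "high": 0.9}

def corrosion_risk(memberships):
    """Toy defuzzification: membership-weighted average risk in [0, 1]."""
    total = sum(memberships.values())
    if total == 0.0:
        return 0.0
    return sum(RISK_WEIGHTS[k] * v for k, v in memberships.items()) / total

m = fuzzify_chlorine(0.25)      # 0.25 wt-% chlorine in the fuel
risk = corrosion_risk(m)
print(m, risk)
```

In the thesis model, membership degrees like these would feed a neural network trained on measured corrosion data rather than a fixed weighting rule.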

Relevance: 20.00%

Publisher:

Abstract:

1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1 These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases,2 the EDD, into Finnish Copyright legislation in 1998. Now in the year 2005, after more than half a decade of the domestic implementation it is yet uncertain as to the proper meaning and construction of the convoluted qualitative criteria the current legislation employs as a prerequisite for the database protection both in Finland and within the European Union. Further, this opaque Pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD and second, to realise the potential and risks inherent in the new legislation in economic, cultural and societal dimensions. 2. 
Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD such as presentation of independent information, what constitutes an essential investment in acquiring data and when the reproduction of a given database reaches either qualitatively or quantitatively the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework remain unclear and call for a careful analysis. As for second task, it is already obvious that the practical importance of the legal protection providedby the database right is in the rapid increase. The accelerating transformationof information into digital form is an existing fact, not merely a reflection of a shape of things to come in the future. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer a markedly easier and faster access to the wanted material and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form or even free by means of public lending libraries providing access to the information online. This also renders it possible for authors and publishers to make available and sell their products to markedly larger, international markets while the production and distribution costs can be kept at minimum due to the new electronic production, marketing and distributionmechanisms to mention a few. The troublesome side is for authors and publishers the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed. 
The fear of illegal copying canlead to stark technical protection that in turn can dampen down the demand for information goods and services and furthermore, efficiently hamper the right of access to the materials available lawfully in electronic form and thus weaken the possibility of access to information, education and the cultural heritage of anation or nations, a condition precedent for a functioning democracy. 3. Particular issues in Digital Economy and Information Networks All what is said above applies a fortiori to the databases. As a result of the ubiquity of the Internet and the pending breakthrough of Mobile Internet, peer-to-peer Networks, Localand Wide Local Area Networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables,3previously protected partially by the old section 49 of the Finnish Copyright act are available free or for consideration in the Internet, and by the same token importantly, numerous databases are collected in order to enable the marketing, tendering and selling products and services in above mentioned networks. Databases and the information embedded therein constitutes a pivotal element in virtually any commercial operation including product and service development, scientific research and education. A poignant but not instantaneously an obvious example of this is a database consisting of physical coordinates of a certain selected group of customers for marketing purposes through cellular phones, laptops and several handheld or vehicle-based devices connected online. These practical needs call for answer to a plethora of questions already outlined above: Has thecollection and securing the validity of this information required an essential input? What qualifies as a quantitatively or qualitatively significant investment? 
According to the Directive, the database comprises works, information and other independent materials, which are arranged in systematic or methodical way andare individually accessible by electronic or other means. Under what circumstances then, are the materials regarded as arranged in systematic or methodical way? Only when the protected elements of a database are established, the question concerning the scope of protection becomes acute. In digital context, the traditional notions of reproduction and making available to the public of digital materials seem to fit ill or lead into interpretations that are at variance with analogous domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which the commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere After the expiry of the implementation period for the European Community Directive on legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislations of the current twenty-five Member States within the European Union. On one hand, these fundamental questions readily imply that the problemsrelated to correct construction of the Directive underlying the domestic legislation transpire the national boundaries. On the other hand, the disputes arisingon account of the implementation and interpretation of the Directive on the European level attract significance domestically. Consequently, the guidelines on correct interpretation of the Directive importing the practical, business-oriented solutions may well have application on European level. This underlines the exigency for a thorough analysis on the implications of the meaning and potential scope of Database protection in Finland and the European Union. 
This position hasto be contrasted with the larger, international sphere, which in early 2005 does differ markedly from European Union stance, directly having a negative effect on international trade particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, not at least yet having aSui Generis database regime or its kin, while both the political and academic discourse on the matter abounds. 5. The objectives of the study The above mentioned background with its several open issues calls for the detailed study of thefollowing questions: -What is a database-at-law and when is a database protected by intellectual property rights, particularly by the European database regime?What is the international situation? -How is a database protected and what is its relation with other intellectual property regimes, particularly in the Digital context? -The opportunities and threats provided by current protection to creators, users and the society as a whole, including the commercial and cultural implications? -The difficult question on relation of the Database protection and protection of factual information as such. 6. Dsiposition The Study, in purporting to analyse and cast light on the questions above, is divided into three mainparts. The first part has the purpose of introducing the political and rationalbackground and subsequent legislative evolution path of the European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information andcommunication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of the database protection, reviewing both itscopyright and Sui Generis right facets in detail together with the emergent application of the machinery in real-life societal and particularly commercial context. 
Furthermore, a general outline of copyright, relevant in the context of copyright-protected databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinise its implications further into the future, with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The Goliath grouper, Epinephelus itajara, a large-bodied (~2.5 m TL, >400 kg) and critically endangered fish (Epinephelidae), is highly vulnerable to overfishing. Although protected from fishing in many countries, its exploitation in Mexico is unregulated, a situation that puts its populations at risk. Fishery records of E. itajara are scarce, which prevents determination of its fishery status. This work aimed to elucidate the E. itajara fishery in the northern Yucatan Peninsula by 1) analyzing available catch records and 2) interviewing veteran fishermen (local ecological knowledge) from two traditional landing sites: Dzilam de Bravo and Puerto Progreso. Historic fishery records from two fishing cooperatives were analyzed in order to elucidate the current situation and offer viable alternatives for conservation and management. Catches have decreased severely. Local knowledge obtained from fishermen represented a very important source of information for reconstructing the fishery history of this species. Conservation measures that incorporate regional and international regulations on critically endangered fish species are suggested.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Since the spring of 2001, a new disease known as "torrao" or "cribado" has been occurring in Spain. The symptoms typically shown by affected plants are a necrosis at the base of the leaflet that evolves into shot-hole ("cribado"); longitudinal spots, sometimes hardened, appear on the petioles and can curl the leaflets; and the fruits show necrotic spots and deformations that finally crack them, rendering them commercially unviable. Surveys carried out since its appearance have found the highest incidence of the disease in the Murcia region, followed by the Canary Islands and, to a lesser extent, Almería and Alicante. The results of the analyses performed on the 369 samples collected show that 67% of the samples analysed were positive for Pepino mosaic virus (PepMV). In the transmission assays, the symptoms of the disease were reproduced only by grafting, and only in two cases; in the remainder, the inoculated and grafted plants showed only symptoms typical of PepMV, which the analyses confirmed. In view of these results, a new diagnostic method was designed that has allowed 89% of the samples analysed to be characterised as the Chilean 2 isolate of PepMV, recently published in GenBank (Accession number DQ000985). Accordingly, this isolate could be one of the agents involved in the development of the syndrome, together with other factors yet to be determined.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Analyzing the type and frequency of patient-specific mutations that give rise to Duchenne muscular dystrophy (DMD) is an invaluable tool for diagnostics, basic scientific research, trial planning, and improved clinical care. Locus-specific databases allow for the collection, organization, storage, and analysis of genetic variants of disease. Here, we describe the development and analysis of the TREAT-NMD DMD Global database (http://umd.be/TREAT_DMD/). We analyzed genetic data for 7,149 DMD mutations held within the database. A total of 5,682 large mutations were observed (80% of total mutations), of which 4,894 (86%) were deletions (1 exon or larger) and 784 (14%) were duplications (1 exon or larger). There were 1,445 small mutations (smaller than 1 exon; 20% of all mutations), of which 358 (25%) were small deletions, 132 (9%) were small insertions, and 199 (14%) affected splice sites. Point mutations totalled 756 (52% of small mutations), with 726 (50%) nonsense mutations and 30 (2%) missense mutations. Finally, 22 (0.3%) mid-intronic mutations were observed. In addition, mutations were identified within the database that would potentially benefit from novel genetic therapies for DMD, including stop codon read-through therapies (10% of total mutations) and exon skipping therapy (80% of deletions and 55% of total mutations).
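The mutation tallies above can be cross-checked with a few lines of arithmetic. The sketch below is purely illustrative (the helper `pct` is a hypothetical name, and the counts are copied from the abstract, not queried from the database); it recomputes the quoted percentages, noting that 5,682/7,149 rounds to 79%, which the abstract reports as roughly 80%.

```python
def pct(part, whole):
    """Percentage of `part` within `whole`, rounded to the nearest integer."""
    return round(100 * part / whole)

# Counts reported for the TREAT-NMD DMD Global database (from the abstract).
total_mutations    = 7149
large_mutations    = 5682  # 1 exon or larger
small_mutations    = 1445  # smaller than 1 exon
large_deletions    = 4894
large_duplications = 784
point_mutations    = 756

print(pct(large_mutations, total_mutations))     # 79 (quoted as ~80%)
print(pct(large_deletions, large_mutations))     # 86
print(pct(large_duplications, large_mutations))  # 14
print(pct(small_mutations, total_mutations))     # 20
print(pct(point_mutations, small_mutations))     # 52
```

Recomputing the ratios this way is a quick sanity check that the per-category percentages in such locus-specific database reports are internally consistent with the raw counts.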

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The aim of this research is to identify the types of contributions produced in the collaborative virtual environment Knowledge Forum and to verify whether computer-supported collaborative learning (CSCL) is taking place. The contributions to the 30 different forums were analysed and categorised using a coding scheme based on the scaffolds that the environment provides. The results show that, overall, the 308 university students contribute new information and express opinions, but there is a scarcity of messages with differing opinions that lead to discussion and to exchanges of distinct points of view.