917 results for "Creation of the information"


Relevância: 100.00%

Resumo:

Objective: The present study offers a novel methodological contribution to the study of the configuration and dynamics of research groups, through a comparative perspective on funded projects (inputs) and publication co-authorships (outputs). Method: A combination of bibliometric techniques and social network analysis was applied to a case study: the Departamento de Bibliotecología (DHUBI), Universidad Nacional de La Plata, Argentina, for the period 2000-2009. The results were interpreted statistically, and staff members of the department were interviewed. Results: The method makes it possible to distinguish groups, identify their members and reflect group make-up through an analytical strategy that involves the categorization of actors and the interdisciplinary and national or international projection of the networks they configure. Integrating these two aspects (input and output) at different points over the analyzed period supports inferences about group profiles and the roles of actors. Conclusions: The methodology presented is conducive to micro-level interpretations in a given area of study, regarding individual researchers or research groups. Because the comparative input-output analysis broadens the information base and makes it possible to follow individual and group trends over time, it may prove very useful for the management, promotion and evaluation of science.
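
The group-detection step described here can be sketched with a minimal co-authorship graph: each paper induces edges between its authors, and connected components serve as candidate research groups. This is a toy sketch only (the study combines co-authorship output with funded-project input); all author names and records below are hypothetical.

```python
from collections import defaultdict

# Toy co-authorship records (author names are hypothetical): each publication
# lists its authors; co-authoring a paper induces an undirected edge.
publications = [
    ["Miguel", "Sandra"],
    ["Sandra", "Claudia"],
    ["Pablo", "Ines"],
]

graph = defaultdict(set)
for authors in publications:
    for i, a in enumerate(authors):
        graph[a]                      # ensure single-author papers appear as nodes
        for b in authors[i + 1:]:
            graph[a].add(b)
            graph[b].add(a)

def coauthorship_groups(graph):
    """Connected components of the co-authorship graph = candidate groups."""
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        groups.append(comp)
    return groups

groups = coauthorship_groups(dict(graph))
```

Running this over several time slices of the analyzed period would show how group membership evolves, which is the comparative angle the abstract describes.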

Relevância: 100.00%

Resumo:

Chromatographic fractionation of the cytotoxic n-hexane extract of Hopea odorata Roxb. leaves led to the isolation of eight lupane triterpenes, constituting the first report of lupane-type triterpenes from this plant source. Furthermore, 3,30-dioxolup-20(29)-en-28-oic acid (6) was isolated for the first time from a natural source. The structures were determined on the basis of spectroscopic methods, including 2D NMR analysis, and by comparison of the spectral data with literature values. Complete assignments of the 1H and 13C NMR data were achieved for all compounds. Finally, the cytotoxic activities of the isolated compounds against four human cell lines (PC3, MDA-MB-231, HT-29 and HCT116) were also reported.

Relevância: 100.00%

Resumo:

This article analyzes how banks determine levels of information production when they compete imperfectly and information is asymmetric between borrowers and banks. Specifically, the study concentrates on the information production activities of banks in a duopoly, where they simultaneously set the intensity of pre-loan screening and interest rates. The preliminary model illustrates that, owing to strategic complementarities between banks, banking competition can result in an inferior equilibrium among multiple equilibria and in insufficient information production. Policymakers must take into account the possible adverse effects of competition-enhancing policies on information production activities.
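
The multiple-equilibria result can be illustrated with a toy symmetric screening game (the payoff numbers are invented, not taken from the paper's model): with strategic complementarities, both (low, low) and (high, high) screening intensities are Nash equilibria, and the low-screening one is Pareto-inferior.

```python
# Toy symmetric screening game for two banks; payoff numbers are invented.
# u[(mine, rival)] = my payoff; screening pays off more when the rival also
# screens (strategic complementarity).
strategies = ["low", "high"]
u = {("low", "low"): 2, ("low", "high"): 3,
     ("high", "low"): 1, ("high", "high"): 4}

def is_nash(s1, s2):
    """No bank can gain by unilaterally changing its screening intensity."""
    return (all(u[(s1, s2)] >= u[(d, s2)] for d in strategies)
            and all(u[(s2, s1)] >= u[(d, s1)] for d in strategies))

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)]
# Both (low, low) and (high, high) survive; (low, low) is Pareto-inferior,
# which is the "insufficient information production" outcome of the abstract.
```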

Relevância: 100.00%

Resumo:

This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, the temporal and the spatiotemporal components. We intend to define a set of methods, rules and restrictions for the adequate integration of these components into the primary elements of GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to the time and meaning relationships that could eventually be established.
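
As a rough illustration of the proposed mark-up layers, a feature could carry its geometry together with a temporal structure and semantic descriptors. The element names below (feature, geometry, validTime, semantics, descriptor) are hypothetical, not an existing OGC/GML schema; they merely mimic the layered idea.

```python
import xml.etree.ElementTree as ET

# Illustrative only: hypothetical element names, not a real GI mark-up schema.
feature = ET.Element("feature", id="road-42")

geom = ET.SubElement(feature, "geometry", type="LineString")
geom.text = "0 0, 10 5, 20 5"

# Temporal structure attached to the feature, independent of its geometry.
time = ET.SubElement(feature, "validTime")
ET.SubElement(time, "begin").text = "2000-01-01"
ET.SubElement(time, "end").text = "2009-12-31"

# Semantic descriptors clarifying the nature and temporality of the feature.
sem = ET.SubElement(feature, "semantics")
ET.SubElement(sem, "descriptor", name="theme").text = "transport"
ET.SubElement(sem, "descriptor", name="dynamics").text = "rerouted"

xml_text = ET.tostring(feature, encoding="unicode")
```

With such a document, time and meaning relationships between features could be queried directly from the mark-up, which is the interaction capability the poster aims at.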


Relevância: 100.00%

Resumo:

We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module that applies different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and long-term information, respectively) play a complementary role, with their usefulness evolving as the dialogue progresses.

Relevância: 100.00%

Resumo:

What is the economic value of personal information? This entry considers:
- how the exchange of information benefits society and the economy;
- how companies create value from personal information (by providing new services or serving an existing need better);
- the mechanisms by which personal information exchange creates economic value;
- how the level of privacy protection influences value creation in different markets.

Relevância: 100.00%

Resumo:

Neuronal growth is a complex process involving many intra- and extracellular mechanisms that act together in the development of the nervous system. More particularly, early neocortical development involves the creation of a multilayered structure built by neuronal growth (driven by axonal or dendritic guidance cues) as well as by cell migration. The mechanisms underlying such structural lamination involve not only important biochemical changes at the intracellular level, through axonal microtubule (de)polymerization and growth cone advance, but also the directly dependent stress/stretch coupling mechanisms driving them. Efforts have recently focused on modeling approaches that account for the effect of mechanical tension or compression on axonal growth and subsequent soma migration. However, the reciprocal influence of the biochemical structural evolution on the mechanical properties has been mostly disregarded. We thus propose a new model aimed at providing the spatially dependent mechanical properties of the axon during its growth. Our in-house finite difference solver Neurite is used to describe guanosine triphosphate (GTP) transport through the axon, its dephosphorylation into guanosine diphosphate (GDP), and thus microtubule polymerization. The model is calibrated against experimental results, and the tensile and bending stiffnesses are ultimately inferred from the spatially dependent microtubule occupancy. Such additional information is believed to be of particular relevance in the vicinity of the growth cone, where biomechanical mechanisms drive axonal growth and pathfinding. More specifically, the confirmation of a lower stiffness in the distal axon helps explain the controversy associated with the tensile role of the growth cone.
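
A minimal sketch of the kind of finite-difference computation described (this is not the authors' Neurite solver, and all parameter values are illustrative): GTP diffuses from the soma, is lost to dephosphorylation, and the resulting profile is read as a proxy for microtubule occupancy, with stiffness taken proportional to it. The profile comes out lower in the distal axon, matching the abstract's conclusion.

```python
# 1-D explicit finite-difference sketch; illustrative parameters only.
# GTP diffuses from the soma and dephosphorylates at rate k; the profile
# stands in for microtubule occupancy, and stiffness is taken proportional
# to it, up to an (omitted) modulus prefactor.
n, dx, dt = 50, 1.0, 0.1   # grid points, spacing, time step (dt*D/dx**2 = 0.1, stable)
D, k = 1.0, 0.05           # diffusion coefficient, dephosphorylation rate

gtp = [0.0] * n
gtp[0] = 1.0               # soma end held at a fixed concentration
for _ in range(5000):
    new = gtp[:]
    for i in range(1, n - 1):
        lap = (gtp[i - 1] - 2 * gtp[i] + gtp[i + 1]) / dx**2
        new[i] = gtp[i] + dt * (D * lap - k * gtp[i])
    new[0] = 1.0           # Dirichlet condition at the soma
    new[-1] = new[-2]      # no-flux condition at the distal (growth cone) end
    gtp = new

stiffness = list(gtp)      # E(x) proportional to occupancy: lower distally
```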

Relevância: 100.00%

Resumo:

Spanish wheat (Triticum spp.) landraces have considerable polymorphism, containing many unique alleles relative to other collections. A core collection is a favored approach for breeders to efficiently explore novel variation and enhance the use of germplasm. In this study, the Spanish durum wheat (Triticum turgidum L.) core collection (CC) was created using a population structure-based method, grouping accessions by subspecies and allocating the number of genotypes among populations according to the diversity of simple sequence repeat (SSR) markers. The CC of 94 genotypes was established, which accounted for 17% of the accessions in the entire collection. An alternative core collection (CH), with the same number of genotypes per subspecies and maximizing the coverage of SSR alleles, was assembled with the Core Hunter software. The quality of both core collections was compared with a random core collection and evaluated using geographic, agromorphological, and molecular marker data not previously used in the selection of genotypes. Both core collections had a high genetic representativeness, which validated their sampling strategies. Geographic and agromorphological variation, phenotypic correlations, and gliadin alleles of the original collection were more accurately depicted by the CC. Diversity arrays technology (DArT) markers revealed that the CC included less similar genotypes than the CH. Although more SSR alleles were retained by the CH (94%) than by the CC (91%), the results showed that the CC was better than the CH for breeding purposes.
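
The allele-coverage idea behind the CH can be sketched as a greedy selection; Core Hunter itself uses a more sophisticated search, and the genotypes and markers below are hypothetical.

```python
# Hypothetical genotypes scored for SSR alleles; each genotype is a set of
# (marker, allele) pairs. Greedy sketch of allele-coverage core selection.
genotypes = {
    "g1": {("ssr1", "A"), ("ssr2", "C")},
    "g2": {("ssr1", "B"), ("ssr2", "C")},
    "g3": {("ssr1", "A"), ("ssr2", "D")},
    "g4": {("ssr1", "B"), ("ssr2", "D")},
}

def greedy_core(genotypes, size):
    """Pick genotypes one at a time, each maximizing newly covered alleles."""
    core, covered = [], set()
    for _ in range(size):
        best = max((g for g in genotypes if g not in core),
                   key=lambda g: len(genotypes[g] - covered))
        core.append(best)
        covered |= genotypes[best]
    return core, covered

core, covered = greedy_core(genotypes, 2)   # 2 genotypes cover all 4 alleles
```

Maximizing allele coverage is exactly why the CH retains more SSR alleles than the CC in the study, even though coverage alone does not guarantee the best representation of phenotypic structure.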

Relevância: 100.00%

Resumo:

This document is the result of a web development process to create a tool that will allow Cracow University of Technology to consult, create and manage timetables. The technologies chosen for this purpose are Apache Tomcat Server, MySQL Community Server, the JDBC driver, and Java Servlets and JSPs on the server side. The client side relies on JavaScript, jQuery, AJAX and CSS to provide the dynamic behavior. The document justifies the choice of these technologies and explains some development tools that help in the integration and development of all these elements: specifically, NetBeans IDE and MySQL Workbench have been used. After explaining all the elements involved in the development of the web application, the architecture and the code developed are explained through UML diagrams. Some implementation details related to security are explained in more depth through sequence diagrams. As the source code of the application is provided, an installation manual has been developed to run the project. In addition, as the platform is intended to be a beta that will grow, some unimplemented ideas for future development are also presented. Finally, some annexes with important files and scripts related to the initialization of the platform are attached. This project started from an existing tool that needed to be expanded. The main purpose of the project has been to set the roots for a whole new platform that will replace the existing one. To this end, a deep inspection of existing web technologies was needed: a web server and a SQL database had to be chosen. Although there were many alternatives, Java technology was finally selected for the server because of the big community behind it, the ease of modelling the language through UML diagrams and the fact that it is free, open software. Apache Tomcat is the open source server that can use Java Servlet and JSP technology.
Regarding the SQL database, MySQL Community Server is the most popular open-source SQL server, with a big community behind it and quite a lot of tools to manage the server. JDBC is the driver needed to connect Java and MySQL. Once we chose the technologies that would be part of the platform, the development process started. After a detailed explanation of the development environment installation, we used UML use case diagrams to set the main tasks of the platform; UML class diagrams served to establish the relations between the classes generated; the architecture of the platform was represented through UML deployment diagrams; and Enhanced entity-relationship (EER) models were used to define the tables of the database and their relationships. Apart from the previous diagrams, some implementation issues were explained to give a better understanding of the developed code; UML sequence diagrams helped to explain this. Once the whole platform was properly defined and developed, the performance of the application was shown: it has been proved that, in the current state of the code, the platform covers the use cases that were set as the main target. Nevertheless, some requisites needed for the proper working of the platform have been specified. As the project is meant to grow, some ideas that could not be added to this beta have been described so that they are not missed in future development. Finally, some annexes containing important configuration issues for the platform have been added after proper explanation, as well as an installation guide that will let a new developer get the project ready. In addition to this document, some other files related to the project are provided: - Javadoc. The Javadoc containing the information of every Java class created, necessary for a better understanding of the source code. - database_model.mwb. This file contains the model of the database for MySQL Workbench.
This model allows, among other things, generating the MySQL script for the creation of the tables. - ScheduleManager.war. The WAR file that allows loading the developed application into Tomcat Server without using NetBeans. - ScheduleManager.zip. The source code exported from the NetBeans project, containing all the Java packages, JSPs, JavaScript files and CSS files that are part of the platform. - config.properties. The configuration file used to get the names and credentials for the database, explained in Annex II (Example of config.properties file). - db_init_script.sql. The SQL script that initializes the database, explained in Annex III (SQL statements for MySQL initialization).

Relevância: 100.00%

Resumo:

It has been suggested that different pathways through the brain are followed depending on the type of information being processed. Although it is now known that there is a continuous exchange of information between the two hemispheres, language is considered to be processed by the left hemisphere, where Broca's and Wernicke's areas are located. On the other hand, music is thought to be processed mainly by the right hemisphere. According to Sininger, Y.S. & Cone-Wesson, B. (2004), there is a similar but contralateral specialization of the human ears, due to the fact that auditory pathways cross over at the brainstem. A previous study showed an effect of musical imagery on spontaneous otoacoustic emissions (SOAEs) (Perez-Acosta and Ramos-Amezquita, 2006), providing evidence of an efferent influence from the auditory cortex on the basilar membrane. Based on these results, the present work is a comparative study between the left and right ears of a population of eight musicians who presented SOAEs. A familiar musical tune was chosen, and the subjects were trained in the task of evoking it after having heard it. Samples of ear-canal signals were obtained and processed to extract frequency and amplitude data on the SOAEs. This procedure was carried out before, during and after the musical image creation task. The results were then analyzed to compare the SOAE responses of the left and right ears. A clear asymmetry in the SOAE responses to musical imagery tasks between left and right ears was obtained: significant changes of SOAE amplitude related to musical imagery tasks were observed only in the right ear of the subjects. These results may suggest predominant left-hemisphere activity related to a melodic image creation task.
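
The frequency/amplitude extraction step can be sketched as probing the spectrum of the ear-canal signal at candidate frequencies and keeping the strongest peak. The signal below is synthetic; the sampling rate and tone parameters are illustrative, not values from the study.

```python
import math

# Synthetic ear-canal recording containing one SOAE-like tone.
fs, n = 8000, 2048
f_soae, amp = 1500.0, 0.8
signal = [amp * math.sin(2 * math.pi * f_soae * t / fs) for t in range(n)]

def dft_magnitude(signal, fs, freq):
    """Magnitude of the signal's DFT probed at a single frequency."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * t / fs) for t, x in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / n

# Probe a coarse frequency grid and keep the strongest bin as the SOAE candidate.
grid = [1000.0 + 100.0 * i for i in range(11)]          # 1000 .. 2000 Hz
peak_freq = max(grid, key=lambda f: dft_magnitude(signal, fs, f))
peak_amp = dft_magnitude(signal, fs, peak_freq)
```

Tracking `peak_freq` and `peak_amp` before, during and after the imagery task, separately per ear, is the comparison the study carries out.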

Relevância: 100.00%

Resumo:

Personal data gathering in online markets is currently done on a far larger scale, and much more cheaply and quickly, than ever before. In this scenario, a number of highly relevant companies for whom personal data is the key factor of production have emerged. However, up to now, the corresponding economic analysis has been restricted primarily to a qualitative perspective linked to privacy issues. This paper seeks to shed light on the quantitative perspective, approximating the value of personal information for those companies that base their business model on this new type of asset. In the absence of any systematic research or methodology on the subject, an ad hoc procedure is developed. It starts with an examination of the accounts of a number of key players in online markets. This inspection aims, first, to determine whether the value of personal information databases is somehow reflected in the firms' books, and second, to define performance measures able to capture this value. After discussing the strengths and weaknesses of possible approaches, the method that performs best under several criteria (revenue per data record) is selected. From here, an estimation of the net present value of personal data is derived, together with a brief digression into regional differences in the economic value of personal information.
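
The revenue-per-record measure and the net-present-value step can be sketched in a few lines. Every figure here is an invented placeholder, not a number from any company's accounts, and the growth/discount parameters are illustrative.

```python
# Back-of-the-envelope sketch of "revenue per data record" and its NPV;
# all figures are invented placeholders.
annual_revenue = 1_000_000_000.0    # revenue attributable to data-driven services
records = 200_000_000               # personal data records held
revenue_per_record = annual_revenue / records    # 5.0 per record and year

def npv_per_record(rev, growth, discount, years):
    """Discounted value of the revenue stream one record is expected to generate."""
    return sum(rev * (1 + growth) ** t / (1 + discount) ** t
               for t in range(1, years + 1))

value = npv_per_record(revenue_per_record, growth=0.03, discount=0.10, years=10)
```

Regional differences would enter through different `annual_revenue`-to-`records` ratios per market, which is the digression the paper closes with.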

Relevância: 100.00%

Resumo:

Learning Objects facilitate reuse, leading to cost and time savings as well as to higher-quality educational resources. However, teachers find it difficult to create or to find high-quality Learning Objects, and the ones they find often need to be customized. Teachers can overcome this problem using suitable authoring systems that enable them to create high-quality Learning Objects with little effort. This paper presents an open source online e-Learning authoring tool called ViSH Editor, together with four novel interactive Learning Objects that can be created with it: Flashcards, Virtual Tours, Enriched Videos and Interactive Presentations. All these Learning Objects are created as web applications, which can be accessed via mobile devices. Besides, they can be exported to SCORM, including their metadata in IEEE LOM format. All of them are described in the paper, with an example of each. This approach to creating Learning Objects was validated through two evaluations: a survey among authors and a formal quality evaluation of 209 Learning Objects created with the tool. The results show that ViSH Editor helps educators create high-quality Learning Objects.
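
A SCORM export with IEEE LOM metadata might carry a fragment like the one built below. This is a minimal sketch of the LOM general/title structure only: the namespace URI is an assumption to be checked against the LOM XSD, the Learning Object title is hypothetical, and this is not ViSH Editor's actual output (a real export carries many more metadata categories).

```python
import xml.etree.ElementTree as ET

# Minimal IEEE LOM "general/title" fragment; namespace URI is assumed and the
# title is a hypothetical Learning Object, for illustration only.
LOM_NS = "http://ltsc.ieee.org/xsd/LOM"
ET.register_namespace("", LOM_NS)

lom = ET.Element(f"{{{LOM_NS}}}lom")
general = ET.SubElement(lom, f"{{{LOM_NS}}}general")
title = ET.SubElement(general, f"{{{LOM_NS}}}title")
string = ET.SubElement(title, f"{{{LOM_NS}}}string", language="en")
string.text = "Photosynthesis Flashcards"

lom_xml = ET.tostring(lom, encoding="unicode")
```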

Relevância: 100.00%

Resumo:

Synthetic Aperture Radars (SAR) are systems, designed in the early 1950s, that are capable of obtaining images of the ground using electromagnetic signals. Their activity is therefore not interrupted by adverse meteorological conditions or at night, as happens with optical systems. The name of the system comes from the creation of a synthetic aperture, larger than the real one, by moving the platform that carries the radar (typically a plane or a satellite). It provides the same resolution as a static radar equipped with a larger antenna. As it moves, the radar keeps emitting pulses every 1/PRF seconds (the PRF is the pulse repetition frequency), whose echoes are stored and processed to obtain the image of the ground. To carry out this process, the algorithm needs to assume that the targets in the illuminated scene are not moving. If that is the case, the algorithm is able to extract a focused image from the signal. However, moving targets appear unfocused and/or shifted from their position in the final image. There are applications in which it is especially useful to have information about moving targets (military, rescue tasks, study of water flows, surveillance of maritime routes...). This feature is called Ground Moving Target Indicator (GMTI). That is why the study and development of techniques capable of detecting these targets and placing them correctly in the scene are worthwhile. In this document, some of the principal GMTI algorithms used in SAR systems are detailed. A simulator has been created to test the features of each implemented algorithm in a general situation with moving targets. Finally, Monte Carlo tests have been performed, allowing us to extract conclusions and statistics for each algorithm.
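
One effect the GMTI algorithms must handle can be computed directly: a mover's radial velocity adds a Doppler offset that azimuth compression maps to an along-track displacement of roughly -R·v_r/v_p in the focused image (the classic "train off the tracks" effect). A GMTI step can invert this once the radial velocity is estimated. The parameter values below are illustrative.

```python
# Azimuth displacement of a moving target in a focused SAR image; a GMTI
# algorithm can relocate the target once v_r is estimated. Illustrative values.
def azimuth_shift(R, v_r, v_p):
    """Apparent along-track displacement (m) of a mover in the SAR image."""
    return -R * v_r / v_p

R = 10_000.0    # slant range to the target (m)
v_p = 150.0     # platform velocity (m/s)
v_r = 3.0       # target radial velocity (m/s)

shift = azimuth_shift(R, v_r, v_p)   # -200 m for these numbers
```

A modest 3 m/s radial velocity already displaces the target by 200 m at this range, which is why stationary-scene processing misplaces movers so visibly.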