981 results for Trophic web structure


Relevance: 30.00%

Abstract:

An effective K-12 science education is essential for success in later phases of the curriculum, and e-Infrastructures for education provide new opportunities to enhance it. This paper presents ViSH Viewer, an innovative web tool for consuming educational content that aims to facilitate access to e-Science infrastructures through a next-generation learning object called the "Virtual Excursion". Virtual Excursions provide a new way to explore science in class by taking advantage of e-Infrastructure resources and integrating them with other educational content, resulting in a reusable, interoperable and granular learning object. To better illustrate how the tool enables teachers and students to explore e-Science in an engaging way, we also present three example Virtual Excursions. The paper describes the design and development of the tool itself, as well as the concept, structure and metadata of the new learning object.

Relevance: 30.00%

Abstract:

In the Web 2.0, end users without a strong programming background can interact and build web applications from components published on the Internet by a wide variety of service providers. Selecting these components requires users to analyse their properties exhaustively with respect to quality. This project presents two hierarchically structured quality models based on an analysis of the emerging Web 2.0: one for web components as stand-alone elements and another for components used in mashup applications. In addition, it defines a framework for measuring and annotating quality at the different levels of the models. These tools are intended to be a useful instrument for developers and end users of components and mashups.
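
The hierarchical quality models described above lend themselves to a weighted-aggregation reading: leaf measures carry scores, and each level rolls its children up into a single value. The Python sketch below illustrates that idea only; the node names, weights and scoring scale are illustrative assumptions, not the models defined in the project.

```python
from dataclasses import dataclass, field

@dataclass
class QualityNode:
    """A node in a hierarchical quality model: either a leaf measure
    (with a score in [0, 1]) or an aggregate of weighted children."""
    name: str
    weight: float = 1.0          # relative weight among siblings
    score: float | None = None   # set only on leaf measures
    children: list["QualityNode"] = field(default_factory=list)

    def value(self) -> float:
        if not self.children:    # leaf: return its measured score
            return self.score or 0.0
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.value() for c in self.children) / total

# Illustrative model: top-level quality split into two sub-characteristics.
model = QualityNode("component quality", children=[
    QualityNode("reliability", weight=2.0, score=0.8),
    QualityNode("usability", weight=1.0, children=[
        QualityNode("documentation", score=0.6),
        QualityNode("learnability", score=0.9),
    ]),
])
print(f"overall quality: {model.value():.2f}")   # 0.78 for these toy scores
```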

Relevance: 30.00%

Abstract:

This final degree project covers the development of a web application for managing personal expenses, from inception to full operation. Such applications form a growing market segment, so competition among them is intense, and the design of the application developed here has therefore been carefully considered. Each part of the application provides distinctive functionality for the user: adding one's own expenses and monthly income, charting major expenses, obtaining advice from an external source, and so on. These distinctive features, together with more general ones such as a graphic design with a broad colour palette, make the application easier and more intuitive to use. Notably, to optimise its use, the application is responsive: it adapts its interface to the screen size of the device from which it is accessed. The application is built with MEAN.JS, one of the newest and most talked-about technology stacks on the market. Applying this technology poses varied challenges, from structuring the project folders and the whole backend to designing the frontend. Finally, once development and deployment are complete, possible improvements are analysed in order to refine the application further.

Relevance: 30.00%

Abstract:

The database reported here is derived using the Combinatorial Extension (CE) algorithm, which compares pairs of protein polypeptide chains and provides a list of structurally similar proteins along with their structure alignments. Using CE, structure–structure alignments can provide insights into biological function. When a protein of known function is shown to be structurally similar to a protein of unknown function, a relationship might be inferred; a relationship not necessarily detectable from sequence comparison alone. Establishing structure–structure relationships in this way is of great importance as we enter an era of structural genomics, where an increasing number of structures of unknown function are likely to be determined. The CE database is thus an example of a useful tool for annotating protein structures of unknown function. Comparisons can be performed on the complete PDB or on a structurally representative subset of proteins. The source protein(s) can be from the PDB (updated monthly) or uploaded by the user. CE provides sequence alignments resulting from structural alignments and Cartesian coordinates for the aligned structures, which may be analyzed using the supplied Compare3D Java applet or downloaded for further local analysis. Searches can be run from the CE web site, http://cl.sdsc.edu/ce.html, or the database and software can be downloaded from the site for local use.
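
CE's combinatorial extension search itself is beyond the scope of this summary, but the core operation behind any structure alignment score is rigid-body superposition of aligned coordinates. The sketch below shows the standard Kabsch algorithm for superposing two sets of aligned Cα coordinates and reporting the RMSD; it is a generic illustration, not CE's algorithm, and the synthetic point sets are assumptions.

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (N, 3) sets of aligned coordinates after
    optimal rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                 # centre both coordinate sets
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)      # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt    # optimal rotation, no reflection
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# A rotated, translated copy of a point set superposes with ~zero RMSD.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
B = A @ Rz.T + 5.0
print(kabsch_rmsd(A, B))   # effectively zero
```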

Relevance: 30.00%

Abstract:

The RESID Database is a comprehensive collection of annotations and structures for protein post-translational modifications, including N-terminal, C-terminal and peptide chain cross-link modifications. The RESID Database includes systematic and frequently observed alternate names, Chemical Abstracts Service registry numbers, atomic formulas and weights, enzyme activities, taxonomic range, keywords, literature citations with database cross-references, structural diagrams and molecular models. The NRL-3D Sequence–Structure Database is derived from the three-dimensional structures of proteins deposited with the Research Collaboratory for Structural Bioinformatics Protein Data Bank. The NRL-3D Database includes standardized and frequently observed alternate names, sources, keywords, literature citations, experimental conditions and searchable sequences from model coordinates. These databases are freely accessible through the National Cancer Institute–Frederick Advanced Biomedical Computing Center at these web sites: http://www.ncifcrf.gov/RESID and http://www.ncifcrf.gov/NRL-3D; or at these National Biomedical Research Foundation Protein Information Resource web sites: http://pir.georgetown.edu/pirwww/dbinfo/resid.html and http://pir.georgetown.edu/pirwww/dbinfo/nrl3d.html.

Relevance: 30.00%

Abstract:

As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^(-b), for attribute value V and constant exponent b), with a few folds having large values and most having small values.
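
The closing observation is that attribute frequencies fall off roughly as V^(-b). As a hedged illustration of how such an exponent might be estimated, the sketch below fits a straight line to a log-log rank/value plot; this is a quick illustrative estimate, not the analysis behind PartsList.

```python
import numpy as np

def powerlaw_exponent(values: np.ndarray) -> float:
    """Estimate the exponent b of an approximate V^(-b) fall-off by
    least squares on the log-log rank/value plot."""
    v = np.sort(values[values > 0])[::-1]   # attribute values, descending
    ranks = np.arange(1, len(v) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)
    return -slope

# Synthetic attribute: ~420 folds whose values decay as rank^(-1.5)
ranks = np.arange(1, 421)
values = 1.0 / ranks ** 1.5
print(powerlaw_exponent(values))   # ≈ 1.5
```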

Relevance: 30.00%

Abstract:

Recent studies have shown the utility of δ15N for modelling trophic structure and contaminant bioaccumulation in aquatic food webs. However, cross-system comparisons of δ15N can be complicated by differences in δ15N at the base of the food chain. Such baseline variation in δ15N is difficult to resolve using plankton because of the large temporal variability in the δ15N of small organisms, which have fast nitrogen turnover. Comparisons using large primary consumers, whose slower nitrogen turnover gives them stable tissue isotopic signatures, show that δ15N increases markedly with the human population density in the lake watershed. This shift in δ15N likely reflects the high δ15N of human sewage. Correcting for this baseline variation in δ15N, we report that, contrary to expectations based on previous food-web analysis, the food chains leading up to fish varied by only about one trophic level among the 40 lakes studied. Our results also suggest that the δ15N signatures of nitrogen at the base of the food chain will provide a useful tool in the assessment of anthropogenic nutrient inputs.
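
The baseline correction argued for here is commonly expressed in the food-web literature as TP = λ + (δ15N_consumer − δ15N_baseline)/Δn, where λ is the trophic level of the baseline organism (2 for a primary consumer such as a mussel) and Δn is the per-step enrichment, often taken as about 3.4‰. A minimal sketch with illustrative numbers, not values from the study:

```python
def trophic_position(d15n_consumer: float,
                     d15n_baseline: float,
                     lambda_base: float = 2.0,   # trophic level of the baseline organism
                     enrichment: float = 3.4) -> float:
    """Baseline-corrected trophic position from nitrogen isotope ratios,
    using the standard TP = lambda + (d15N_consumer - d15N_base) / enrichment
    form; 3.4 per-mil per trophic step is a commonly used enrichment value."""
    return lambda_base + (d15n_consumer - d15n_baseline) / enrichment

# A fish at 12.0 per-mil over a primary-consumer baseline of 5.2 per-mil:
print(trophic_position(12.0, 5.2))   # 2 + 6.8/3.4 = 4.0
```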

Relevance: 30.00%

Abstract:

The development of a strong, active granular sludge bed is necessary for optimal operation of upflow anaerobic sludge blanket (UASB) reactors. The microbial and mechanical structure of the granules may strongly influence desirable properties such as growth rate, settling velocity and shear strength. Theories of granule microbial structure based on the relative kinetics of substrate degradation have been proposed, but they contradict some observations from both modelling and microscopic studies. In this paper, the structures of four granule types were examined from full-scale UASB reactors treating wastewater from a cannery, a slaughterhouse and two breweries. Microbial structure was determined by fluorescence in situ hybridisation with 16S rRNA-directed oligonucleotide probes, and superficial structure and microbial density (volume occupied by cells and microbial debris) were assessed using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The granules were also modelled using a distributed-parameter biofilm model with a previously published biochemical model structure, biofilm modelling approach and model parameters. The model results reflected the trophic structures observed, indicating that the structures were possibly determined by kinetics. Of particular interest were simulations of the protein-grown granules, which were predicted to have slow growth rates, low microbial density and no trophic layers, the last two of which were reflected in the microscopic observations. As assessed by modelling, the primary cause of this structure was the particulate nature of the wastewater and the slow rate of particulate hydrolysis, rather than the presence of proteins in the wastewater. Because solids hydrolysis was rate limiting, soluble substrate concentrations were very low (below the Monod half-saturation concentration), which caused the low growth rates.
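
The growth-rate argument at the end rests on Monod kinetics: the specific growth rate is μ = μmax·S/(KS + S), so when the soluble substrate concentration S sits well below the half-saturation constant KS, growth proceeds far below μmax. A minimal sketch with illustrative parameter values, not values from the paper:

```python
def monod_growth_rate(mu_max: float, S: float, K_S: float) -> float:
    """Monod specific growth rate: mu = mu_max * S / (K_S + S).
    When S << K_S the rate is roughly mu_max * S / K_S, far below
    mu_max -- the regime the granule simulations predicted."""
    return mu_max * S / (K_S + S)

# Illustrative numbers only: mu_max = 0.3 /d, K_S = 50 mg/L
print(monod_growth_rate(0.3, 5.0, 50.0))    # S << K_S: ~0.027 /d
print(monod_growth_rate(0.3, 500.0, 50.0))  # S >> K_S: ~0.27 /d
```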

Relevance: 30.00%

Abstract:

A web wrapper extracts data from HTML documents. The accuracy and quality of the information extracted by a web wrapper rely on the structure of the HTML document: if the document changes, the wrapper may or may not continue to function correctly. This paper presents an Adjacency-Weight method that can be used in the web wrapper extraction process or in a wrapper self-maintenance mechanism to validate web wrappers. The algorithm and data structures are illustrated with some intuitive examples.
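
The abstract does not spell out the Adjacency-Weight method itself, so the sketch below only illustrates the general idea of structure-based wrapper validation: fingerprint the parent-child tag adjacencies a wrapper relies on and flag the wrapper for revalidation when the weighted overlap between the old and new documents drops. The class names and the similarity measure are illustrative assumptions, not the paper's algorithm.

```python
from collections import Counter
from html.parser import HTMLParser

class TagAdjacency(HTMLParser):
    """Collect (parent_tag, child_tag) adjacency counts from an HTML document."""
    def __init__(self):
        super().__init__()
        self.stack = []             # open-tag stack while parsing
        self.adjacency = Counter()  # (parent, child) -> count

    def handle_starttag(self, tag, attrs):
        if self.stack:
            self.adjacency[(self.stack[-1], tag)] += 1
        self.stack.append(tag)

    def handle_endtag(self, tag):
        while self.stack and self.stack.pop() != tag:
            pass                    # tolerate unclosed tags

def structure_similarity(html_a: str, html_b: str) -> float:
    """Weighted overlap of tag adjacencies in [0, 1]; a low score suggests
    the page structure drifted and the wrapper should be revalidated."""
    a, b = TagAdjacency(), TagAdjacency()
    a.feed(html_a)
    b.feed(html_b)
    shared = sum((a.adjacency & b.adjacency).values())
    total = max(sum(a.adjacency.values()), sum(b.adjacency.values()), 1)
    return shared / total

old = "<table><tr><td>price</td></tr></table>"
new = "<table><tr><td>price</td><td>tax</td></tr></table>"
print(structure_similarity(old, new))   # ~0.67: one new cell changed the layout
```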

Relevance: 30.00%

Abstract:

Purpose – This study seeks to provide valuable new insight into the timeliness of corporate internet reporting (TCIR) by a sample of Irish-listed companies. Design/methodology/approach – The authors apply an updated version of the TCIR index of Abdelsalam et al., which encompasses 13 criteria used to measure TCIR for a sample of Irish-listed companies. In addition, the authors assess the timeliness with which companies post their annual and interim reports to their web sites. Furthermore, the study examines the influence of board independence and ownership structure on TCIR behaviour. Board composition is measured by the percentage of independent directors, the chairman's dual role and the average tenure of directors; ownership structure is represented by managerial ownership and blockholder ownership. Findings – Irish-listed companies, on average, satisfy only 46 per cent of the criteria in the timeliness index. After controlling for size, audit fees and firm performance, the study provides evidence that TCIR is positively associated with board independence and chief executive officer (CEO) ownership. Furthermore, large companies are found to be faster in posting their annual reports to their web sites. The findings suggest that board composition and ownership structure influence a firm's TCIR behaviour, presumably in response to the information asymmetry between management and investors and the resulting agency costs. Practical implications – The findings highlight the need for improvement in TCIR by Irish-listed companies in many areas, especially the regular updating of information provided on their web sites. Originality/value – This study represents one of the first comprehensive examinations of the important dimension of TCIR in Irish-listed companies.
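
The index arithmetic is straightforward: a company's TCIR score is the fraction of the 13 timeliness criteria it satisfies, so the reported 46 per cent average corresponds to roughly six criteria met. A minimal sketch; the criterion names are placeholders, since the actual items are defined in the Abdelsalam et al. index rather than in this abstract.

```python
def tcir_score(satisfied: dict[str, bool]) -> float:
    """Fraction of timeliness criteria met (the study's 46% average
    corresponds to roughly 6 of 13 criteria satisfied)."""
    return sum(satisfied.values()) / len(satisfied)

# Placeholder criteria: a company meeting the first 6 of 13 items.
sample = {f"criterion_{i}": (i <= 6) for i in range(1, 14)}
print(f"{tcir_score(sample):.0%}")   # 6/13 ≈ 46%
```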

Relevance: 30.00%

Abstract:

Modern procurement is shifting from paper-based, people-intensive buying systems toward electronic purchase procedures that rely on Internet communications and Web-enhanced buying tools. The paper develops a typology of the e-commerce tools that have come to characterize cutting-edge industrial procurement, organizing the e-commerce aspects of purchasing into communication and transaction tools that encompass both internal and external buying activities. It further develops a model of the impact of e-commerce on the structure and processes of an organization's buying center, and analyzes the impact of the changing buying center on procurement outcomes in terms of efficiency and effectiveness. Finally, implications for business-to-business marketers and researchers are discussed.

Relevance: 30.00%

Abstract:

Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters, because there will be so many even in moderately sized structures. This paper describes how the parameters can instead be obtained from data, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners, but the tree has several thousand nodes, all requiring an estimate of their relative influence on the assessment process. The method described in the paper shows how these estimates can be obtained from about 200 cases instead. It greatly reduces the experts' elicitation burden and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of node siblings form part of the parameter space.
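
The estimation method of [1] is not reproduced in the abstract, so the following sketch only illustrates what learning relative sibling weights from cases can look like: weight each sibling by how strongly its score tracks the overall risk judgement across assessed cases, then normalize. This is an illustrative stand-in under stated assumptions, not GRiST's actual procedure.

```python
import numpy as np

def sibling_weights(child_scores: np.ndarray, parent_risk: np.ndarray) -> np.ndarray:
    """Illustrative weighting (not the method of [1]): weight each sibling
    node by the absolute correlation of its score with the overall risk
    judgement across cases, normalized to sum to 1."""
    corrs = np.array([abs(np.corrcoef(child_scores[:, j], parent_risk)[0, 1])
                      for j in range(child_scores.shape[1])])
    return corrs / corrs.sum()

# ~200 synthetic cases, 3 sibling nodes under one parent in the tree
rng = np.random.default_rng(1)
X = rng.random((200, 3))
risk = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.random(200)
print(sibling_weights(X, risk))   # approximately recovers the 0.6/0.3/0.1 ordering
```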

Relevance: 30.00%

Abstract:

Web document cluster analysis plays an important role in information retrieval by organizing large numbers of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two levels of knowledge granularity (document and term) but ignores the bridging paragraph granularity. This two-level granularity may lead to unsatisfactory clustering results with "false correlation". To deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To handle the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and web document clusters of higher quality can thus be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-score.
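
HRMM's five-layer representation is beyond an abstract-level sketch, but the VSM baseline it is compared against is standard: documents become TF-IDF term vectors and a clustering algorithm groups them. A minimal scikit-learn illustration with toy documents and an illustrative cluster count:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "web document clustering organizes search results",
    "granular computing captures structural knowledge",
    "tf-idf weights terms in the vector space model",
    "paragraph granularity bridges documents and terms",
]

# VSM baseline: documents become TF-IDF term vectors, then k-means groups them.
vectors = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # cluster id per document
```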

Relevance: 30.00%

Abstract:

Learning user interests from online social networks helps to better understand user behaviours and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks and can leverage graph structure information extracted from data. Previous graph regularization methods operate under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse and dynamic social networks should instead be based on a link structure assumption, under which node similarities are evaluated from local link structures rather than explicit links between two nodes. We propose a regularization framework based on a relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our proposed method outperforms several competitive baselines in learning user interests over a set of predefined topics. It also gives superior results compared to the baselines on retweet prediction and topical authority identification.
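
The paper's regularization framework is not specified in the abstract, so the sketch below shows only the generic graph-regularization idea it builds on: iteratively smooth per-user topic-interest scores over a normalized affinity graph so that connected users converge to similar estimates (the classic F ← αSF + (1−α)F0 propagation). The graph, scores and parameters are illustrative assumptions, not the proposed relation-bipartite-graph method.

```python
import numpy as np

def smooth_interests(W: np.ndarray, F0: np.ndarray,
                     alpha: float = 0.5, iters: int = 50) -> np.ndarray:
    """Generic graph-regularized smoothing (illustrative, not the paper's
    framework): iterate F <- alpha * S @ F + (1 - alpha) * F0, where S is
    the symmetrically normalized affinity matrix. Connected users end up
    with similar topic-interest scores."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv @ W @ Dinv
    F = F0.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * F0
    return F

# 3 users x 2 topics; users 0 and 1 are linked (e.g. via retweet relations)
W = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
F0 = np.array([[1., 0.], [0., 0.], [0., 1.]])
print(smooth_interests(W, F0))   # user 1 inherits user 0's topic interest
```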