943 results for SIB Semantic Information Broker OSGI Semantic Web
Abstract:
When they look at Internet policy, EU policymakers seem mesmerised, if not bewitched, by the word ‘neutrality’. Originally confined to the infrastructure layer, today the neutrality rhetoric is being expanded to multi-sided platforms such as search engines and more generally online intermediaries. Policies for search neutrality and platform neutrality are invoked to pursue a variety of policy objectives, encompassing competition, consumer protection, privacy and media pluralism. This paper analyses this emerging debate and comes to a number of conclusions. First, mandating net neutrality at the infrastructure layer might have some merit, but it certainly would not make the Internet neutral. Second, since most of the objectives initially associated with network neutrality cannot be realistically achieved by such a rule, the case for network neutrality legislation would have to stand on different grounds. Third, the fact that the Internet is not neutral is mostly a good thing for end users, who benefit from intermediaries that provide them with a selection of the over-abundant information available on the Web. Fourth, search neutrality and platform neutrality are fundamentally flawed principles that contradict the economics of the Internet. Fifth, neutrality is a very poor and ineffective recipe for media pluralism, and as such should not be invoked as the basis of future media policy. All these conclusions have important consequences for the debate on the future EU policy for the Digital Single Market.
Abstract:
The role of gender differences in the consumption of goods and services is well established in many areas of consumer behaviour and computer use, and yet there has been only limited research into such gender-based differences in the information search behaviour of Internet users. This paper reports the gender-based results of an exploratory study of consumer external information search of the web. The study investigated consumer characteristics, web search behaviour, and the post web search outcomes of purchase decision status and consumer judgements of search usefulness and satisfaction. Gender-based differences are reported in all three areas. Consideration of the results suggests issues that could inhibit the adoption of online purchasing by female web users. The implications of these results are discussed and a future research agenda proposed.
Abstract:
Purpose – This study seeks to provide valuable new insight into the timeliness of corporate internet reporting (TCIR) by a sample of Irish-listed companies. Design/methodology/approach – The authors apply an updated version of Abdelsalam et al.'s TCIR index to assess the timeliness of corporate internet reporting. The index encompasses 13 criteria that are used to measure the TCIR for a sample of Irish-listed companies. In addition, the authors assess the timeliness of posting companies' annual and interim reports to their web sites. Furthermore, the study examines the influence of board independence and ownership structure on TCIR behaviour. Board composition is measured by the percentage of independent directors, the chairman's dual role and the average tenure of directors. Ownership structure is represented by managerial ownership and blockholder ownership. Findings – It is found that Irish-listed companies, on average, satisfy only 46 per cent of the timeliness criteria assessed by the timeliness index. After controlling for size, audit fees and firm performance, evidence is provided that TCIR is positively associated with board independence and chief executive officer (CEO) ownership. Furthermore, it is found that large companies are faster in posting their annual reports to their web sites. The findings suggest that board composition and ownership structure influence a firm's TCIR behaviour, presumably in response to the information asymmetry between management and investors and the resulting agency costs. Practical implications – The findings highlight the need for improvement in TCIR by Irish-listed companies in many areas, especially with regard to regular updates of the information provided on their web sites. Originality/value – This study represents one of the first comprehensive examinations of the important dimension of TCIR in Irish-listed companies.
Abstract:
This thesis investigates corporate financial disclosure practices on Web sites and their impact. This is done, first, by examining the views of various Saudi user groups (institutional investors, financial analysts and private investors) on disclosure of financial reporting on the Internet and assessing differences, if any, in the perceptions of the groups. Over 303 individuals from the three groups responded to a questionnaire. Views were elicited regarding: users' attitudes to the Internet infrastructure in Saudi Arabia, users' information sources about companies in Saudi Arabia, respondents' perceptions of the advantages and disadvantages of Internet financial reporting (IFR), respondents' attitudes to the quality of IFR provided by Saudi public companies, and the impact of IFR on users' information needs. Overall, it was found that the professional groups (institutional investors, financial analysts) hold similar views in relation to many issues, while the opinions of private investors differ considerably. Second, the thesis examines the use of the Internet for the disclosure of financial and investor-related information by Saudi public companies (113 companies) and looks to identify reasons for the differences in the online disclosure practices of companies by testing the association between eight firm-specific factors and the level of online disclosure. The financial disclosure index (167 items) is used to measure public company disclosure in Saudi Arabia. The descriptive part of the study reveals that 95 (84%) of the Saudi public companies in the sample had a website and 51 (45%) had a financial information section of some description. Furthermore, none of the sample companies provided 100% of the 167 index items applicable to the company. Results of multivariate analysis show that firm size and stock market listing are significant explanatory variables for the amount of information disclosed on corporate Web sites. The thesis finds a significant and negative relationship between the proportion of institutional ownership of a company's shares and the level of IFR.
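As a rough illustration of how a checklist-style disclosure index of this kind is typically scored, the sketch below computes the share of applicable index items that a company actually discloses; the item names are hypothetical and this is not the thesis's 167-item instrument.

```python
# Rough illustration of scoring a checklist-style disclosure index
# (not the thesis's actual 167-item instrument): the score is the share
# of applicable index items that a company discloses on its web site.
def disclosure_score(disclosed_items, applicable_items):
    """Fraction of applicable index items actually disclosed."""
    applicable = set(applicable_items)
    return len(set(disclosed_items) & applicable) / len(applicable)

# Hypothetical index items for one company.
applicable = {"annual_report_pdf", "interim_report", "share_price", "investor_contact"}
disclosed = {"annual_report_pdf", "share_price"}
print(f"index score: {disclosure_score(disclosed, applicable):.0%}")  # 50%
```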
Abstract:
This paper describes research findings on the roles that organizations can adopt in managing supply networks. Drawing on extensive empirical data, it is demonstrated that organizations may be said to be able to manage supply networks, provided a broad view of ‘managing’ is adopted. Applying role theory, supply network management interventions were clustered into sets of linked activities and goals that constituted supply network management roles. Six supply network management roles were identified – innovation facilitator, co-ordinator, supply policy maker and implementer, advisor, information broker and supply network structuring agent. The findings are positioned in the wider context of debates about the meaning of management, the contribution of role theory to our understanding of management, and whether inter-organizational networks can be managed.
Abstract:
This paper presents an argument that it is possible for an organisation to manage networks, but understanding this involves consideration of what is meant by "managing". Based on prior research and data from a major longitudinal action research study in the health sector, the paper describes six network management roles: network structuring agent; co-ordinator; advisor; information broker; relationship broker; innovation sponsor. The necessary "assets" for effective performance of these roles are identified, in particular those relating to team competence. The findings enrich and significantly develop previous work on network management roles and activities, and their influencing factors. It is concluded that, given the specific nature of the networks studied, further research is required to evaluate the generalisability of the findings, though initial indications are promising.
Abstract:
Paper included in the proceedings of the National Conference "Образованието в информационното общество" (Education in the Information Society), Plovdiv, May 2010.
Abstract:
The Everglades Online Thesaurus is a structured vocabulary of concepts and terms relating to the south Florida environment. Designed as an information management tool for both researchers and metadata creators, the Thesaurus is intended to improve information retrieval across the many disparate information systems, databases, and web sites that provide Everglades-related information. The vocabulary provided by the Everglades Online Thesaurus expresses each relevant concept using a single ‘preferred term’, whereas in natural language many terms may exist to express that same concept. In this way, the Thesaurus offers the possibility of standardizing the terminology used to describe Everglades-related information — an important factor in predictable and successful resource discovery.
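The sketch below illustrates the preferred-term idea in a few lines of Python; the variant-to-preferred mappings are invented for illustration and are not the Thesaurus's actual vocabulary or interface.

```python
# Toy illustration of a controlled vocabulary: many natural-language variants
# are normalised to a single preferred term before indexing or searching.
# The terms below are made up for illustration.
PREFERRED = {
    "sawgrass marsh": "sawgrass marshes",
    "sawgrass prairies": "sawgrass marshes",
    "cladium jamaicense communities": "sawgrass marshes",
}

def normalise(term):
    """Map a variant term to its preferred form; unknown terms pass through."""
    return PREFERRED.get(term.lower().strip(), term)

for query in ["Sawgrass prairies", "water quality"]:
    print(query, "->", normalise(query))
```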
Abstract:
As the Web evolves unexpectedly fast, information grows explosively. Useful resources become more and more difficult to find because of their dynamic and unstructured characteristics. A vertical search engine is designed and implemented for a specific domain. Instead of processing the giant volume of miscellaneous information distributed across the Web, a vertical search engine targets relevant information in specific domains or topics and ultimately provides users with up-to-date information, highly focused insights and actionable knowledge representation. As mobile devices become more popular, the nature of search is changing: acquiring information on a mobile device poses unique requirements on traditional search engines, which will potentially change every feature they used to have. In short, users expect search engines that can satisfy their individual information needs, adapt to their current situation, and present highly personalized search results. In my research, the next-generation vertical search engine utilizes and enriches existing domain information to close the loop of the vertical search engine's system, so that knowledge discovery, actionable information extraction, and user-interest modeling and recommendation mutually facilitate one another. I investigate three problems in which a domain taxonomy plays an important role: taxonomy generation using a vertical search engine, actionable information extraction based on a domain taxonomy, and the use of an ensemble taxonomy to capture users' interests. As the underlying theory, ultra-metrics, dendrograms, and hierarchical clustering are discussed in depth. Methods for taxonomy generation based on my research on hierarchical clustering are developed. The related vertical search engine techniques are applied in practice in the disaster management domain. In particular, three disaster information management systems are developed and presented as real use cases of this research work.
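As a minimal sketch of the kind of hierarchical clustering that underlies taxonomy generation (not the author's actual system), the following Python snippet clusters a handful of hypothetical disaster-management documents and cuts the dendrogram at two heights to obtain a coarse and a fine taxonomy level.

```python
# Minimal sketch: build a two-level taxonomy by agglomerative clustering of
# domain documents. The documents and cut thresholds are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

docs = [
    "hurricane evacuation route planning",
    "hurricane shelter capacity management",
    "flood water level sensor monitoring",
    "flood damage assessment report",
]

# Represent each document as a TF-IDF vector.
vectors = TfidfVectorizer().fit_transform(docs).toarray()

# Build the dendrogram with average linkage over cosine distances; the
# cophenetic (merge-height) distances it induces form an ultra-metric.
dendrogram = linkage(pdist(vectors, metric="cosine"), method="average")

# Cut the dendrogram at two heights: coarse topics and finer subtopics.
coarse = fcluster(dendrogram, t=0.9, criterion="distance")
fine = fcluster(dendrogram, t=0.5, criterion="distance")

for doc, c, f in zip(docs, coarse, fine):
    print(f"topic {c} / subtopic {f}: {doc}")
```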
Abstract:
Learning Management Systems (LMSs) have become a larger part of teaching and learning in the modern world. Moodle, a free and open-source e-learning tool, has therefore emerged and gained a great deal of attention and downloads. One purpose of this study has been to develop a new local plugin in Moodle following guidelines from Magnus Eriksson and Tsedey Terefe. Another purpose has been to build a plugin that provides the functions Date rollover and Individual date adjustment. Mid Sweden University (Miun) stated that WebCT/Blackboard was in use before Moodle and some other LMSs, and dissatisfaction with WebCT/Blackboard was rife, although some teachers liked it; WebCT/Blackboard was therefore abandoned and Moodle was embraced. Information was gathered mainly from web-based sources and from three interviews, also referred to as user tests. Programs and other aids that have been used include, but are not limited to, Google Drive, LTI Provider, Moodle, Moodle documentation, Notepad++, PHP and XAMPP. The plugin has been implemented as a local plugin. The result has shown that the coded plugin, Date adjustment tools, could be improved, and it was changed accordingly. In the plugin, support for old American English date formats was added, and the code for the two functions "Date rollover" and "Individual date adjustment" was rewritten so that they do not interfere with one another. A conclusion to draw from the result is that the plugin has been improved compared with Terefe's implementation, although more work can be done on the Date adjustment tools plugin.
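For illustration only, the snippet below sketches the general idea behind a date rollover in Python rather than the plugin's actual Moodle/PHP code: every activity date is shifted by the offset between the old and the new course start date. The field names and dates are hypothetical.

```python
# Minimal sketch of a "date rollover": shift every activity date by the
# offset between the old and the new course start date.
from datetime import date

def rollover_dates(activity_dates, old_start, new_start):
    """Shift all activity dates by (new_start - old_start)."""
    offset = new_start - old_start
    return {name: d + offset for name, d in activity_dates.items()}

# Hypothetical course activities.
activities = {
    "assignment_1_due": date(2015, 9, 14),
    "quiz_1_opens": date(2015, 10, 2),
}
print(rollover_dates(activities,
                     old_start=date(2015, 8, 31),
                     new_start=date(2016, 8, 29)))
```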
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, three levels are defined at which discovery is performed: the content, service and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery according to the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. The discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Also, service discovery can be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web and the knowledge present in service and content discovery rules, in order to anticipate the contents and services to be found on specific resources from the web. By the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used there for the discovery and extraction of pieces of news from the web.
Similarly, in the Resulta and VulneraNET projects the discovery of ideas and of security knowledge on the web is covered, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified framework for discovery, which allows configuring agents to perform automated tasks. Also, a scraping ontology has been defined for the construction of mappings for scraping web resources. A novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings out of the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.
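As a rough sketch of the content-discovery step (mapping pieces of an HTML representation onto semantic entities), the snippet below applies a tiny "discovery rule" to a hypothetical HTML fragment and emits RDF-style triples; the selectors, vocabulary URIs and rule format are illustrative assumptions, not the thesis's actual rule language or scraping ontology.

```python
# Rough sketch of content discovery: a "discovery rule" maps parts of an HTML
# representation onto semantic entities, emitted here as plain RDF-style
# triples. The HTML, selectors and vocabulary URIs are hypothetical.
from bs4 import BeautifulSoup

html = """
<div class="news-item">
  <h2 class="headline">Storm hits the coast</h2>
  <span class="author">J. Doe</span>
</div>
"""

# Discovery rule: CSS selector -> predicate URI.
rule = {
    "h2.headline": "http://example.org/vocab#headline",
    "span.author": "http://example.org/vocab#author",
}

def discover(html_text, rule, subject="http://example.org/news/1"):
    """Apply a discovery rule to an HTML representation and emit triples."""
    soup = BeautifulSoup(html_text, "html.parser")
    triples = []
    for selector, predicate in rule.items():
        for node in soup.select(selector):
            triples.append((subject, predicate, node.get_text(strip=True)))
    return triples

for triple in discover(html, rule):
    print(triple)
```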
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia — a community effort to extract structured information from Wikipedia. Several approaches to extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
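A minimal sketch of the path-based idea (not the Shakti tool itself): fetch a small neighbourhood graph from the public DBpedia SPARQL endpoint and score a pair of concepts by the length of the shortest connecting path. The seed resources, the SPARQL query and the scoring formula are illustrative assumptions.

```python
# Minimal sketch of path-based relatedness over DBpedia: build a small
# neighbourhood graph via the public SPARQL endpoint, then score a term pair
# by the length of the shortest connecting path (shorter -> more related).
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"

def neighbours(resource):
    """Return DBpedia resources directly linked to `resource`."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"""
        SELECT DISTINCT ?o WHERE {{
            <{resource}> ?p ?o .
            FILTER(isIRI(?o) && STRSTARTS(STR(?o), "http://dbpedia.org/resource/"))
        }} LIMIT 200
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["o"]["value"] for b in results["results"]["bindings"]]

def relatedness(graph, a, b):
    """Illustrative score: inverse of (1 + shortest path length)."""
    try:
        return 1.0 / (1.0 + nx.shortest_path_length(graph, a, b))
    except nx.NetworkXNoPath:
        return 0.0

# Illustrative seed concepts from the alternative-music domain.
seeds = ["http://dbpedia.org/resource/Punk_rock",
         "http://dbpedia.org/resource/Indie_rock"]
graph = nx.Graph()
for seed in seeds:
    graph.add_node(seed)
    for other in neighbours(seed):
        graph.add_edge(seed, other)

print(relatedness(graph, *seeds))
```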