783 results for open data value chain
Abstract:
The Product-Service Systems (PSS), servitization, and Service Science literature continues to grow as organisations seek to protect and improve their competitive position. Technology also has significant potential to enable service delivery systems that make real-time decisions based upon 'in the field' performance. Research identifies four key questions to be addressed: how far along the servitization continuum should the organisation go in a single strategic step? Does the organisation have the structure and infrastructure to support this transition? What level of condition monitoring should it employ? Is the product positioned correctly in the value chain to adopt condition monitoring technology? Strategy consists of three dimensions: content, context, and process. The literatures on PSS, servitization, and strategy all discuss the concepts relative to content and context, but none offers a process for delivering an aligned strategy for a service delivery system enabled by condition-based management. This paper presents a tested, iterative strategy formulation methodology that is the result of a structured development programme.
Abstract:
This paper discusses demand and supply chain management and examines how artificial intelligence techniques and Radio Frequency Identification (RFID) technology can enhance the responsiveness of the logistics workflow. In today's globalised industrial environment, physical logistics operations and the associated flow of information are essential for companies to realise an efficient logistics workflow. A flexible logistics workflow, characterised by fast responsiveness to customer requirements through the integration of various value chain activities, is fundamental to leveraging enterprise business performance. Recent studies have found that RFID and artificial intelligence techniques are driving the development of total solutions in the logistics industry. Apart from tracking the movement of goods, RFID can play an important role in reflecting the inventory levels of various distribution areas. The proposed system is expected to have a significant impact on the performance of logistics networks by virtue of its ability to adapt to unexpected supply and demand changes in a volatile marketplace. The significance of this research is the demonstration of the synergy of combining advanced technologies into an integrated system that helps achieve a lean and agile logistics workflow.
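As a rough illustration of the inventory-reflection role described above, the following Python sketch shows how a stream of RFID tag reads could maintain per-area inventory levels. The `TagRead` and `InventoryTracker` names and the event fields are invented for this sketch; the paper does not specify its system at this level of detail.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TagRead:
    """A single RFID tag read: which item was seen, where, and whether it
    entered or left the read zone. (Hypothetical event schema.)"""
    sku: str
    location: str   # distribution area, e.g. a dock door or zone
    direction: int  # +1 = inbound, -1 = outbound

class InventoryTracker:
    """Maintains per-location inventory counts from a stream of tag reads."""
    def __init__(self):
        self.levels = defaultdict(lambda: defaultdict(int))

    def process(self, read: TagRead) -> None:
        self.levels[read.location][read.sku] += read.direction

    def level(self, location: str, sku: str) -> int:
        return self.levels[location][sku]

tracker = InventoryTracker()
for read in [TagRead("SKU-42", "dock-A", +1),
             TagRead("SKU-42", "dock-A", +1),
             TagRead("SKU-42", "dock-A", -1)]:
    tracker.process(read)
print(tracker.level("dock-A", "SKU-42"))  # 1 unit currently in dock-A
```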
Abstract:
There is increasing evidence that children continue to experience attention deficit hyperactivity disorder (ADHD) symptoms into adult life. The two main treatments for ADHD are antidepressants and stimulants. Here, the effectiveness data relating to the use of antidepressants in adults with ADHD are reviewed. Four controlled and six open studies were identified. Although only limited data are currently available, antidepressants may offer an effective therapy for adult ADHD. Controlled trials have studied desipramine, atomoxetine, and bupropion, with most evidence supporting the efficacy of desipramine. The initial data indicate that atomoxetine is less effective than desipramine. The efficacy of bupropion is unclear. Initial published open data suggest a response rate of 50-78% with venlafaxine; controlled studies are required to confirm this efficacy. Most of the present data are short-term; therefore, long-term effectiveness data are required.
Abstract:
This paper contributes to the recent ‘practice turn’ in management accounting literature in two ways: (1) by investigating the meshing and consequently the ‘situated functionality’ of accounting in various private equity (PE) practices, and (2) by experimenting with the application of Schatzki’s ‘site’ ontology. By identifying and describing the role and nature of accounting and associated calculative practices in different parts of the PE value chain, we note that the ‘situated functionality’ of accounting is ‘prefigured’ by its ‘dispersed’ nature. A particular contribution of experimenting with Schatzki’s ‘site’ ontology has been to identify theoretical concerns in relation to the meaning and role of the concept ‘general understandings’ and to clarify the definitional issues surrounding this concept. We also identify the close relationship between ‘general understandings’ and ‘teleoaffective structure’ and note their mutually constitutive nature.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014.
Abstract:
In this paper, we first give an overview of the French heritage project PATRIMA, launched in 2011 as one of the Projets d'investissement pour l'avenir, a French funding programme meant to last for ten years. The overall purpose of the PATRIMA project is to promote and fund research on various aspects of heritage presentation and preservation. Since such research is interdisciplinary, research groups in history, physics, chemistry, biology, and computer science are involved in the project. The PATRIMA consortium brings together research groups from universities and from the main museums and cultural heritage institutions in and around Paris. More specifically, the main members of the consortium are the two universities of Cergy-Pontoise and Versailles Saint-Quentin and the following famous museums and cultural institutions: Musée du Louvre, Château de Versailles, Bibliothèque nationale de France, Musée du Quai Branly, and Musée Rodin. In the second part of the paper, we focus on two projects funded by PATRIMA, named EDOP and Parcours, both dealing with data integration. The goal of the EDOP project is to provide users with a data space for the integration of heterogeneous information about heritage; Linked Open Data are considered for effective access to the corresponding data sources. The Parcours project, on the other hand, aims at building an ontology of the terminology around restoration and conservation techniques. Such an ontology is meant to provide a common terminology to researchers using different databases and different vocabularies.
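As a hedged sketch of how a Linked Open Data source of the kind EDOP targets can be accessed, the following Python snippet runs a SPARQL query with the SPARQLWrapper library. The endpoint and query are purely illustrative (the public DBpedia endpoint), not the project's actual data sources.

```python
# Minimal Linked Open Data access sketch; endpoint and query are illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    SELECT ?museum ?label WHERE {
        ?museum a dbo:Museum ;
                rdfs:label ?label .
        FILTER (lang(?label) = "en")
    } LIMIT 5
""")
endpoint.setReturnFormat(JSON)

# Print the English labels of the first five museums found.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])
```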
Abstract:
In the global Internet economy, e-business is a driving force that redefines business models and operational processes, posing new challenges for traditional organizational structures and information system (IS) architectures. These developments promise a renewed period of innovative thinking in e-business strategies, with new enterprise paradigms and different Enterprise Resource Planning (ERP) systems. In this chapter, the authors investigate how dynamic e-business strategies, as the next evolutionary generation of e-business, can be realized through newly diverse enterprise structures supported by ERP, ERPII, and so-called "ERPIII" solutions relying on the virtual value chain concept. Exploratory inductive multi-case studies in the manufacturing and printing industries were conducted. The chapter also proposes a conceptual framework for discussing the adoption and governance of ERP systems within the context of three enterprise forms for enabling dynamic and collaborative e-business strategies, and in particular demonstrates how an enterprise can dynamically migrate from its current position to the position it desires to occupy in the future - a migration that must and will include dynamic e-business as a core competency, but that also relies heavily on an ERP-based backbone and other robust technological platforms and applications.
Abstract:
This article investigates attitudes to inter-firm co-operation in Hungary by analysing a special group of business networks: business clusters. Following an overview of cluster policy, a wide range of self-proclaimed business clusters are identified. A small elite of these business networks evolves into successful, sustainable, innovative business clusters. In the majority of cases, however, these consortia of inter-firm co-operation are not based on a mutually satisfactory model, and as a consequence many clusters do not survive in the longer term. The paper uses the concepts and models of social network theory to explain why and under what circumstances inter-firm co-operation in clusters enhances the competitiveness of the network as a whole, or alternatively under what circumstances the cluster remains dependent on government subsidies. The empirical basis of the study is thorough internet research on the Hungarian cluster movement, a questionnaire-based expert survey among managers of clusters and member companies, and a set of in-depth interviews with managers of self-proclaimed clusters. The last chapter analyses the applicability of social network theory to the analysis of business networks and recommends a model involving the value chain.
Abstract:
The paper aims to identify the actual media audiences of different mass and non-mass media types by identifying audience clusters that consume not merely different but differentiable media mixes. A major concern of the study is to highlight the transformation of mass media audiences as technology, digitalization, and participation behaviours reshape traditional audience forms and media diets, which may directly affect the traditional media value chain and, in turn, the thinking and decision-making of media managers. Through such a kaleidoscope, the authors examined media use and consumption patterns using an online self-reported questionnaire, developing media consumer clusters as well as media consumption mixes. Based on the results of the study, the authors can state that internet use is today's main base of media consumption and is becoming the real mass medium, replacing television. However, this "new" medium has a completely different structure, being more fragmented with smaller audience reach. At the same time, television is keeping its audience, although segments self-reporting non- or light television viewing are emerging; this raises the question of the viewer-television relationship among different television viewer clusters. Finally, only gaming exhibited demographic differentiation of audiences based on gender.
Abstract:
The Internet has revolutionised the way individuals communicate. We are witnessing the birth and development of an era characterised by the availability of free information accessible to everyone. In recent years, thanks to the spread of smartphones, tablets, and other types of connected devices, the focus of innovation has shifted from people to objects. This is the origin of the concept of the Internet of Things, a term used to describe the communication network created among the various devices connected to the Internet and capable of interacting autonomously. The application domains of the Internet of Things range from home automation to healthcare, from environmental monitoring to smart cities, and so on. The main objective of the discipline is to improve people's lives through systems able to interact without requiring human intervention. Precisely because of the heterogeneous nature of the discipline, and depending on the application domain, the Internet of Things can run into problems arising from the presence of different technologies or heterogeneous ways of storing data. In this respect, the concept of the collaborative Internet of Things is introduced, a term denoting the goal of building applications that can guarantee interoperability across the different ecosystems and the different sources on which the Internet of Things draws, exploiting the availability of Open Data publication platforms. The objective of this thesis was to create a system for aggregating data from two platforms, ThingSpeak and Sparkfun, in order to unify them in a single database and extract meaningful information from the data using two Data Mining techniques: Dictionary Learning and Affinity Propagation. The two methodologies, which fall respectively under classification and clustering techniques, are illustrated.
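As a rough illustration of the two mining techniques the thesis names, the following Python sketch applies scikit-learn's Affinity Propagation (clustering) and Dictionary Learning (sparse coding) to toy sensor readings. The data and feature names are invented and do not come from the ThingSpeak or Sparkfun datasets.

```python
# Toy demonstration of Affinity Propagation and Dictionary Learning.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Fake readings: rows = sensors, columns = (temperature, humidity),
# drawn around two distinct operating regimes.
readings = np.vstack([rng.normal([20, 40], 1, (10, 2)),
                      rng.normal([30, 70], 1, (10, 2))])

# Affinity Propagation groups sensors without fixing the number of clusters.
labels = AffinityPropagation(random_state=0).fit_predict(readings)
print("cluster labels:", labels)

# Dictionary Learning finds a sparse basis for the readings.
dico = DictionaryLearning(n_components=2, random_state=0)
codes = dico.fit_transform(readings)
print("sparse codes shape:", codes.shape)
```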
Abstract:
This thesis focuses on the extensions made to BEX (Bibliographic Explorer), a web app for navigating scientific publications through their citations. It belongs to the field of Semantic Publishing, a new research area that applies Semantic Web technologies to Scholarly Publishing with the aim of publishing academic articles enriched with semantic metadata. BEX originated within the Semantic Lancet Project of the Department of Computer Science of the University of Bologna, whose goal is to build a Linked Open Dataset of academic publications, the Semantic Lancet Triplestore (SLT), and to provide tools for high-level navigation and in-depth use of the data it contains. The scholarly Linked Open Data processed by BEX are sets of RDF triples conforming to the SPAR ontologies. Originally, BEX used the SLT dataset as its backend, containing metadata about publications in Elsevier's Journal of Web Semantics. BEX offers advanced views through an interactive interface and a good user experience. The typical BEX user is the academic researcher, who relies heavily on Digital Libraries (DL) and the services they offer in daily work. Given the ferment among researchers in the field of Semantic Publishing and the rapid spread of scholarly Linked Open Data publication, it is reasonable to extend and maintain a project that can provide sense-making for data that would otherwise be queryable only directly via SPARQL queries. The main extensions to BEX concern scalability and flexibility: pagination of search results was implemented, along with independence from SLT so that datasets differing in structure and volume can be handled, and the creation of author-centric views through data aggregation and comparison between authors.
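As a hedged sketch of the kind of direct SPARQL interrogation the abstract contrasts BEX with, the following Python snippet uses rdflib to count incoming citations via cito:cites from the SPAR ontology family. The file name is a placeholder, not the actual SLT triplestore.

```python
# Counting incoming citations per paper over a SPAR-modelled RDF graph.
import rdflib

g = rdflib.Graph()
g.parse("slt_sample.ttl", format="turtle")  # hypothetical local dump

results = g.query("""
    PREFIX cito: <http://purl.org/spar/cito/>
    SELECT ?paper (COUNT(?citing) AS ?citations) WHERE {
        ?citing cito:cites ?paper .
    }
    GROUP BY ?paper
    ORDER BY DESC(?citations)
""")
for paper, citations in results:
    print(paper, citations)
```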
Abstract:
This thesis describes PARLEN, a tool for analysing articles, extracting and recognising entities - for example people, institutions, and cities - and linking them to online resources. PARLEN can also publish the extracted data in a dataset based on Semantic Web principles and technologies.
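As an illustrative analogue of PARLEN's entity-recognition step (the abstract does not describe its actual pipeline or models), the following Python sketch uses spaCy's off-the-shelf named-entity recognizer on an invented sentence:

```python
# Off-the-shelf NER as an analogue of PARLEN's entity-extraction step.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("The Louvre in Paris lent three works to the University of Bologna.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Paris GPE", "the University of Bologna ORG"
```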
Abstract:
This study tested whether contract farming or farmers' professional cooperatives (FPCs) improved the social benefit of pork production and the income of breeding farmers in China. The main concern of the study is whether institutional arrangements such as contract farming or FPCs actually improved the welfare of farmers as expected. To answer this question accurately, we estimated the differentiated market demand for pork products in order to quantify the benefit by transaction type. Our study finds that contract farming and FPCs improved the benefits of pork products, but farmers' incomes remained lower than under traditional transaction types. This finding is new in quantifying the distribution of economic value among sales outlets, agro-firms, and farmers. It is also more reliable because it explicitly captures impacts from both the demand side and the supply side through structural estimation. In practice, we need to keep in mind that the bargaining power of small farmers will not improve instantly even when contract farming or FPCs are introduced.
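The abstract does not spell out the demand model, so the following is only a sketch of the standard logit specification commonly used for structural estimation of differentiated product demand (Berry, 1994), not the paper's actual model:

```latex
% Illustrative logit demand specification (assumed, not from the paper).
% s_{jt}: market share of pork product j in market t; s_{0t}: outside good;
% p_{jt}: price; x_{jt}: observed characteristics; \xi_{jt}: unobserved demand shock.
\ln s_{jt} - \ln s_{0t} = \alpha p_{jt} + x_{jt}'\beta + \xi_{jt}
```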
Abstract:
Today, many governments are publishing (or intend to publish soon) thousands of datasets for people and organisations to use. As a consequence, the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, which results in a variety of formats, since there is no international standard specifying them. The main objective of this work is a comparative analysis of environmental data in Open Data repositories belonging to different governments. Given this variety of formats, we must build a data integration process capable of combining all of them. The work involves pre-processing, cleaning, and integrating the different data sources. Many applications have been developed to support the integration process, for example Data Tamer and Data Wrangler, as explained in this document. The problem with these applications is that they require user interaction as a fundamental part of the integration process. In this work we try to avoid human supervision by exploiting the similarities between datasets from the same area, which in our case is the environmental domain. In this way the processes can be automated with suitable programming. To achieve this, the central idea of this work is to build ad hoc processes adapted to each government's sources so as to obtain automatic integration. Specifically, this work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, and wind speed. For the past two years, the government of Madrid has been publishing its environmental indicator data in real time. Likewise, other governments (such as Andalucía and Bilbao) have published environmental Open Data, but all these data come in different formats. This work presents a solution capable of integrating all of them, which also allows the user to visualise and analyse the data in real time. Once the integration process is complete, all the data from each government share the same format and analysis processes can be run in a more computational manner. The work has three main parts: 1. a study of Open Data environments and the related literature; 2. the development of an integration process; and 3. the development of a graphical and analytical interface. Although a first phase implemented the integration processes in Java and Oracle and the graphical interface in Java (JSP), a later phase re-implemented everything in R, with the graphical interface built using its libraries, mainly Shiny. The result is an application that provides a set of Real-Time Integrated Environmental Data for two very different governments in Spain, available to any developer who wishes to build their own applications.
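As a hedged sketch of the per-source adapters the work describes (the thesis itself uses R and Shiny; Python and pandas are used here only for illustration), the following snippet maps two invented source layouts onto one unified schema, including a unit conversion. All file and column names are hypothetical.

```python
# Per-source adapters mapping heterogeneous open datasets onto one schema.
import pandas as pd

def load_madrid(path: str) -> pd.DataFrame:
    # Hypothetical layout: semicolon-separated, Spanish column names.
    df = pd.read_csv(path, sep=";")
    df = df.rename(columns={"fecha": "timestamp", "temperatura": "temp_c"})
    return df[["timestamp", "temp_c"]].assign(source="madrid")

def load_bilbao(path: str) -> pd.DataFrame:
    # Hypothetical layout: comma-separated, temperatures in Fahrenheit.
    df = pd.read_csv(path)
    df["temp_c"] = df["temp_f"].sub(32).mul(5 / 9)  # harmonise units
    df = df.rename(columns={"date": "timestamp"})
    return df[["timestamp", "temp_c"]].assign(source="bilbao")

# One table, one schema: downstream analysis no longer cares about the source.
unified = pd.concat([load_madrid("madrid.csv"), load_bilbao("bilbao.csv")],
                    ignore_index=True)
```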
Abstract:
We present an innovation value chain analysis for a representative sample of new technology based firms (NTBFs) in the UK. This involves determining which factors lead to the usage of different knowledge sources and the relationships that exist between those sources of knowledge; the effect that each knowledge source has on innovative activity; and how innovation outputs affect the performance of NTBFs. We find that internal (i.e. R&D) and external knowledge sources are complementary for NTBFs, and that supply chain linkages have both a direct and indirect effect on innovation. NTBFs' skill resources matter throughout the innovation value chain, being positively associated with external knowledge linkages and innovation success, and also having a direct effect on growth independent of the effect on innovation. ©2010 IEEE.