860 results for Enterprise content management
Abstract:
Softeam has over 20 years of experience providing UML-based modelling solutions, such as its Modelio modelling tool, and its Constellation enterprise model management and collaboration environment. Due to the increasing number and size of the models used by Softeam’s clients, Softeam joined the MONDO FP7 EU research project, which worked on solutions for these scalability challenges and produced the Hawk model indexer among other results. This paper presents the technical details and several case studies on the integration of Hawk into Softeam’s toolset. The first case study measured the performance of Hawk’s Modelio support using varying amounts of memory for the Neo4j backend. In another case study, Hawk was integrated into Constellation to provide scalable global querying of model repositories. Finally, the combination of Hawk and the Epsilon Generation Language was compared against Modelio for document generation: for the largest model, Hawk was two orders of magnitude faster.
Abstract:
Business longevity has been a recurring theme in the management literature. Despite advances, the liquidation of companies keeps increasing. In search of alternatives for improvement, this study examines the case of two forty-year-old firms providing consulting services in electrical and civil engineering which, under crisis conditions, implemented actions that allowed them not only to remain in the market but also to strengthen their financial structure. The results showed that a balanced approach, characterized by timely decision-making and the definition and implementation of effective business strategies, constitutes an optimal tool for ensuring a greater degree of business resilience.
Abstract:
Cultural heritage is the expression of the community it refers to, and digital technology can be a valuable tool for telling the stories of cultural assets so that they are not only studied but also understood in their deepest meaning by wider audiences. Publishing manuscript texts on the web using Linked Data technologies facilitates the use of the text by non-specialist users and the creation of research tools. The digitisation proposal of this thesis concerns the life of Federico da Montefeltro written by Vespasiano da Bisticci, using the schema.org, FOAF and Relationship vocabularies to mark up the text and Content Management Systems to publish the data. In this way it will be possible to build a website whose graphic design can also be curated according to the principles of user experience and information architecture, in order to highlight the figures of the Duke of Urbino and the Florentine bookseller.
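As a rough illustration of the proposed markup, the sketch below uses rdflib to describe the biography and its two protagonists with the FOAF, schema.org and Relationship vocabularies; the base URI, the chosen Relationship property, and the property choices are illustrative assumptions rather than the thesis's actual data model.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import FOAF, RDF

# Vocabularies named in the abstract; the Relationship vocabulary URI is the
# commonly used purl.org one, assumed here rather than taken from the thesis.
SCHEMA = Namespace("https://schema.org/")
REL = Namespace("http://purl.org/vocab/relationship/")
EX = Namespace("http://example.org/vespasiano/")  # illustrative base URI

g = Graph()
g.bind("schema", SCHEMA)
g.bind("foaf", FOAF)
g.bind("rel", REL)

federico = EX["federico-da-montefeltro"]
vespasiano = EX["vespasiano-da-bisticci"]
vita = EX["vita-di-federico"]

# People described with FOAF
g.add((federico, RDF.type, FOAF.Person))
g.add((federico, FOAF.name, Literal("Federico da Montefeltro")))
g.add((vespasiano, RDF.type, FOAF.Person))
g.add((vespasiano, FOAF.name, Literal("Vespasiano da Bisticci")))

# The biography described with schema.org, linked to its author and subject
g.add((vita, RDF.type, SCHEMA.Book))
g.add((vita, SCHEMA.author, vespasiano))
g.add((vita, SCHEMA.about, federico))

# A biographical link expressed with the Relationship vocabulary
g.add((vespasiano, REL.knowsOf, federico))

print(g.serialize(format="turtle"))
```

Once the text itself is marked up along these lines, the resulting triples can be published through the CMS and queried or enriched independently of the page layout.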
Abstract:
A Digital Scholarly Edition is a conceptually and structurally sophisticated entity. Throughout the centuries, diverse methodologies have been employed to reconstruct a text transmitted through one or multiple sources, resulting in various edition types. With the advent of digital technology in philology, these practices have undergone a significant transformation, compelling scholars to reconsider their approach in light of the web. In the digital age, philologists are expected to possess (too) advanced technical skills to prepare interactive and enriched editions, even though, in most cases, only mechanical or documentary editions are published online. The Śivadharma Database is a web Content Management System (CMS) designed to facilitate the preparation, publication, and updating of Digital Scholarly Editions. It provides scholars with a user-friendly CRUD web application to reconstruct and annotate a text, so that they can prepare their textus with additional components such as apparatus, notes, translations, citations, and parallels. This is made possible by an annotation system based on HTML and a graph data structure, a choice motivated by the fact that the text entity is multidimensional and multifaceted even though its sequential presentation constrains it. In particular, editions of South Asian texts of the Śivadharma corpus, the case study of this research, contain a series of phenomena that are difficult to manage formally, such as overlapping hierarchies. Hence, it becomes necessary to establish the data structure best suited to represent this complexity. In the Śivadharma Database, the textus is an HTML file that can be displayed directly. Textual fragments, annotated via an interface that does not require philologists to write code and saved in the backend, form the atomic units of multiple relationships organised in a graph database. This approach enables the formal representation of complex and overlapping textual phenomena, allowing for good annotation expressiveness with minimal effort to learn the relevant technologies during the editing workflow.
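A minimal sketch of the fragment-and-relation idea described above, in plain Python rather than the HTML/graph-database stack the Śivadharma Database actually uses: fragments are atomic nodes and annotations are labelled relations pointing at them, so overlapping spans can coexist. All names and the sample text are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """An atomic span of the textus, identified by character offsets."""
    fid: str
    start: int
    end: int

@dataclass
class Annotation:
    """A labelled relation attached to one or more fragments."""
    kind: str          # e.g. "apparatus", "translation", "note"
    body: str
    targets: list = field(default_factory=list)

textus = "namah sivaya gurave nada-bindu-kalatmane"

# Two fragments that overlap: a metrical unit and an apparatus lemma need not
# nest inside each other, which is exactly what strict XML hierarchies forbid.
f_pada = Fragment("pada-1", 0, 20)
f_lemma = Fragment("lemma-1", 6, 27)

annotations = [
    Annotation("metrical-unit", "first pada", [f_pada]),
    Annotation("apparatus", "variant reading in a second witness", [f_lemma]),
    Annotation("translation", "Homage to Siva, the guru ...", [f_pada, f_lemma]),
]

# Because annotations point at fragments instead of wrapping them, overlapping
# spans coexist without any hierarchy conflict.
for ann in annotations:
    spans = ", ".join(f"{t.fid}[{t.start}:{t.end}]" for t in ann.targets)
    print(f"{ann.kind:>14}: {spans} -> {ann.body}")
```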
Abstract:
The University of Queensland, Australia, has developed Fez, a world-leading user interface and management system for Fedora-based institutional repositories, which bridges the gap between a repository and its users. Christiaan Kortekaas, Andrew Bennett and Keith Webster will review this open source software that gives institutions the power to create a comprehensive repository solution without the hassle.
Abstract:
The general objective of this work was to study the contribution of ERP systems to the quality of managerial accounting information, based on the perceptions of managers of large Brazilian companies. The starting premise was that companies now operate in a global and competitive environment in which information about enterprise performance and the evaluation of intangible assets are necessary conditions for survival. The exploratory research is based on a sample of 37 managers of large Brazilian companies. The qualitative analysis of the data showed that the great majority of the companies in the sample (86%) have an ERP system implemented, and that this system is used in combination with other application software. Most managers were also satisfied with the information generated with respect to the Time and Content dimensions. However, regarding the qualitative nature of the information, the ERP enabled some analyses when the Balanced Scorecard was adopted, but it did not provide information capable of estimating the investments made in intangible assets. These results suggest that, in these companies, ERP systems are not adequate to support strategic decisions.
Abstract:
OBJECTIVE: Various support measures useful for promoting joint change approaches to the improvement of both shiftworking arrangements and safety and health management systems were reviewed. A particular focus was placed on enterprise-level risk reduction measures linking working hours and management systems. METHODS: Voluntary industry-based guidelines on night and shift work for department stores and the chemical, automobile and electrical equipment industries were examined. Survey results that had led to the compilation of practicable measures to be included in these guidelines were also examined. The common support measures were then compared with ergonomic checkpoints for plant maintenance work involving irregular nightshifts. On the basis of this analysis, a new night and shift work checklist was designed. RESULTS: Both the guidelines and the plant maintenance work checkpoints were found to commonly cover multiple issues including work schedules and various job-related risks. This close link between shiftwork arrangements and risk management was important as shiftworkers in these industries considered teamwork and welfare services to be essential for managing risks associated with night and shift work. Four areas found suitable for participatory improvement by managers and workers were work schedules, ergonomic work tasks, work environment and training. The checklist designed to facilitate participatory change processes covered all these areas. CONCLUSIONS: The checklist developed to describe feasible workplace actions was suitable for integration with comprehensive safety and health management systems and offered valuable opportunities for improving working time arrangements and job content together.
Abstract:
Paper presented at the ECKM 2010 – 11th European Conference on Knowledge Management, 2-3 September, 2010, Famalicão, Portugal. URL: http://www.academic-conferences.org/eckm/eckm2010/eckm10-home.htm
Abstract:
The need for better adaptation of networks to transported flows has led to research on new approaches such as content aware networks and network aware applications. In parallel, recent developments of multimedia and content oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management driven hybrid multicast solution that is multi-domain and media oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content aware network and network aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content aware networks, spanning a single or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.
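As a much-simplified illustration of building one virtual distribution tree per content type over a multi-domain topology, the sketch below computes source-rooted trees for two content types from a toy link map; the node names and the plain hop-count metric are assumptions, not the architecture or signalling proposed in the article.

```python
from collections import deque

# Assumed multi-domain topology: nodes are content-aware network elements,
# edges are (virtual) links; a real deployment would also carry QoS metrics.
links = {
    "srcA": ["d1-core"],
    "d1-core": ["srcA", "d1-edge", "d2-core"],
    "d1-edge": ["d1-core", "sub1"],
    "d2-core": ["d1-core", "d2-edge"],
    "d2-edge": ["d2-core", "sub2", "sub3"],
    "sub1": ["d1-edge"], "sub2": ["d2-edge"], "sub3": ["d2-edge"],
}

def distribution_tree(source, subscribers):
    """Build a per-content overlay tree as a set of edges via BFS
    (hop count stands in for the QoS-aware metric a real solution needs)."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in links[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    # Keep only the branches that actually reach a subscriber.
    tree_edges = set()
    for sub in subscribers:
        node = sub
        while parent[node] is not None:
            tree_edges.add((parent[node], node))
            node = parent[node]
    return tree_edges

# One virtual content-aware tree per content type (e.g. IPTV vs. VoD).
iptv_tree = distribution_tree("srcA", ["sub1", "sub2", "sub3"])
vod_tree = distribution_tree("srcA", ["sub2"])
print("IPTV tree:", sorted(iptv_tree))
print("VoD tree:", sorted(vod_tree))
```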
Abstract:
Existing digital rights management (DRM) systems, initiatives like Creative Commons, and research works such as digital rights ontologies provide limited support for modelling and managing content value chains. This is becoming a critical issue as content markets start to profit from the possibilities of digital networks and the World Wide Web. The objective is to support the whole copyrighted content value chain across enterprise or business niche boundaries. Our proposal provides a framework that accommodates copyright law and a rich creation model in order to cope with all the stages of the creation life cycle. The dynamic aspects of value chains are modelled using a hybrid approach that combines ontology-based and rule-based mechanisms. The ontology implementation is based on the Web Ontology Language with Description Logic (OWL-DL), and reasoners are used directly for license checking. For the more complex aspects of the dynamics of content value chains, rule languages are the choice.
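As a loose, simplified analogue of the ontology-plus-rules idea, the toy below models two creation life-cycle stages and their rights and runs a hand-written rule as a licence check; the actual framework relies on OWL-DL reasoners and rule languages, and every class, right, and role name here is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Creation:
    title: str
    derived_from: "Creation | None" = None   # creation life-cycle link

@dataclass(frozen=True)
class Licence:
    creation: Creation
    holder: str
    rights: frozenset  # e.g. {"reproduce", "distribute", "transform"}

def may_perform(licences, holder, creation, action):
    """Toy rule: an action on a derivative work is allowed only if the holder
    has that action on the derivative and 'transform' on the original work."""
    def has(c, a):
        return any(l.holder == holder and l.creation == c and a in l.rights
                   for l in licences)
    if creation.derived_from is not None:
        return has(creation, action) and has(creation.derived_from, "transform")
    return has(creation, action)

novel = Creation("novel")
film = Creation("film adaptation", derived_from=novel)
licences = [
    Licence(novel, "studio", frozenset({"transform"})),
    Licence(film, "studio", frozenset({"reproduce", "distribute"})),
]

print(may_perform(licences, "studio", film, "distribute"))   # True
print(may_perform(licences, "studio", film, "transform"))    # False
```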
Abstract:
Knowledge management has proven to be one of the greatest challenges for organisations today. The challenge is not only the amount of information that has to be managed; rather, knowledge management serves as a competitive advantage for a company in the global business world. The aim of this thesis is to study the suitability of an enterprise portal for knowledge management in a global forest industry company. A further aim is to examine how the portal can be adapted to each user group in the case company. The theoretical part of the thesis discusses the diversity of knowledge management and the difficulty of describing it unambiguously. It also examines the factors that influence the definition of user groups and user profiles. The empirical part deals with the case company, its relationship to knowledge management, and the use of this kind of knowledge management tool. Based on the results of the study, an enterprise portal can be considered well suited to knowledge management even in a complex company. The portal makes the company's business processes more transparent when business-critical information is provided in one place.
Abstract:
User authentication in information systems has been one of the cornerstones of information security for decades. The idea of a username and password is the most cost-effective and widely used way to maintain trust between an information system and its users. In the early days of information systems, when companies had only a few systems used by a small group of users, this model proved workable. Over the years the number of systems grew, and with it the number and diversity of passwords. No one could predict how many password-related problems users would encounter, how much these would congest corporate help desks, and what kinds of security risks passwords would cause in large enterprises. This Master's thesis examines the problems caused by passwords in a large, global company. The problems are examined from four perspectives: people, technology, information security and business. They are demonstrated by presenting the results of a survey of the company's employees, carried out as part of this thesis. A solution to these problems is presented in the form of a centralised password management system. The system's features are evaluated, and a pilot implementation is built to demonstrate the functionality of such a system.
Abstract:
Many companies today struggle with problems around sales lead management. They suffer from inconsistent lead quality, miss clear sales opportunities, and cannot even handle their internal marketing lists well. Meanwhile, customers are increasingly well equipped to initiate contact easily via the internet, call centres, and other channels. Investing in lead generation activities built on a bad process is not a good idea: rather than asking how to get more leads, companies should ask how to get better quality leads and invest in improving lead management. This study views sales lead management as a multi-step process in which a company generates leads in a controlled environment, qualifies them, and hands them over to the sales cycle. As a final step, the organisation needs to analyse the outcomes and success of the different lead sources. Most often, improving the sales lead management process requires setting up additional controls to enable proper tracking of all leads. A sales lead management process model for the case company is built based on the findings. Implementing the new model involves changes and improvements in some key areas of the current process. Starting from the very beginning, these include slightly refining the lead definition and revising the criteria for a qualified lead. Some improvements are also needed on the system side to enable the proposed model. Lastly, an assignment of responsible roles is presented.
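A minimal sketch of the multi-step process described above: leads are generated with a source tag, qualified against explicit criteria, handed over to sales, and finally analysed per source. The qualification criteria and field names are illustrative assumptions, not the case company's actual definitions.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Lead:
    contact: str
    source: str                 # e.g. "web form", "call centre", "campaign"
    budget_known: bool = False
    need_confirmed: bool = False
    status: str = "new"         # new -> qualified/rejected -> handed_over
    history: list = field(default_factory=list)

def qualify(lead):
    """Apply the qualification criteria; a real model would encode the
    revised criteria agreed between marketing and sales."""
    lead.status = "qualified" if (lead.budget_known and lead.need_confirmed) else "rejected"
    lead.history.append(lead.status)
    return lead

def hand_over(lead):
    """Hand qualified leads over to the sales cycle."""
    if lead.status == "qualified":
        lead.status = "handed_over"
        lead.history.append("handed_over")
    return lead

leads = [
    Lead("Acme Oy", "web form", budget_known=True, need_confirmed=True),
    Lead("Beta Ltd", "call centre", budget_known=True),
    Lead("Gamma AB", "campaign", budget_known=True, need_confirmed=True),
]

for lead in leads:
    hand_over(qualify(lead))

# Final step: analyse outcomes per lead source to see which sources
# actually produce sales-ready leads.
per_source = Counter((l.source, l.status) for l in leads)
for (source, status), count in sorted(per_source.items()):
    print(f"{source:12s} {status:12s} {count}")
```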
Abstract:
In a networked business environment, the visibility requirements towards supply operations and the customer interface have become tighter. The case company's master data is seen as an enabler for meeting those requirements, but its current state and quality are not considered good enough. The target of this thesis was to develop a process for managing master data quality as a continuous activity and to find solutions for cleansing the current customer and supplier data to meet the quality requirements defined in that process. Based on the theory of Master Data Management and data cleansing, a small amount of master data was analysed and cleansed using a commercial data cleansing solution available on the market. This was conducted in cooperation with the vendor as a proof of concept, which demonstrated the solution's applicability for improving the quality of the current master data. Based on these findings and the theory of data management, recommendations and proposals for improving data quality were given. The results also showed that the biggest reasons for poor data quality are the lack of data governance in the company and the restrictions of the current master data solutions.
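In the same spirit, the sketch below shows the kind of cleansing step such a proof of concept involves: normalising customer records and flagging likely duplicates on a simple match key. It is a generic illustration, not the commercial cleansing solution used in the thesis, and all field names and sample records are assumptions.

```python
import re
from collections import defaultdict

# Assumed extract of customer master data with typical quality problems:
# inconsistent casing, stray whitespace, and near-duplicate entries.
customers = [
    {"id": "C001", "name": " Acme  Oy ", "country": "fi"},
    {"id": "C002", "name": "ACME OY", "country": "FI"},
    {"id": "C003", "name": "Beta Industries Ltd", "country": "SE"},
]

def normalise(record):
    """Standardise the fields so that equivalent records compare equal."""
    name = re.sub(r"\s+", " ", record["name"]).strip().upper()
    return {**record, "name": name, "country": record["country"].upper()}

def find_duplicates(records):
    """Group records by a simple match key; real tools use fuzzier matching."""
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["name"], rec["country"])].append(rec["id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

cleansed = [normalise(c) for c in customers]
print("Potential duplicates:", find_duplicates(cleansed))
# -> {('ACME OY', 'FI'): ['C001', 'C002']}
```

Flagged groups would then go through a governed review and merge step, which is where the data governance gap identified in the thesis becomes visible.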