850 results for 350202 Business Information Systems (incl. Data Processing)


Relevance: 100.00%

Abstract:

Business intelligence, knowledge-based operations and knowledge management will play a significant role in rescue departments' future decisions about services. Rescue departments, which operate as municipal enterprises and separate balance-sheet units of the public rescue services, will face challenges in the strategic management and planning of effective and efficient services. Deciding on these matters is a critical phase for success. Decision-making at different levels needs the support of analysed information channelled from operations and services. Effectiveness and quality driven by customer needs are emphasised. Business intelligence and knowledge-based operations challenge the rescue department's management system. Management capability and staff competence are at the core of knowledge-based operations and information management. What distinguishes systematic business intelligence and knowledge-based operations from a civil servant's traditional use of information is the comprehensiveness and systematic nature of the concept across all information-related activity. This covers information systems, performance indicators, processes, strategy plans, documents, reporting, development and research. Business intelligence and knowledge management link everything together, forming an interdependent, unified system and a holistic understanding.

This is a qualitative study in which data collection and analysis were carried out with mutually supporting research approaches. The methodology rests on theory-driven systematic analysis with selected elements of content analysis. Data and method triangulation were used. The research material was collected through thematic interviews with experts from the selected rescue departments at the decision-making and planning level of services, from management teams and boards. In preparation for the interviews, the researcher studied the documentation defining the target departments' services, such as service level decisions and risk analyses. The rescue departments of the Helsinki metropolitan area were selected for data collection: the Helsinki City Rescue Department and the Eastern, Central and Western Uusimaa rescue departments.

According to the results, the key obstacles to business intelligence in rescue departments consist of management problems, organisational resistance to change and a lack of knowledge basis for decision-making. These manifest as shortcomings in strategic management and as problems in measuring effectiveness and refining information. No central unifying and linking informational factor is recognised or found. According to the results, business intelligence process work could offer possibilities for filling this vacuum. Rescue departments are left with a choice about the direction in which they want to proceed with information management, knowledge management and knowledge-based operations. This affects the development of, and the objectives for, the key management and information systems that support service decisions, the documents that compile and create information, and a flexible organisational structure. Moving towards an information process, process-like management of information and systematic information management appears, according to the results, to be a promising opportunity. At the same time it challenges the rescue departments to undergo a major cultural change and requires strategic planning to accept, in advance, the information produced by new effectiveness indicators. This demands from rescue department management and staff competence, mutual understanding, acceptance of the need for change, and placing the customer at the centre of effectiveness.

Relevance: 100.00%

Abstract:

Product Data Management (PDM) systems have been utilized within companies since the 1980s, mainly by large companies. This thesis starts from the premise that small and medium-sized enterprises (SMEs) can also benefit from utilizing Product Data Management systems, and that the existing PDM systems are either too expensive or do not properly respond to the requirements SMEs have. The aim of this study is to investigate what kinds of requirements and special features SMEs operating in the Finnish manufacturing industry have towards Product Data Management. Additionally, the target is to create a conceptual model that could fulfill the specified requirements. The research has been carried out as a qualitative case study in which the research data was collected from ten Finnish manufacturing companies by interviewing their key personnel. The interview data was then processed into a generic set of information system requirements and an information system concept supporting it. The commercialization of the concept is studied from the perspective of system development. The aim was to create a conceptual model that would be economically feasible both for a company utilizing the system and for a company developing it. For this reason, the thesis has sought ways to scale the system development effort across multiple simultaneous cases. The main methods found were platform-based thinking and generalizing, or in other words abstracting, the requirements of an information system. The results of the research highlight the special features Finnish manufacturing SMEs have towards PDM. The most significant of these is the use of a project model to manage the order-to-delivery process. This differs significantly from the traditional concepts of Product Data Management presented in the literature. Furthermore, as a research result, this thesis presents a conceptual model of a PDM system that would be viable for the case companies interviewed during the research. As a by-product, this research presents a synthesized model, found from the literature, for abstracting information system requirements. In addition, the strategic importance and categorization of information systems within companies is discussed from the perspective of information system customizations.

Relevance: 100.00%

Abstract:

This case study examines the impact of a computer information system as it was being implemented in one Ontario hospital. The attitudes of a cross-section of the hospital staff acted as a barometer to measure their perceptions of the implementation process. With The Mississauga Hospital in the early stages of an extensive computer implementation project, the opportunity existed to identify staff attitudes about the computer system and overall knowledge, and to compare the findings with the literature. The goal of the study was to develop a deeper understanding of the affective domain in the relationship between people and the computer system. Eight exploratory questions shaped the focus of the investigation. Data were collected from three sources: a survey questionnaire, focused interviews, and internal hospital documents. Both quantitative and qualitative data were analyzed. Instrumentation in the study consisted of a survey distributed at two points in time to randomly selected hospital employees who represented all staff levels. Other sources of data included hospital documents and twenty-five focused interviews with staff who replied to both surveys. Leavitt's socio-technical system, with its four subsystems (task, structure, technology, and people), was used to classify staff responses to the research questions. The study findings revealed that the majority of respondents felt positive about using the computer as part of their jobs. No apparent correlations were found between sex, age, or staff group and feelings about using the computer. Differences in attitudes, and changes in attitude, appeared to be related to the element of time. Another difference was found between staff groups in the perception of being involved in the decision-making process. These findings, and other evidence about the role of change agents in this change process, help to emphasize that planning change is one thing; managing the transition is another.

Relevance: 100.00%

Abstract:

Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of economy, science, administration and the internet. In this report we introduce an approach for utilizing evolutionary algorithms to breed fuzzy classifier systems. This approach was exercised as part of a structured procedure by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
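The idea of breeding fuzzy classifiers with an evolutionary algorithm can be illustrated with a minimal sketch. This is a toy example, not the students' actual contest code: a population of fuzzy membership parameters is evolved against classification accuracy on synthetic one-dimensional data, where the true class boundary is at 0.6.

```python
import random

random.seed(42)

# Toy data: label is 1 when x is "large" (here: x > 0.6); the GA must learn this.
data = [(x / 100.0, 1 if x / 100.0 > 0.6 else 0) for x in range(100)]

def membership_large(x, a, b):
    """Fuzzy 'large' membership: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def classify(x, a, b):
    # The rule "IF x IS large THEN class 1" fires when membership >= 0.5.
    return 1 if membership_large(x, a, b) >= 0.5 else 0

def fitness(ind):
    a, b = min(ind), max(ind)  # keep the ramp well-formed
    return sum(1 for x, y in data if classify(x, a, b) == y) / len(data)

def evolve(generations=60, pop_size=20):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = [[g + random.gauss(0, 0.05)    # Gaussian mutation
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("accuracy:", fitness(best))
```

Real fuzzy classifier systems evolve whole rule bases over several input variables, but the loop structure (evaluate, select, mutate) is the same.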

Relevance: 100.00%

Abstract:

With this document, we provide a compilation of in-depth discussions on some of the most current security issues in distributed systems. The six contributions were collected and presented at the 1st Kassel Student Workshop on Security in Distributed Systems (KaSWoSDS’08). We are pleased to present a collection of papers that not only shed light on the theoretical aspects of their topics, but are also accompanied by elaborate practical examples. In Chapter 1, Stephan Opfer discusses viruses, one of the oldest threats to system security. For years there has been an arms race between virus producers and anti-virus software providers, with no end in sight. Stefan Triller demonstrates in Chapter 2 how malicious code can be injected into a target process using a buffer overflow. Websites usually store their data and user information in databases. Like buffer overflows, the possibility of SQL injection attacks targeting such databases is left open by unwary programmers. Stephan Scheuermann gives us a deeper insight into the mechanisms behind such attacks in Chapter 3. Cross-site scripting (XSS) is a method of inserting malicious code into websites viewed by other users; Michael Blumenstein explains this issue in Chapter 4. While code can be injected into other websites via XSS attacks in order to spy on the data of internet users, spoofing subsumes all methods that directly involve taking on a false identity. In Chapter 5, Till Amma shows us different ways in which this can be done and how it is prevented. Last but not least, cryptographic methods are used to encode confidential data in such a way that even if it falls into the wrong hands, the culprits cannot decode it. Over the centuries, many different ciphers have been developed, applied, and finally broken. Ilhan Glogic sketches this history in Chapter 6.
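The SQL injection mechanism discussed in Chapter 3 can be sketched with a minimal, hypothetical example (Python with an in-memory SQLite database; the table and values are made up). It contrasts a vulnerable string-concatenated query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the attacker rewrite the WHERE clause
# into  name = '' OR '1'='1'  , a tautology matching every row.
vulnerable = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
leaked = conn.execute(vulnerable).fetchall()

# Safe: a parameterized query treats the input as a literal value, not as SQL.
safe = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()

print("vulnerable query returned:", leaked)
print("parameterized query returned:", safe)
```

The vulnerable query leaks every secret in the table, while the parameterized query returns nothing, since no user is literally named `' OR '1'='1`.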

Relevance: 100.00%

Abstract:

In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
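The central notion of Formal Concept Analysis, a formal concept of a binary object-attribute context, can be sketched in a few lines. This is a toy illustration with an invented context, not part of the TOSCANA system: a concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent, and vice versa.

```python
from itertools import combinations

# Tiny formal context: which objects have which attributes.
objects = ["duck", "dog", "eagle"]
attributes = ["can_fly", "has_feathers", "four_legs"]
incidence = {
    ("duck", "can_fly"), ("duck", "has_feathers"),
    ("eagle", "can_fly"), ("eagle", "has_feathers"),
    ("dog", "four_legs"),
}

def common_attributes(objs):
    """The derivation A': attributes shared by all objects in objs."""
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objects(attrs):
    """The derivation B': objects having all attributes in attrs."""
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

def formal_concepts():
    """Brute force: close every object subset; (A'', A') is always a concept."""
    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = common_attributes(set(objs))
            extent = common_objects(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

Ordering the concepts by extent inclusion yields the concept lattice that systems like TOSCANA render as nested line diagrams; production implementations use far more efficient algorithms than this exhaustive closure.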

Relevance: 100.00%

Abstract:

Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and impreciseness. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. 
The second case study is concerned with an approach for user activity recognition which serves as a baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
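Dempster's rule of combination, the fusion operator at the heart of the Dempster-Shafer Theory of Evidence used above, can be sketched as follows. The sensors and mass values are invented for illustration; this is not the dissertation's implementation:

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: the user is either walking or sitting.
W, S = frozenset({"walking"}), frozenset({"sitting"})
WS = W | S  # mass on the whole frame models explicit ignorance

# Two sensors both hint at walking, with different degrees of ignorance.
m1 = {W: 0.7, WS: 0.3}
m2 = {W: 0.4, WS: 0.6}

fused = combine(m1, m2)
print(round(fused[W], 3), round(fused[WS], 3))  # 0.82 0.18
```

The ability to place mass on the whole frame (`WS`) rather than forcing it onto single hypotheses is exactly what lets the theory model partial and complete ignorance, as the abstract notes.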

Relevance: 100.00%

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are inverse: while Social Annotation suffers from problems like ambiguity or lack of precision, ontologies were especially designed to eliminate those; the latter, however, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data.
While several techniques to exploit the emergent patterns were proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords, and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which focus then on the discovery of synonymous and ambiguous keywords. Hereby, we assess the usefulness of various clustering techniques. As a prerequisite to induce hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches. In order to complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role for determining keyword meanings. 
From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then have a look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics, and on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
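Measuring semantic relatedness among keywords from their co-occurrence in annotation data, the first step in the pipeline described above, can be sketched with a toy example. The posts and tags are invented, and the actual studies use much larger datasets and a range of measures; the sketch uses plain cosine similarity over co-occurrence vectors:

```python
from math import sqrt
from collections import Counter

# Toy tagging data: each post is the set of keywords a user attached to a resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"java", "programming"},
    {"cooking", "recipe"},
    {"cooking", "recipe", "tutorial"},
]

def cooccurrence_vector(tag):
    """How often `tag` appears together with every other tag."""
    vec = Counter()
    for post in posts:
        if tag in post:
            vec.update(t for t in post if t != tag)
    return vec

def cosine(tag_a, tag_b):
    """Cosine similarity of the two tags' co-occurrence vectors."""
    va, vb = cooccurrence_vector(tag_a), cooccurrence_vector(tag_b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (sqrt(sum(v * v for v in va.values()))
            * sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

print(cosine("python", "java"))     # high: both co-occur with "programming"
print(cosine("python", "cooking"))  # low: hardly any shared context
```

Semantic grounding, as used in the studies, would then compare such similarity scores against a reference taxonomy like WordNet to judge which notion of relatedness a measure captures.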

Relevance: 100.00%

Abstract:

Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations may arise, such as services being unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules allows distinguishing between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property predictions. We consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
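The flavor of heuristic QoS-aware service selection can be sketched as follows: a simple greedy upgrade strategy over made-up candidate services, minimizing total response time under a cost budget. This is an illustrative stand-in, not the algorithms developed in the thesis:

```python
# Each abstract task has candidate services: (name, response_time_ms, cost).
candidates = {
    "payment":  [("payA", 120, 5), ("payB", 80, 9)],
    "shipping": [("shipA", 200, 4), ("shipB", 90, 12)],
    "invoice":  [("invA", 60, 3), ("invB", 50, 8)],
}
BUDGET = 22

def greedy_selection(candidates, budget):
    # Start from the cheapest candidate for every task (always feasible here).
    choice = {t: min(opts, key=lambda s: s[2]) for t, opts in candidates.items()}
    spent = sum(s[2] for s in choice.values())
    improved = True
    while improved:
        improved = False
        best = None
        # Find the single upgrade with the best time-saving per extra cost.
        for task, opts in candidates.items():
            cur = choice[task]
            for cand in opts:
                extra = cand[2] - cur[2]
                saving = cur[1] - cand[1]
                if saving > 0 and spent + extra <= budget:
                    ratio = saving / max(extra, 1e-9)
                    if best is None or ratio > best[0]:
                        best = (ratio, task, cand)
        if best:
            _, task, cand = best
            spent += cand[2] - choice[task][2]
            choice[task] = cand
            improved = True
    return choice, spent

choice, spent = greedy_selection(candidates, BUDGET)
print({t: s[0] for t, s in choice.items()}, "cost:", spent)
```

On this data the heuristic spends the budget on the shipping upgrade, where one cost unit buys the most latency reduction, and keeps the cheap options elsewhere. Exact selection is an NP-hard combinatorial problem, which is why heuristics like those in the thesis matter in practice.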

Relevance: 100.00%

Abstract:

The main objective of this project is to design an application or a database that makes it possible to: manage the different objects that interact in a company, manage the services or products offered for sale, manage the company's invoicing, manage purchases from external companies, control the stock of the different articles, and query the data and obtain listings in order to check whether the company has a profit or a deficit.
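The functionality listed above (articles, sales, purchases, stock control, profit/deficit queries) could be sketched as a minimal relational schema. All table and column names are illustrative, not the project's actual design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE article  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sale     (id INTEGER PRIMARY KEY,
                       article_id INTEGER REFERENCES article(id),
                       qty INTEGER, amount REAL);
CREATE TABLE purchase (id INTEGER PRIMARY KEY,
                       article_id INTEGER REFERENCES article(id),
                       qty INTEGER, amount REAL);
""")

conn.execute("INSERT INTO article (id, name) VALUES (1, 'widget')")
conn.execute("INSERT INTO purchase (article_id, qty, amount) VALUES (1, 10, 50.0)")
conn.execute("INSERT INTO sale (article_id, qty, amount) VALUES (1, 4, 40.0)")

# Stock = units purchased minus units sold.
stock = conn.execute("""
    SELECT (SELECT COALESCE(SUM(qty), 0) FROM purchase WHERE article_id = 1)
         - (SELECT COALESCE(SUM(qty), 0) FROM sale WHERE article_id = 1)
""").fetchone()[0]

# Profit (or deficit, when negative) = sales revenue minus purchase costs.
profit = conn.execute("""
    SELECT (SELECT COALESCE(SUM(amount), 0) FROM sale)
         - (SELECT COALESCE(SUM(amount), 0) FROM purchase)
""").fetchone()[0]
print("stock:", stock, "profit:", profit)
```

Deriving stock and profit from the sale and purchase tables, rather than storing them, keeps the figures consistent by construction, which is one reasonable design choice for the listings the project describes.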

Relevance: 100.00%

Abstract:

This project consists of implementing the PAPOM application (Programa Assistit a la Planificació de procés i producció en Operacions de Mecanitzat; computer-assisted process and production planning for machining operations) in an SME. The program was developed by GREP, a research group of the UdG, and its main objective is to help manage the planning of machining processes. The program's philosophy is to provide each SME with a personalized solution, facilitating the SME's development. This is where this work becomes particularly important, since it was carried out jointly with a company in the sector, Mecanitzats Privat, S.L., in order to adjust the program to reality and to determine which fields and parameters can be adapted to each company. To this end, the modules that can be modified without affecting the internal operation of the software were identified, so as to make the use of the program more practical and agile for each specific workshop. At this point the program was customized for Mecanitzats Privat, S.L., and future lines of work were laid out to make the program even more adaptable, turning personalization into the program's philosophy and value. In addition, within this relationship between the department and the company, at the level of vendor and customer, installation sheets were prepared. These are intended to be a tool that helps present PAPOM to companies, speeding up the process of gathering information about a small part of the company in order to carry out a demonstration tailored to each workshop.

Relevance: 100.00%

Abstract:

The work is based on a theoretical analysis of information systems such as data warehousing, OLAP cubes and business intelligence. Next, an analysis of Colombia's economic sectors is carried out, with particular interest in the food sector, in order to contextualize the company on which this work focuses. An analysis of the Summerwood Corporation success case is included, which provides the justification for the final proposal presented to Dipsa Food, an SME dedicated to the production of non-perishable food located in Bogotá D.C., Colombia, which has great interest in the development of new technologies that provide reliable information for decision-making.

Relevance: 100.00%

Abstract:

Information technologies have become an important factor to take into account in each of the processes carried out along the supply chain. Their implementation and correct use give companies advantages that favor operational performance throughout the chain. The development and application of software have contributed to the integration of the different members of the chain, so that everyone from the suppliers to the end customer perceives benefits in operational performance and in satisfaction, respectively. On the other hand, it is important to consider that implementation does not always produce positive results; on the contrary, the implementation process can be seriously affected by barriers that prevent companies from maximizing the benefits that ICT provide.