971 results for data management planning
Abstract:
Business intelligence, knowledge-based practice and knowledge management will play a significant role in rescue departments' future decisions about services. For rescue departments, which operate as municipal enterprises and separate balance-sheet units of the public rescue service, the coming challenge lies in the strategic management and planning of efficient and effective services. Deciding on these matters is a critical stage for success. Decision-making at different levels needs to be supported by analysed information channelled from operations and services. Effectiveness and quality driven by customer needs are emphasised. Business intelligence and knowledge-based practice challenge the rescue department's management system. Management capability and staff competence are at the core of knowledge-based practice and information management. What distinguishes systematic business intelligence and knowledge-based practice from a civil servant's traditional use of information is the comprehensiveness and systematic nature of the concept across all knowledge-related activity. This covers information systems, indicators, processes, strategy plans, documents, reporting, development and research. Business intelligence and knowledge management link everything together, forming an interdependent, unified system and a holistic understanding. My study is a qualitative study in which data collection and analysis were carried out with mutually supporting research approaches. The methodology rests on theory-driven systematic analysis with selected elements of content analysis. Data and method triangulation were used. The research material was collected through thematic interviews with experts of the selected target rescue departments at the service decision-making and planning level, from management teams and boards. For the interviews, the researcher studied the documentation that defines the target rescue departments' services, such as service level decisions and risk analyses. The rescue departments of the Helsinki metropolitan area were selected for data collection: the Helsinki City Rescue Department and the rescue departments of Itä-Uusimaa, Keski-Uusimaa and Länsi-Uusimaa. According to the results, the central obstacles to business intelligence in rescue departments are problems of management, organisational resistance to change and the lack of a knowledge base for decision-making. These manifest themselves as shortcomings in strategic management and as problems in measuring effectiveness and refining information. A central connecting and linking knowledge factor is not recognised or found. According to the results, knowledge-oriented business intelligence process work could offer possibilities for filling this vacuum. Rescue departments are left with a future choice of the direction in which they want to proceed with information management, knowledge management and knowledge-based practice. This affects the development of, and the goals for, the key management and information systems that support decisions about services, the documents that compile and create knowledge, and a flexible organisational structure. Moving towards a knowledge process, process-like management of knowledge and systematic information management appears, according to the results, to be a promising opportunity. At the same time it challenges rescue departments to make a major cultural change and requires strategic planning to accept in advance the information produced by new effectiveness indicators. This demands competence, mutual understanding, acceptance of the need for change and placing the customer at the centre of effectiveness from rescue departments' management and personnel.
Abstract:
In a market where companies of similar size and resources compete, it is challenging to gain any advantage over others. To stay afloat, a company needs the capability to perform with fewer resources and yet provide better service. Hence the development of efficient processes that can cut costs and improve performance is crucial. As the business expands, processes become complicated and large amounts of data need to be managed and made available on request. Companies use different tools to store and manage data, which facilitates better production and transactions. In the modern business world the most widely used tool for this purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is to study how competitive advantage can be achieved by implementing a proprietary ERP system in a company: an ERP system that is created in-house and tailor-made to match and align with business needs and processes. The market is full of ERP software, but choosing the right one is a big challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it and following up is a long and expensive journey for companies. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. This research uses a case company, and the author tries to identify and analyse why the organization in question decided to pursue the development of a proprietary ERP system, how it was implemented and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important in order to make the right decisions at the strategic level and implement them at the operational level. The duration of the project in the case company has been longer than anticipated; however, it has been reported that projects in which a ready product is bought from a vendor are also delayed and completed over budget. In general, the implementation of the proprietary ERP in the case company has been successful, both in terms of business performance figures and of the usability of the system as judged by employees. In terms of future research, a study that statistically calculates the ROI of both approaches, buying a ready product and creating one's own ERP, would be beneficial.
Abstract:
This research seeks to find out what benefits employees expect the organization of data governance to bring to an organization, and how it benefits the implementation of automated marketing capabilities. The quality and usability of data are crucial for organizations to meet various business needs. Organizations have more data and technology available that can be utilized, for example, in automated marketing. Data governance addresses the organization of decision rights and accountabilities for the management of an organization's data assets. Automated marketing means sending the right message, to the right person, at the right time, automatically. The research is a single case study conducted in a Finnish ICT company. The case company was starting to organize data governance and to implement automated marketing capabilities at the time of the research. The empirical material consists of interviews with employees of the case company. Content analysis is used to interpret the interviews in order to find answers to the research questions. The theoretical framework of the research is derived from the morphology of data governance. The findings of the research indicate that the employees expect the organization of data governance, among other things, to improve customer experience, to improve sales, to provide the ability to identify an individual customer's life situation, to ensure that data is handled in accordance with regulations, and to improve operational efficiency. The organization of data governance is expected to solve problems in customer data quality that are currently hindering the implementation of automated marketing capabilities.
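To make the "right message, right person, right time" notion concrete, the following is a minimal, hypothetical sketch (not taken from the thesis) in which a simple data-quality and consent gate, standing in for data governance rules, decides whether an automated campaign message may be sent; all names, fields and rules are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: a data-quality/consent gate in front of an
# automated marketing rule. Names and rules are illustrative only.

@dataclass
class Customer:
    email: str
    life_situation: str        # e.g. "new_parent", "moved_recently"
    marketing_consent: bool    # regulatory requirement

def is_fit_for_use(c: Customer) -> bool:
    """Only act on records that pass basic quality and compliance checks."""
    return bool(c.email) and c.marketing_consent and c.life_situation != ""

def pick_message(c: Customer) -> Optional[str]:
    """Trivial rule table mapping a life situation to a campaign message."""
    if not is_fit_for_use(c):
        return None            # poor or non-compliant data blocks automation
    campaigns = {
        "new_parent": "family broadband offer",
        "moved_recently": "relocation service offer",
    }
    return campaigns.get(c.life_situation)

print(pick_message(Customer("a@example.com", "moved_recently", True)))
```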
Abstract:
LiDAR is an advanced remote sensing technology with many applications, including forest inventory. The most common type is ALS (airborne laser scanning). The method is successfully utilized in many developed markets, where it is replacing traditional forest inventory methods. However, it is innovative for the Russian market, where traditional field inventory dominates. ArboLiDAR is a forest inventory solution developed by Arbonaut Ltd that combines LiDAR, color infrared imagery, GPS ground control plots and field sample plots. This study is an industrial market research on LiDAR technology in Russia focused on customer needs. The Russian forestry market is very attractive because of its large growing stock volumes. It underwent drastic changes in 2006, but it is still in a transitional stage. There are several types of forest inventory, with both public and private funding. Private forestry enterprises basically need forest inventory in two cases: when making coupe demarcation before timber harvesting, and as part of forest management planning, which is supposed to be done every ten years on the whole leased territory. The study covered 14 companies in total, including private forestry companies with timber harvesting activities, private forest inventory providers, state subordinate companies and a forestry software developer. The research strategy is a multiple case study with semi-structured interviews as the main data collection technique. The study focuses on North-West Russia, as it is the most developed Russian region in forestry. The research applies the Voice of the Customer (VOC) concept to elicit the customer needs of Russian forestry actors and discovers how these needs are met. It studies the forest inventory methods currently applied in Russia and proposes a model for method comparison based on the Multi-criteria decision making (MCDM) approach, mainly on the Analytical Hierarchy Process (AHP). Required product attributes are classified in accordance with the Kano model. The answer about the suitability of LiDAR technology is ambiguous, since many details should be taken into account.
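As an illustration of the AHP step mentioned above, the following sketch shows how a pairwise comparison matrix is turned into criteria weights and checked for consistency; the criteria and judgement values are invented for the example and are not the study's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria an inventory
# buyer might weigh (cost, accuracy, speed); values follow Saaty's 1-9 scale.
A = np.array([
    [1.0, 1/3, 2.0],   # cost vs (cost, accuracy, speed)
    [3.0, 1.0, 4.0],   # accuracy
    [1/2, 1/4, 1.0],   # speed
])

# Principal-eigenvector method: the normalized eigenvector belonging to the
# largest eigenvalue gives the criteria weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency ratio (CR) checks whether the judgements are coherent;
# CR < 0.1 is the usual acceptance threshold (random index RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "CR:", round(cr, 3))
```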
Abstract:
The aim of the dissertation was to investigate the use of computers to support learning and tutoring in self-directed learning in continuing education. In a self-study course of a part-time degree programme that introduces the data management of office data processing, previously run conventionally, the course materials were digitised, tutoring was switched to online-based learning support, and a learning concept matched to the new learning media was developed. This new learning concept was evaluated with respect to motivation and the acceptance of digital learning media. The evaluation consisted of two parts: 1. a formative evaluation accompanying the development process to optimise the developed learning software and the introduced learning concept, 2. a summative evaluation of the developments, both qualitative and quantitative. A central aspect of the study was the free choice of learning media (multimedia learning software or the conventional accompanying book) and of communication media (an online learning platform or the previously used communication channels: e-mail, telephone and face-to-face meetings). This two-track approach allowed a differentiated comparison of conventional and innovative learning arrangements. The combination of qualitative and quantitative approaches, which placed the participants' subjective attitudes at the centre of attention, permitted a perspective on the benefits and effects of new media in learning processes that made it possible to question and rediscuss some interpretations regarded as established in the literature. For example, by categorising participant behaviour as online-typical or not online-typical, the cause-and-effect relationships behind the disturbances reported in many studies of online seminars could be clarified. In the courses examined there was, for instance, no dependence of the drop-out rate on the forms of learning and tutoring, and this rate could only be influenced marginally by the new learning concept. The free choice of learning media led to targeted use of the multimedia learning software, which increased the acceptance of this learning medium. In contrast, learners' acceptance of learning support via an online learning platform was widely spread, from high to very low. Regardless of this, the online tutoring was not sufficient in any of the course runs, so face-to-face meetings were requested. With regard to motivation, the effect of the digital media was lower than expected. Overall, the results offer recommendations for planning and running computer-supported, online-tutored courses.
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer protection against intrusion attempts into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the huge amounts of network data and the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volume of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially further developed. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for initialising the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously updated. In addition, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, because of the concept drift phenomenon, the network traffic changes constantly, which leads to the generation of non-stationary network data in real time. This phenomenon is better controlled by the update model. The EGHSOM model can detect new anomalies effectively and the NNB model adapts optimally to changes in the network data. In the experimental evaluations the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 Gb network link and evaluated online in real time. OptiFilter successfully converted the huge amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (such as overall accuracy), detecting unknown connections, and developing a real-time intrusion detection model.
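The classification-confidence margin can be pictured with a small sketch; this is not the dissertation's implementation, only an assumed reading in which a connection vector is assigned to its best-matching unit and flagged as unknown when the margin between the two closest units falls below a threshold. The prototype values and the threshold are illustrative.

```python
import numpy as np

def classify_with_margin(x, prototypes, labels, margin_threshold=0.2):
    """Assign a connection vector x to the nearest prototype unit (BMU).

    If the relative margin between the best and second-best matching units
    falls below margin_threshold, the connection is flagged as 'unknown'
    and would be handed over to a normal-network-behaviour check.
    """
    d = np.linalg.norm(prototypes - x, axis=1)      # distances to all units
    best, second = np.argsort(d)[:2]
    margin = (d[second] - d[best]) / (d[second] + 1e-12)
    if margin < margin_threshold:
        return "unknown"                            # low classification confidence
    return labels[best]

# Toy example: two prototype units learned from normal and attack traffic.
prototypes = np.array([[0.1, 0.2, 0.0],             # 'normal' unit
                       [0.9, 0.8, 1.0]])            # 'attack' unit
labels = ["normal", "attack"]
x = np.array([0.15, 0.25, 0.05])                    # incoming connection vector
print(classify_with_margin(x, prototypes, labels))  # -> 'normal'
```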
Abstract:
This paper describes a case study of an electronic data management system developed in-house by the Facilities Management Directorate (FMD) of an educational institution in the UK. The FMD Maintenance and Business Services department is responsible for the maintenance of the built estate owned by the university. The department needs a clear definition of the type of work undertaken and of the administration that enables any maintenance work to be carried out. These include the management of resources, budget, cash flow and the workflow of reactive, preventative and planned maintenance of the campus. In order to be more efficient in supporting the business process, the FMD decided to move from a paper-based information system to an electronic system, WREN. Some of the main advantages of WREN are that it is tailor-made to fit the purpose of the users, it is cost-effective when modifications to the system are needed, and the database can also be used as a knowledge management tool. There is a trade-off: as WREN is tailored to the specific requirements of the FMD, it may not be easy to implement within a different institution without extensive modifications. However, WREN not only allows the FMD to carry out the tasks of maintaining and looking after the built estate of the university, but has also achieved its aim of minimising costs and maximising efficiency.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
Abstract:
The Environmental Data Abstraction Library (EDAL) provides a modular data management library for bringing new and diverse datatypes together for visualisation within numerous software packages, including the ncWMS viewing service, which already has very wide international uptake. The structure of EDAL is presented along with examples of its use to compare satellite, model and in situ data types within the same visualisation framework. We emphasize the value of this capability for cross-calibration of datasets and evaluation of model products against observations, including preparation for data assimilation.
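As a rough illustration of what such a common abstraction buys, the following sketch (purely hypothetical, not the EDAL API) exposes an in situ dataset and a model dataset through one shared interface so they can be extracted and compared over the same region.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    lon: float
    lat: float
    value: float                      # e.g. sea surface temperature in kelvin

class Dataset(ABC):
    @abstractmethod
    def extract(self, bbox: Tuple[float, float, float, float]) -> List[Sample]:
        """Return samples inside (lon_min, lat_min, lon_max, lat_max)."""

class InSituDataset(Dataset):
    def __init__(self, samples: List[Sample]):
        self.samples = samples
    def extract(self, bbox):
        lon0, lat0, lon1, lat1 = bbox
        return [s for s in self.samples
                if lon0 <= s.lon <= lon1 and lat0 <= s.lat <= lat1]

class ModelDataset(Dataset):
    def __init__(self, grid: List[Sample]):
        self.grid = grid
    def extract(self, bbox):
        lon0, lat0, lon1, lat1 = bbox
        return [s for s in self.grid
                if lon0 <= s.lon <= lon1 and lat0 <= s.lat <= lat1]

def compare(a: Dataset, b: Dataset, bbox) -> float:
    """Naive cross-comparison: difference of regional means."""
    va = [s.value for s in a.extract(bbox)]
    vb = [s.value for s in b.extract(bbox)]
    return sum(va) / len(va) - sum(vb) / len(vb)

buoys = InSituDataset([Sample(10.0, 60.0, 285.2), Sample(11.0, 61.0, 284.9)])
model = ModelDataset([Sample(10.5, 60.5, 286.0)])
print(compare(buoys, model, (9.0, 59.0, 12.0, 62.0)))   # in situ mean minus model mean
```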
Abstract:
ISO19156 Observations and Measurements (O&M) provides a standardised framework for organising information about the collection of information about the environment. Here we describe the implementation of a specialisation of O&M for environmental data, the Metadata Objects for Linking Environmental Sciences (MOLES3). MOLES3 provides support for organising information about data, and for user navigation around data holdings. The implementation described here, “CEDA-MOLES”, also supports data management functions for the Centre for Environmental Data Archival, CEDA. The previous iteration of MOLES (MOLES2) saw active use over five years, being replaced by CEDA-MOLES in late 2014. During that period important lessons were learnt both about the information needed, as well as how to design and maintain the necessary information systems. In this paper we review the problems encountered in MOLES2; how and why CEDA-MOLES was developed and engineered; the migration of information holdings from MOLES2 to CEDA-MOLES; and, finally, provide an early assessment of MOLES3 (as implemented in CEDA-MOLES) and its limitations. Key drivers for the MOLES3 development included the necessity for improved data provenance, for further structured information to support ISO19115 discovery metadata export (for EU INSPIRE compliance), and to provide appropriate fixed landing pages for Digital Object Identifiers (DOIs) in the presence of evolving datasets. Key lessons learned included the importance of minimising information structure in free text fields, and the necessity to support as much agility in the information infrastructure as possible without compromising on maintainability both by those using the systems internally and externally (e.g. citing in to the information infrastructure), and those responsible for the systems themselves. The migration itself needed to ensure continuity of service and traceability of archived assets.
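For orientation only, the following is an illustrative sketch (not the CEDA-MOLES schema) of an O&M-style observation record carrying the provenance and DOI landing-page fields that the abstract identifies as key drivers for MOLES3.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative O&M-style record: an observation links a procedure, a feature
# of interest and a result, plus provenance and a fixed DOI landing page.
# Field names and example values are assumptions, not the CEDA-MOLES model.

@dataclass
class Observation:
    obs_id: str
    procedure: str                 # instrument or simulation that produced the data
    feature_of_interest: str       # what was observed
    result_uri: str                # where the archived data asset lives
    doi: Optional[str] = None      # fixed landing page even as the dataset evolves
    provenance: List[str] = field(default_factory=list)  # processing history

    def add_provenance(self, step: str) -> None:
        self.provenance.append(step)

obs = Observation(
    obs_id="obs-0001",
    procedure="example reanalysis model",
    feature_of_interest="2 m air temperature",
    result_uri="https://example.org/archive/obs-0001",
)
obs.add_provenance("regridded to 1x1 degree")
print(obs)
```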
Abstract:
Recently, two international standards organizations, ISO and OGC, have carried out standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data, and as a result river geospatial data has its own model. When the data are shared through an open interface among heterogeneous GIS databases, differences between models result in the loss of information. In this study a plan was suggested both to respond to these changes in the information environment and to provide a future Smart River-based river information service, by understanding the current state of the river geospatial data model and by improving and redesigning the database. Primary and foreign keys, which can distinguish attribute information and entity linkages, were redefined to increase usability. The construction of attribute information in the database and the entity relationship diagram were newly defined in order to redesign the linkages among tables from the perspective of a river standard database. In addition, this study was undertaken to expand the current supplier-oriented operating system into a demand-oriented operating system by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
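A minimal sketch of the key redesign described above, under the assumption of illustrative table and column names rather than the study's actual schema: attribute tables are tied to river entities through explicit primary and foreign keys so that joins across sources do not lose information.

```python
import sqlite3

# Hypothetical river-standard tables: each attribute table references its
# parent entity through an explicit foreign key. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE river (
    river_id   TEXT PRIMARY KEY,     -- stable identifier shared across systems
    name       TEXT NOT NULL
);
CREATE TABLE river_reach (
    reach_id   TEXT PRIMARY KEY,
    river_id   TEXT NOT NULL REFERENCES river(river_id),
    length_km  REAL
);
CREATE TABLE facility (
    facility_id TEXT PRIMARY KEY,
    reach_id    TEXT NOT NULL REFERENCES river_reach(reach_id),
    kind        TEXT                 -- e.g. levee, weir, gauge station
);
""")
conn.execute("INSERT INTO river VALUES ('R001', 'Example River')")
conn.execute("INSERT INTO river_reach VALUES ('R001-01', 'R001', 12.5)")
conn.execute("INSERT INTO facility VALUES ('F100', 'R001-01', 'gauge station')")

# Attribute information can now be reached from the entity without ambiguity.
row = conn.execute("""
    SELECT r.name, f.kind FROM facility f
    JOIN river_reach rr ON rr.reach_id = f.reach_id
    JOIN river r ON r.river_id = rr.river_id
""").fetchone()
print(row)   # ('Example River', 'gauge station')
```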
Abstract:
Recent developments in information technology have brought major advances in the management of transport systems. Around the world there are already several technologies, tested and in operation, that assist in controlling the operation of public transport by bus. These systems generate information that is useful for the planning and operation of transport systems. In Brazil, investments in advanced technologies are still very modest and are focused on equipment that helps control fare evasion. However, there is growing interest on the part of managing agencies and operators in implementing automated systems to help improve the quality of transport systems and to increase the productivity of the sector. This work brings into discussion the advanced systems developed for public transport, with the objective of defining the profile of the advanced technology that meets the needs of Brazilian managers and operators. The work employed a planning tool called Quality Function Deployment (QFD), widely used to guide manufacturing and product processes, to rank the attributes considered important for the management of urban public transport in Brazil. The results indicate strong interest in deploying advanced technology to help monitor travel times and time lost during public transport operation. This technology is also regarded as capable of improving the performance of bus lines by maintaining regularity and punctuality. In addition, intelligent systems that provide accurate information to users help improve the image of the bus mode.
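As a worked illustration of the QFD prioritization described above (with invented weights and technologies, not the study's data), customer requirements are weighted, related to candidate technologies on the usual 0/1/3/9 scale, and the column totals rank the technologies.

```python
import numpy as np

# Hypothetical House of Quality calculation; all numbers are illustrative.
requirements = ["monitor travel time", "regularity/punctuality", "user information"]
importance = np.array([5, 4, 3])             # assumed customer importance weights

technologies = ["AVL/GPS tracking", "electronic fare system", "real-time passenger info"]
relationship = np.array([
    #  AVL   fare   RTPI
    [9,     1,     3],   # monitor travel time
    [9,     0,     3],   # regularity / punctuality
    [3,     0,     9],   # user information
])

scores = importance @ relationship            # technical importance totals
for tech, score in sorted(zip(technologies, scores), key=lambda t: -t[1]):
    print(f"{tech}: {score}")
# AVL/GPS tracking scores highest here, mirroring the abstract's finding that
# monitoring travel and lost times is the operators' priority.
```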
Abstract:
This work presents research carried out in the garment manufacturing industry of Greater Natal, whose objective is to show the enterprise and technological management profile of the companies, as well as the use of simultaneous engineering in product development. The research comprises two studies. The first presents the current picture of the companies, synthesized through twelve variables. The second, through fifteen variables, shows the level of use of simultaneous engineering in product development and its scope in relation to integrated management using CAD, PDM and ERP tools (Computer Aided Design, Product Data Management, Enterprise Resource Planning). The integration of these systems aims to reduce cost and product development time. The results indicate that simultaneous engineering is a competitive advantage and makes it possible to shorten the product life cycle, rationalize resources, incorporate a high standard of quality into the process and the product, and customize the product to serve the global market. This work is also intended to contribute to a better understanding of the real situation of the garment companies located in Greater Natal and their role in the economy of the State of Rio Grande do Norte.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Geography - FCT