821 results for Schengen Information Systems (SIS)
Abstract:
The use of modern object-oriented methods for designing information systems (IS), and for describing the relationships between an IS and the enterprise business processes it automates, makes it necessary to construct a single, complete IS from a set of local models of that system. This approach produces contradictions caused by inconsistencies between the work of individual IS developers and, more importantly, between the viewpoints of individual IS users. Similar contradictions also arise during the operation of an IS at an enterprise, because individual business processes change constantly. It should also be noted that the overwhelming majority of IS today are developed and maintained as collections of separate functional modules, each of which can function as an independent IS. Integrating these modules into a single system, however, can raise many problems: for example, the presence in modules of functions the enterprise never uses as intended, and the complexity of integrating the data and software of modules from different vendors. In most cases these contradictions, and the causes behind them, stem from representing the IS primarily as an equilibrium, stable system. In [1], the IS was instead considered as a dynamic multistable system capable of performing the following actions:
Abstract:
Intrusion detection is a critical component of security information systems. The intrusion detection process attempts to detect malicious attacks by examining various data collected during processes on the protected system. This paper examines anomaly-based intrusion detection based on sequences of system calls. The goal is to construct a model that describes normal or acceptable system activity using the classification-tree approach. The resulting database is then used as a basis for distinguishing intrusive activity from legitimate activity using string-metric algorithms. The major results of the simulation experiments are presented and discussed.
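The detection scheme this abstract describes, profiling normal behavior from windows of system calls and scoring new traces with a string metric, can be sketched roughly as follows. This is a minimal illustration only; the window size, the choice of Levenshtein distance, and all function names are assumptions, not details taken from the paper:

```python
# Sketch: anomaly detection over system-call sequences. A database of
# "normal" call windows is built from clean traces; a new trace is scored
# by the mean minimal edit distance of its windows to that database.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

def windows(trace, k):
    """All contiguous k-length windows of a call trace."""
    return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

def build_normal_db(normal_traces, k=5):
    """Collect the set of call windows observed during normal operation."""
    db = set()
    for t in normal_traces:
        db.update(windows(t, k))
    return db

def anomaly_score(trace, db, k=5):
    """Mean minimal edit distance of each window to the normal database."""
    ws = windows(trace, k)
    if not ws or not db:
        return 0.0
    return sum(min(levenshtein(w, n) for n in db) for w in ws) / len(ws)
```

A trace whose windows all appear in the database scores 0; the score grows with the number of edits needed to turn its windows into known-good ones, so a threshold on the score separates intrusive from legitimate activity.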
Abstract:
Никола Вълчанов, Тодорка Терзиева, Владимир Шкуртов, Антон Илиев - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various domains such as accounting, e-learning/testing, simulation environments, and so on. They work with computational libraries that are specific to the scope of the system. Yet even when such systems are polished and work flawlessly, they become obsolete unless they are maintained. In this work we describe a mechanism that uses computational libraries dynamically and decides at run time (intelligently or interactively) how and when to use them. The aim of this article is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns to ensure extensibility and reduce complexity.
Abstract:
In many e-commerce Web sites, product recommendation is essential to improve user experience and boost sales. Most existing product recommender systems rely on historical transaction records or Web-site-browsing history of consumers in order to accurately predict online users' preferences for product recommendation. As such, they are constrained by the limited information available on specific e-commerce Web sites. With the prolific use of social media platforms, it now becomes possible to extract product demographics from online product reviews and from social networks built from microblogs. Moreover, users' public profiles available on social media often reveal demographic attributes such as age, gender, and education. In this paper, we propose to leverage the demographic information of both products and users extracted from social media for product recommendation. Specifically, we frame recommendation as a learning-to-rank problem which takes as input the features derived from both product and user demographics. An ensemble method based on gradient-boosting regression trees is extended to make it suitable for our recommendation task. We have conducted extensive experiments to obtain both quantitative and qualitative evaluation results. Moreover, we have also conducted a user study to gauge the performance of our proposed recommender system in a real-world deployment. All the results show that our system generates recommendations that match users' preferences more closely than the competitive baselines.
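The ranking pipeline outlined above can be illustrated with a deliberately tiny stand-in: gradient boosting over depth-1 regression trees (stumps), used to score candidate products pointwise and then sort them. Everything here, the feature layout mixing user and product demographics, the hyperparameters, and the function names, is illustrative rather than taken from the paper:

```python
# Sketch: pointwise learning-to-rank with gradient-boosted stumps.

class Stump:
    """Depth-1 regression tree: split one feature at one threshold."""
    def fit(self, X, y):
        best = None
        for f in range(len(X[0])):
            for t in sorted({row[f] for row in X}):
                left = [y[i] for i, r in enumerate(X) if r[f] <= t]
                right = [y[i] for i, r in enumerate(X) if r[f] > t]
                if not left or not right:
                    continue
                lm, rm = sum(left) / len(left), sum(right) / len(right)
                err = (sum((v - lm) ** 2 for v in left) +
                       sum((v - rm) ** 2 for v in right))
                if best is None or err < best[0]:
                    best = (err, f, t, lm, rm)
        _, self.f, self.t, self.lm, self.rm = best
        return self

    def predict(self, row):
        return self.lm if row[self.f] <= self.t else self.rm

def boost(X, y, rounds=20, lr=0.3):
    """Fit stumps to residuals; return an additive scoring function."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = Stump().fit(X, resid)
        stumps.append(s)
        pred = [p + lr * s.predict(r) for p, r in zip(pred, X)]
    return lambda row: sum(lr * s.predict(row) for s in stumps)

def rank(products, score):
    """Order (name, features) candidates by descending predicted score."""
    return sorted(products, key=lambda p: -score(p[1]))
```

In the paper's setting the training target would be a relevance signal and the feature rows would concatenate user demographics (age, gender, education) with product demographics mined from reviews and microblogs; a production system would use a mature GBRT library rather than hand-rolled stumps.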
Abstract:
In the agrifood sector, the explosive increase in information about environmental sustainability, often in uncoordinated information systems, has created a new form of ignorance ('meta-ignorance') that diminishes the effectiveness of information on decision-makers. Flows of information are governed by informal and formal social arrangements that we can collectively call Informational Institutions. In this paper, we have reviewed the recent literature on such institutions. From the perspectives of information theory and new institutional economics, current informational institutions are increasing the information entropy of communications concerning environmental sustainability and stakeholders' transaction costs of using relevant information. In our view this reduces the effectiveness of informational governance. Future research on informational governance should explicitly address these aspects.
Abstract:
This essay summarizes the results of a questionnaire survey carried out by the Department of Logistics and Supply Chain Management at Corvinus University of Budapest (BCE). The aim of the survey was to analyze and describe current Hungarian company practice regarding the IT support of logistics processes, and in particular of distribution logistics, along with the development directions expected in this area over the next two to three years. The survey systematically covered every subsystem of the logistics information system, examined the prevalence of different identification solutions, the established practice in using enterprise resource planning systems and their individual modules, as well as the IT support of strategic logistics decisions and the communication techniques in use. Overall, we conclude that logistics information systems at Hungarian companies are at a moderate level of development; it is important to note, however, that the SME sector lags significantly behind in this area as well. This naturally also means that extending the use of IT tools could still yield considerable performance improvement.
Abstract:
The concept of cloud computing entered public awareness a few years ago and now occupies a growing place both in the professional and scientific literature and in IT practice. This new IT technology is driving the standardization and spread of ERP systems connected to cloud computing services; the cloud has a profound impact on business information systems, especially on ERP systems. In this paper, the authors give an overview of the current state of cloud computing, of end-user expectations of data processing systems operating in the cloud (notably ERP systems), and of early application experience, drawn mainly from Germany. They discuss separately the new selection objectives and criteria for ERP systems that have emerged because of the special possibilities of the cloud environment.
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of this thesis include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for resolving semantic heterogeneity by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, acting as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
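The ontology-mediated matching of contribution (iii) can be pictured with a toy sketch: each schema attribute is mapped to a concept in a shared ontology (here reduced to a plain synonym table, which is purely illustrative), and attributes from two component schemas that resolve to the same concept are proposed as semantically related. The concept names and surface forms below are invented for the example:

```python
# Sketch: proposing semantic relations between two component schemas
# via a shared ontology of concepts and their known surface forms.

ONTOLOGY = {  # concept -> attribute spellings used by component schemas
    "customer": {"customer", "client", "cust"},
    "price":    {"price", "cost", "amount"},
}

def concept_of(attr):
    """Resolve an attribute name to an ontology concept, if any."""
    a = attr.lower()
    for concept, forms in ONTOLOGY.items():
        if a in forms:
            return concept
    return None

def propose_relations(schema_a, schema_b):
    """Pairs of attributes from the two schemas sharing a concept."""
    pairs = []
    for x in schema_a:
        for y in schema_b:
            c = concept_of(x)
            if c is not None and c == concept_of(y):
                pairs.append((x, y, c))
    return pairs
```

The real technique is semi-automated: candidate pairs like these would be surfaced for a human integrator to confirm before they enter the knowledge base that drives global view definition and query processing.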
Abstract:
Online learning systems (OLS) have become center stage for corporations and educational institutions as a competitive tool in the knowledge economy. The satisfaction construct has received extensive coverage in the information systems literature as an indicator of effectiveness, but has been criticized for lack of validity; the value construct, by contrast, has been largely ignored, although it has a long history in psychology, sociology, and behavioral science. The purpose of this dissertation is to investigate the value and satisfaction constructs in the context of OLS, and their relationship, as perceived by learners, as an indicator of OLS effectiveness. First, a qualitative phase was employed to gather OLS values from learners' focus groups, followed by a pilot phase to refine a proposed instrument, and a main phase to validate the survey. Responses were received from 75 students in four focus groups, 141 in the pilot, and 207 in the main survey. Extensive data cleaning and exploratory factor analysis were performed to identify factors of learners' perceived value and satisfaction with OLS. Then, Value-Satisfaction grids and the Learners' Value Index of Satisfaction (LeVIS) were developed as benchmarking tools for OLS. Moreover, Multicriteria Decision Analysis (MCDA) techniques were employed to impute value from satisfaction scores in order to reduce survey response time. The results yielded four satisfaction factors and four value factors with high reliability (Cronbach's α). Moreover, value and satisfaction were found to have low linear and nonlinear correlations, indicating that they are two distinct, uncorrelated constructs, consistent with the literature. The Value-Satisfaction grids and the LeVIS index indicated relatively high effectiveness for technology and support characteristics, relatively low effectiveness for professor characteristics, and average effectiveness for course and learner characteristics. The main contributions of this study include identifying, defining, and articulating the relationship between the value and satisfaction constructs as an assessment of users' implied IS effectiveness, as well as assessing the accuracy of MCDA procedures in predicting value scores, thus halving the survey questionnaire size.
Abstract:
Security remains a top priority for organizations as their information systems continue to be plagued by security breaches. This dissertation developed a unique approach to assessing the security risks associated with information systems, based on a dynamic neural network architecture. The risks considered encompass the production computing environment and the client machine environment, and are established as metrics that define how susceptible each of the computing environments is to security breaches. The merit of the approach lies in the design and implementation of artificial neural networks to assess the risks in the computing and client machine environments. The datasets used in the implementation and validation of the model were obtained from business organizations using a web survey tool hosted by Microsoft. This site was designed to host anonymous surveys devised specifically for this dissertation; Microsoft customers could log in to the website and submit their responses to the questionnaire. This work asserts that security in information systems does not depend exclusively on technology but rather on the triumvirate of people, process, and technology. The questionnaire, and consequently the developed neural network architecture, accounted for all three key factors that impact information systems security. As part of the study, a methodology for developing, training, and validating such a predictive model was devised and successfully deployed. This methodology prescribes how to determine the optimal topology, activation function, and associated parameters for this security-based scenario. The assessment of the effects of security breaches on information systems has traditionally been post-mortem, whereas this dissertation provides a predictive solution with which organizations can proactively determine how susceptible their environments are to security breaches.
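As a drastically simplified stand-in for the dissertation's dynamic neural architecture, the principle of mapping questionnaire responses to a susceptibility metric can be sketched with a single sigmoid unit trained by gradient descent. The feature layout (people / process / technology scores on a 0-1 scale), the hyperparameters, and the function names are all assumptions for illustration, not the author's model:

```python
# Sketch: one sigmoid unit trained on questionnaire-style features to
# output a breach-susceptibility score in [0, 1].
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, epochs=2000, lr=0.5):
    """Stochastic gradient descent on logistic loss; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for row, target in zip(X, y):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b)
            err = p - target  # dLoss/dz for the logistic loss
            w = [wi - lr * err * xi for wi, xi in zip(w, row)]
            b -= lr * err
    return w, b

def risk(row, w, b):
    """Predicted susceptibility to a security breach: 0 = low, 1 = high."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b)
```

The dissertation's actual contribution is in choosing the topology, activation functions, and parameters of a multi-layer dynamic network; this sketch only shows the input-to-metric mapping idea in its smallest form.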
Abstract:
This research presents several components encompassing the objective of data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems must be addressed in developing an efficient and scalable solution. Part of the research studies the patterns of geographical raster data processing and proposes algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, to achieve high data availability and Quality of Service (QoS) under distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach is proposed for mosaicking digital images of different temporal and spatial characteristics into tiles. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, to reduce storage redundancy and to utilize available computing and storage resources more efficiently. Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet sit idle or underutilized; the proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources in a GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
Abstract:
The Bahamas is a small island nation that is dealing with the problem of freshwater shortage. All of the country’s freshwater is contained in shallow lens aquifers that are recharged solely by rainfall. The country has been struggling to meet the water demands by employing a combination of over-pumping of aquifers, transport of water by barge between islands, and desalination of sea water. In recent decades, new development on New Providence, where the capital city of Nassau is located, has created a large area of impervious surfaces and thereby a substantial amount of runoff with the result that several of the aquifers are not being recharged. A geodatabase was assembled to assess and estimate the quantity of runoff from these impervious surfaces and potential recharge locations were identified using a combination of Geographic Information Systems (GIS) and remote sensing. This study showed that runoff from impervious surfaces in New Providence represents a large freshwater resource that could potentially be used to recharge the lens aquifers on New Providence.
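The runoff quantity such a geodatabase estimates can be approximated with a rational-method style calculation: runoff volume equals a runoff coefficient times rainfall depth times impervious area. The sketch below is illustrative only; the coefficient, function names, and figures are assumptions, not the study's data:

```python
# Sketch: estimating harvestable runoff from impervious surfaces.

def runoff_volume_m3(area_m2, rainfall_mm, coefficient=0.9):
    """Runoff volume (cubic metres) from one rainfall depth.

    A coefficient near 0.9 is a common textbook assumption for
    impervious surfaces (roofs, roads); pervious ground would be
    far lower because most water infiltrates.
    """
    return coefficient * (rainfall_mm / 1000.0) * area_m2

def annual_recharge_potential(surfaces, annual_rainfall_mm):
    """Total runoff over a list of (name, area_m2) impervious surfaces."""
    return sum(runoff_volume_m3(a, annual_rainfall_mm) for _, a in surfaces)
```

In the study's workflow, the surface areas would come from GIS/remote-sensing classification of New Providence's impervious cover, and the resulting volumes would be compared against aquifer recharge needs at the candidate locations.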
Abstract:
Although there are more than 7,000 properties using lodging yield management systems (LYMSs), both practitioners and researchers alike have found it difficult to measure their success. Considerable research was performed in the 1980s to develop success measures for information systems in general. In this work the author develops success measures specifically for LYMSs.
Abstract:
Many systems and applications continuously produce events. These events record the status of the system and trace its behavior; by examining them, system administrators can check for potential problems. If the temporal dynamics of the systems are investigated further, underlying patterns can be discovered, and the uncovered knowledge can be leveraged to predict future system behavior or to mitigate potential risks. System administrators can also use the temporal patterns to set up event management rules that make the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite recent advances in data mining, however, its application to system event mining is still at a rudimentary stage. Most works still focus on episode mining or frequent pattern discovery. These methods cannot provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective, and they offer little actionable knowledge to help system administrators better manage their systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) providing concise yet comprehensive summaries of the running status of the systems; (2) making the systems more intelligent and autonomous; (3) effectively detecting abnormal system behavior. Owing to the richness of the event logs, all these directions can be pursued in a data-driven manner; in this way, the robustness of the systems can be enhanced and the goal of autonomous management approached.
This dissertation focuses on the foregoing directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed, including event, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, experimental evaluations are presented to demonstrate the effectiveness and efficacy of the corresponding solutions.
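Of the three topics, streaming anomaly detection admits the smallest self-contained illustration: flag an observation (say, an event count per interval) whose z-score against a sliding window of recent history exceeds a threshold. The window size, threshold, and class name are illustrative assumptions, not the dissertation's method:

```python
# Sketch: streaming anomaly detection via a sliding-window z-score.
from collections import deque
import math

class StreamingDetector:
    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent history only
        self.threshold = threshold

    def observe(self, x):
        """Return True if x is anomalous relative to the recent window."""
        anomalous = False
        if len(self.window) >= 2:
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.window.append(x)  # the stream moves on regardless
        return anomalous
```

Because the state is a fixed-size window of summary statistics, the detector runs in constant memory per event stream, which is the property that matters when event logs arrive continuously.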