349 results for Reusable Passwords


Relevance:

10.00%

Publisher:

Abstract:

The demands on contemporary information systems are constantly increasing. In a dynamic business environment, an organization has to be prepared for sudden growth, shrinking or other types of reorganization. Such a change brings the need to adapt the information system that serves the company. Associating access rights to parts of the system with users, groups of users, user roles, etc. is of great importance for defining the different activities in the company and for restricting the access rights of each employee according to his or her status. The mechanisms for access-rights management in a system are taken into account during system design; in most cases they are built into the system. This paper offers an approach to developing a user-rights framework applicable to information systems. The work presents a reusable, extendable mechanism that can be integrated into information systems.
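The abstract gives no implementation details, but the association of rights with users, groups and roles that it describes is commonly realized as a role-based access control (RBAC) structure. A minimal sketch under assumed names (this is not the paper's mechanism):

```python
from dataclasses import dataclass, field

# Minimal role-based access-rights sketch: permissions are attached to roles,
# roles are attached to users, and a check resolves the effective permission
# set. Hypothetical illustration, not the paper's framework.

@dataclass
class Role:
    name: str
    permissions: set[str] = field(default_factory=set)

@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)

class AccessRights:
    def __init__(self):
        self.roles: dict[str, Role] = {}

    def add_role(self, role: Role) -> None:
        self.roles[role.name] = role

    def is_allowed(self, user: User, permission: str) -> bool:
        # A user may act if any of their roles grants the permission.
        return any(permission in self.roles[r].permissions
                   for r in user.roles if r in self.roles)

# Usage: a reorganization only changes role/permission data, not application code.
rights = AccessRights()
rights.add_role(Role("accountant", {"invoice:read", "invoice:create"}))
alice = User("alice", {"accountant"})
print(rights.is_allowed(alice, "invoice:create"))  # True
```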

Relevance:

10.00%

Publisher:

Abstract:

In the digital age, the internet and ICT devices have changed our daily life and routines; we can hardly do without these services and devices anywhere (at work, at home, on holiday, etc.). This can be seen in the tourism sector: digital content has become a key tool in 21st-century tourism, able to adapt the traditional tourist-guide methodology to applications running on novel digital devices. Tourists belong to a new generation, an "ICT generation" that uses innovative tools and new info-media to communicate. A possible direction for tourism development is to use modern ICT systems and devices. Besides participating in classical tours guided by travel guides, individual tourists now have the opportunity to enjoy high-quality ICT-based guided walks built on the knowledge of travel guides. The main idea of the GUIDE@HAND service is to reuse existing tourism content and create new content for advanced mobile devices, in order to give a contemporary answer to traditional tourism-information systems by developing new tourism services based on digital content for innovative mobile applications. The service is based on a new concept of enhancing territorial heritage and values through knowledge, innovation, languages and multilingual solutions, in line with new tourists' "sensitiveness".

Relevance:

10.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013

Relevance:

10.00%

Publisher:

Abstract:

An integrated production–recycling system is investigated. A constant demand can be satisfied by production and recycling. Used items may be bought back and then recycled; products that are not recycled are disposed of. Two types of models are analyzed. The first model examines and minimizes the EOQ-related cost. The second model generalizes the first one by additionally introducing linear waste-disposal, recycling, production and buyback costs. This basic model was examined by the authors in a previous paper; the main result is that a pure strategy (either production or recycling) is optimal. This paper extends the model to take quality into consideration: the quality of the bought-back products is examined. In the former model we assumed that all returned items are serviceable. One can ask the following question: who should control the quality of the returned items? If the supplier examines the quality of the reusable products, then the buyback rate is strictly smaller than one, α<1. If the user does it, then not all returned items are recyclable, i.e. the use rate is smaller than one, δ<1. Which of the two control systems is more cost-advantageous in this case?
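The abstract does not reproduce the cost function, but the structure it describes can be illustrated with a hedged EOQ-style sketch: demand d is split between recycling (share αδ) and production (share 1−αδ), each served in lots with its own setup cost, plus linear buyback and disposal terms. The notation below (s_P, s_R setup costs, h holding cost, c_B, c_W linear buyback and disposal costs) is assumed for illustration and is not the authors' exact model:

```latex
% Illustrative EOQ-style cost per unit time (assumed notation, not the paper's exact model)
C(\alpha,\delta) \;=\;
  \underbrace{\sqrt{2\, s_P\, h\, (1-\alpha\delta)\, d}}_{\text{production lots}}
  \;+\;
  \underbrace{\sqrt{2\, s_R\, h\, \alpha\delta\, d}}_{\text{recycling lots}}
  \;+\; c_B\, \alpha d \;+\; c_W\,(1-\delta)\,\alpha d .
```

Each square-root term is concave in the sourcing share, so a cost of this shape is minimized at a boundary of the feasible share, which is consistent with the abstract's statement that a pure strategy (only production or only recycling) is optimal.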

Relevance:

10.00%

Publisher:

Abstract:

The aim of the paper is to place reverse logistics and reuse within the framework of corporate production planning. Material requirements planning (MRP) systems plan and schedule inventories and the time-phased production and procurement of the required materials and parts. In recent years, research has tried to extend classical MRP systems with reuse. Since newly purchased and reusable materials must be recorded separately in this case, the MRP tables and inventories grow, and determining order quantities becomes harder, leading to more complex lot sizes. The paper presents an EOQ-type reverse-logistics inventory model and its dynamic lot-sizing extension, which can serve as the basis for an order-quantity heuristic that could be built into a production planning and control system such as SAP.
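As an illustration of why separate records for new and reusable items complicate MRP netting, here is a hedged sketch of period-by-period netting that draws on recoverable (returned, serviceable) stock before planning new orders. The names and the lot-for-lot rule are assumptions for illustration, not the paper's heuristic:

```python
# Hedged MRP netting sketch with a reusable-items store: gross requirements are
# first covered from recoverable (returned) stock, then from new stock, and only
# the remainder becomes a planned order (lot-for-lot here for simplicity).

def mrp_with_reuse(gross_requirements, returns, new_on_hand=0, recoverable_on_hand=0):
    planned_orders = []
    for period, gross in enumerate(gross_requirements):
        recoverable_on_hand += returns[period]      # returned items become available
        use_recovered = min(gross, recoverable_on_hand)
        recoverable_on_hand -= use_recovered
        remaining = gross - use_recovered
        use_new = min(remaining, new_on_hand)
        new_on_hand -= use_new
        net = remaining - use_new                   # net requirement for this period
        planned_orders.append(net)                  # lot-for-lot: order exactly the net
    return planned_orders

# Example: demand of 10 per period, a stream of returns, some initial new stock.
print(mrp_with_reuse([10, 10, 10, 10], [4, 2, 6, 0], new_on_hand=5))
# -> [1, 8, 4, 10]
```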

Relevance:

10.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent and interface agent. The problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design and implementation of experimental systems in high-performance Internet computing and in scalable Web searching. In the computation agent study, high-performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution to cope with the growth of the Web and of the information on it. Our research reveals that, with the deployment of distributed software agents in Internet computing, we have a more cost-effective way to make better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
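The brute-force ciphertext decryption prototype mentioned above boils down to partitioning a key space among workers, each testing its own range. A hypothetical sketch of that idea (the key test and the worker pool are placeholders, not the JAM model's API):

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical sketch of distributing a brute-force key search: the key space is
# split into disjoint ranges and each worker (standing in for a computation
# agent) tests its own range independently. Not the JAM model's actual interface.

KEY_SPACE = 2 ** 20  # toy key space for illustration

def is_correct_key(key: int) -> bool:
    # Placeholder for "decrypt with this key and check whether the plaintext is valid".
    return key == 777_777

def search_range(bounds):
    lo, hi = bounds
    for key in range(lo, hi):
        if is_correct_key(key):
            return key
    return None

def partition(n_keys, n_workers):
    step = (n_keys + n_workers - 1) // n_workers
    return [(i, min(i + step, n_keys)) for i in range(0, n_keys, step)]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        for found in pool.map(search_range, partition(KEY_SPACE, 4)):
            if found is not None:
                print("key found:", found)
```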

Relevance:

10.00%

Publisher:

Abstract:

Distributed applications are exposed as reusable components that are dynamically discovered and integrated to create new applications. These new applications, in the form of aggregate services, are vulnerable to failure due to the autonomous and distributed nature of their integrated components. This vulnerability creates the need for adaptability in aggregate services. The need for adaptation is accentuated for complex long-running applications, as found in scientific Grid computing, where distributed computing nodes may participate to solve computation- and data-intensive problems. Such applications integrate services for coordinated problem solving in areas such as Bioinformatics. For such applications, when a constituent service fails, the application fails, even though there are other nodes that can substitute for the failed service. This concern is not addressed in the specification of high-level composition languages such as the Business Process Execution Language (BPEL). We propose an approach to transparently autonomizing existing BPEL processes in order to make them modifiable at runtime and more resilient to failures in their execution environment. Because the adaptive behavior is introduced transparently, adaptation preserves the original business logic of the aggregate service and does not tangle the code for adaptive behavior with that of the aggregate service. The major contributions of this dissertation are: first, we assessed the effectiveness of BPEL language support in developing adaptive mechanisms; as a result, we identified the strengths and limitations of BPEL and devised strategies to address those limitations. Second, we developed a technique to enhance existing BPEL processes transparently in order to support dynamic adaptation; we proposed a framework which uses transparent shaping and generative programming to make BPEL processes adaptive. Third, we developed a technique to dynamically discover and bind to substitute services; our technique was evaluated and the results showed that dynamic utilization of components improves the flexibility of adaptive BPEL processes. Fourth, we developed an extensible policy-based technique to specify how to handle exceptional behavior, and a generic component that introduces adaptive behavior for multiple BPEL processes. Fifth, we identified ways to apply our work to facilitate adaptability in composite Grid services.
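To illustrate the kind of runtime adaptation described above in ordinary code rather than BPEL, here is a hedged sketch of a transparent proxy that catches a failed service invocation and rebinds to a dynamically discovered substitute. All class and registry names are hypothetical, not the dissertation's framework:

```python
# Hypothetical sketch of transparent adaptation: callers invoke the proxy exactly
# as they would invoke the original service; on failure, the proxy looks up a
# substitute in a registry and retries, so the caller's business logic is untouched.

class ServiceUnavailable(Exception):
    pass

class AdaptiveServiceProxy:
    def __init__(self, primary, registry):
        self._service = primary       # current binding
        self._registry = registry     # yields substitutes offering the same interface

    def invoke(self, operation, *args):
        candidates = [self._service] + self._registry.discover(operation)
        for candidate in candidates:
            try:
                result = getattr(candidate, operation)(*args)
                self._service = candidate   # keep the working binding for next time
                return result
            except ServiceUnavailable:
                continue                    # try the next substitute
        raise ServiceUnavailable(f"no provider could handle {operation!r}")

# Minimal stand-ins to show the call pattern.
class FlakyAligner:
    def align(self, seq):
        raise ServiceUnavailable("node down")

class BackupAligner:
    def align(self, seq):
        return f"aligned({seq})"

class Registry:
    def discover(self, operation):
        return [BackupAligner()]

proxy = AdaptiveServiceProxy(FlakyAligner(), Registry())
print(proxy.invoke("align", "ACGT"))  # fails over to the substitute transparently
```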

Relevance:

10.00%

Publisher:

Abstract:

Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness that target a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g. Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters uses a reusable framework loosely coupled to the DSK via swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
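The central idea above, a generic model of execution with swappable domain-specific knowledge, maps naturally onto a plug-in or strategy structure. A hedged sketch under assumed names (not the dissertation's actual framework):

```python
from abc import ABC, abstractmethod

# Hedged sketch of decoupling a generic model of execution (GMoE) from
# domain-specific knowledge (DSK): the engine only sequences generic steps,
# while each domain supplies its knowledge as a swappable extension.
# All names are illustrative, not the dissertation's framework.

class DomainKnowledge(ABC):
    @abstractmethod
    def interpret_change(self, model_change: dict) -> list[str]:
        """Map a model change to domain-level actions."""

    @abstractmethod
    def to_script(self, action: str) -> str:
        """Render one domain action as a script line for the lower DSVM layer."""

class SynthesisEngine:
    """Generic model of execution: the same control flow for every domain."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def synthesize(self, model_change: dict) -> str:
        actions = self.dsk.interpret_change(model_change)
        return "\n".join(self.dsk.to_script(a) for a in actions)

class CommunicationDSK(DomainKnowledge):
    def interpret_change(self, model_change):
        return [f"open_session {model_change['participant']}"]
    def to_script(self, action):
        return f"CVM> {action}"

class MicrogridDSK(DomainKnowledge):
    def interpret_change(self, model_change):
        return [f"shed_load {model_change['feeder']}"]
    def to_script(self, action):
        return f"MGRID> {action}"

# The same engine is reused; only the DSK extension is swapped.
print(SynthesisEngine(CommunicationDSK()).synthesize({"participant": "alice"}))
print(SynthesisEngine(MicrogridDSK()).synthesize({"feeder": "F12"}))
```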

Relevance:

10.00%

Publisher:

Abstract:

There are authentication models that use passwords, keys or personal identifiers (cards, tags, etc.) to authenticate a user in the authentication/identification process. Other systems can use biometric data, such as signature, fingerprint or voice, to authenticate an individual. On the other hand, storing biometric data brings risks, such as consistency and protection problems for these data. Given this, it is necessary to protect biometric databases to ensure the integrity and reliability of the system. There are models for protecting biometric data in security/authentication and identification, for example the Fuzzy Vault and Fuzzy Commitment schemes. These models are currently the most used for protecting biometric data, but they have fragile elements in the protection process. Therefore, raising the level of security of these methods, through changes in their structure or by inserting new layers of protection, is one of the goals of this thesis. In other words, this work proposes the simultaneous use of encryption (the Papilio encryption algorithm) with template-protection models (Fuzzy Vault and Fuzzy Commitment) in biometric identification systems. The objective is to improve two aspects of biometric systems: security and accuracy. Furthermore, a reasonable level of efficiency must be maintained through the use of more elaborate classification structures, known as committees. In short, this work proposes a model for safer biometric identification systems.
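Of the two template-protection schemes mentioned, Fuzzy Commitment is the simpler to sketch: a random key is encoded with an error-correcting code, XORed with the biometric template, and only the hash of the key plus the XOR offset are stored. A minimal sketch using a toy repetition code (illustrative only; real deployments use stronger codes such as BCH and proper key derivation, and this is not the thesis' modified scheme):

```python
import hashlib
import secrets

# Minimal Fuzzy Commitment sketch: commit = (hash(key), ECC(key) XOR template).
# A probe that differs from the template by few bits still recovers the key.

REP = 5  # repetition code: each key bit repeated REP times, majority-vote decoding

def ecc_encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def ecc_decode(code_bits):
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def commit(biometric_bits, key_bits):
    codeword = ecc_encode(key_bits)
    helper = [c ^ w for c, w in zip(codeword, biometric_bits)]  # ECC(key) XOR template
    key_hash = hashlib.sha256(bytes(key_bits)).hexdigest()
    return key_hash, helper  # stored values; the raw template is not stored

def verify(probe_bits, key_hash, helper):
    noisy_codeword = [h ^ w for h, w in zip(helper, probe_bits)]
    recovered = ecc_decode(noisy_codeword)
    return hashlib.sha256(bytes(recovered)).hexdigest() == key_hash

# Usage: enrol with a reference template, verify with a slightly noisy probe.
template = [secrets.randbelow(2) for _ in range(20 * REP)]
key = [secrets.randbelow(2) for _ in range(20)]
key_hash, helper = commit(template, key)
probe = list(template)
probe[3] ^= 1  # a few bit errors, as expected between biometric captures
print(verify(probe, key_hash, helper))  # True while errors stay within ECC capacity
```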

Relevance:

10.00%

Publisher:

Abstract:

Every day, millions of disposable plates, cups and utensils are used in fast food establishments, cafeterias, restaurants and homes worldwide. These single-use disposable plates, cups and utensils, when made of polystyrene or plastic, do not biodegrade and decompose like fruit, vegetables or meat; they only break down into smaller pieces on a physical level. This lack of decomposition means that these products persist and accumulate in landfills, consuming the available space and contaminating the surrounding area. With an ever-growing global population, the disposable waste generated annually is increasing and landfills worldwide are rapidly filling. Therefore, more landfills are needed sooner, but they are expensive to create, they consume a large amount of usable space and they can harm the environment. In order to reduce the dependence on landfills, waste can be diverted through recycling programs, by reducing consumption and by purchasing reusable and/or compostable materials. These methods of waste reduction would be implemented at the municipal level, but provincial and state legislation could be changed so that municipalities would be required to do so rather than acting of their own volition. If initiated worldwide, the amount of waste produced by humans would be greatly reduced and the dependence on landfills would decrease.

Relevance:

10.00%

Publisher:

Abstract:

Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness that target a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g. Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment of resources. At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters uses a reusable framework loosely coupled to the DSK via swappable framework extensions. The approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.

Relevance:

10.00%

Publisher:

Abstract:

This presentation explains how RAGE develops reusable game technology components and provides examples of their application.

Relevance:

10.00%

Publisher:

Abstract:

The large upfront investment required for game development poses a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning-analytics data processing, which allows instructors to track and inspect learners' progress without having to deal with the required statistics computations. Third, a set of language-processing components supports the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage (e.g. for player data or game world data) across multiple software components. The presented components are examples from the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
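The hybrid component architecture mentioned above aims at portability across engines and platforms; a common way to achieve this is to keep the component engine-agnostic and route all engine-specific services (logging, storage, etc.) through a thin bridge supplied by the host. A hypothetical sketch of that pattern, not the actual RAGE interfaces:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the bridge pattern often used for portable game
# components: the component contains only engine-agnostic logic and delegates
# platform services to a bridge implemented by the host game engine.
# These interfaces are illustrative, not the RAGE project's real API.

class EngineBridge(ABC):
    @abstractmethod
    def log(self, message: str) -> None: ...
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class PerformanceStatsComponent:
    """Engine-agnostic component: the same code runs under any bridge."""
    def __init__(self, bridge: EngineBridge):
        self.bridge = bridge
        self.scores: list[float] = []

    def record(self, score: float) -> None:
        self.scores.append(score)
        self.bridge.save("last_score", str(score))

    def report(self) -> None:
        mean = sum(self.scores) / len(self.scores)
        self.bridge.log(f"mean score so far: {mean:.2f}")

class ConsoleBridge(EngineBridge):
    """Stand-in for a Unity/Unreal/web host bridge."""
    def log(self, message): print("[engine]", message)
    def save(self, key, value): print(f"[storage] {key}={value}")

component = PerformanceStatsComponent(ConsoleBridge())
component.record(0.8)
component.record(0.6)
component.report()  # the host engine decides how logging/storage actually happen
```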

Relevance:

10.00%

Publisher:

Abstract:

Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question-space and answer-space similarities in a way that allows the space on which the match is higher to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) could get localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
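One plausible reading of the composition idea described above, sketched in code: similarity is computed separately in the question space and the answer space, and the stronger of the two is allowed to dominate the cluster assignment (here simply via max). This illustrates the stated intuition and is not the authors' exact MixKMeans objective:

```python
import numpy as np

# Hedged sketch of a composed QA similarity for cluster assignment: the space
# (question or answer) with the stronger match dominates. Illustrative only.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def composed_similarity(q_vec, a_vec, centroid_q, centroid_a):
    sim_q = cosine(q_vec, centroid_q)   # match in question space
    sim_a = cosine(a_vec, centroid_a)   # match in answer space
    return max(sim_q, sim_a)            # stronger space dominates

def assign(qa_pairs, centroids):
    """qa_pairs: list of (q_vec, a_vec); centroids: list of (centroid_q, centroid_a)."""
    labels = []
    for q_vec, a_vec in qa_pairs:
        sims = [composed_similarity(q_vec, a_vec, cq, ca) for cq, ca in centroids]
        labels.append(int(np.argmax(sims)))
    return labels
```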

Relevance:

10.00%

Publisher:

Abstract:

MAIDL, André Murbach; CARVILHE, Claudio; MUSICANTE, Martin A. Maude Object-Oriented Action Tool. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2008.