144 results for reusability


Relevance:

10.00%

Publisher:

Abstract:

Aspergillus phoenicis biofilms on polyethylene as an inert support were used to produce fructooligosaccharides (FOS) in media containing 25% (m/V) sucrose as a carbon source. The maximum production of total FOS (122 mg/mL), comprising 68% 1-kestose and 32% nystose, was obtained in Khanna medium maintained at 30 °C for 48 h under orbital agitation (100 rpm). At a high sucrose concentration (30%, m/V), the recovery of FOS was higher than that observed at a low concentration (5%, m/V). High levels of FOS (242 mg/mL) were also recovered when using the biofilm in sodium acetate buffer with a high sucrose concentration (50%, m/V) for 10 h. When the dried biofilm was reused in fresh culture medium, approx. 13.7% of total FOS was recovered after 72 h of cultivation at 30 °C, and 10% corresponded to 1-kestose. The biofilm morphology, analyzed by scanning electron microscopy, revealed a non-compact mycelium structure, with unfilled spaces and channels among the hyphae. The results of this study show that A. phoenicis biofilms may find application in FOS production as a single-step fermentation process that is cost-effective in terms of reusability, downstream processing and efficiency.

Relevance:

10.00%

Publisher:

Abstract:

Abstract Background Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for the information processing systems of research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through easy end-user interfaces.

Relevance:

10.00%

Publisher:

Abstract:

Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom this thesis's author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to the topic of user interaction with theorem provers and to which this thesis's author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae.
Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them on paper is a challenging task, one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behaviour during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is also possible. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners.
Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof either interactively or automatically. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.

Relevance:

10.00%

Publisher:

Abstract:

Recently, an ever increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies according to different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as support for machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for the implementation of top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured", way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to illuminate the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years, considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems has been observed. As concerns logic control design, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety of technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are by their own nature more vulnerable. The diagnosis and fault isolation problem for a generic dynamical system consists of designing an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault-tolerant control derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic for industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigm applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnosis. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
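The discrete-event view invoked above can be illustrated with a toy example: a plant's event traces are replayed against an automaton that encodes the legal behaviour, and any forbidden event is flagged as a fault. This is a minimal sketch, not the thesis's actual models; the valve scenario and all names are invented for illustration:

```python
# Toy discrete-event model: a valve that must not be commanded 'open'
# twice without an intervening 'close'. States track the valve position,
# and the transition relation defines the legal event sequences.
TRANSITIONS = {
    ("closed", "open"):  "opened",
    ("opened", "close"): "closed",
}

def check_trace(events):
    """Replay an event trace; report the first event the model forbids."""
    state = "closed"
    for i, event in enumerate(events):
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            return f"fault at event {i}: '{event}' illegal in state '{state}'"
        state = nxt
    return "trace accepted"

print(check_trace(["open", "close", "open"]))  # trace accepted
print(check_trace(["open", "open"]))  # fault at event 1: 'open' illegal in state 'opened'
```

The same replay mechanism, run online against observed plant events, is the essence of discrete-event fault detection: a deviation from the automaton's language signals a fault.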

Relevance:

10.00%

Publisher:

Abstract:

We present a complete, exact and efficient algorithm for computing the adjacency graph of an arrangement of quadrics (algebraic surfaces of degree 2). This is an important step towards computing the full 3D arrangement. We build on an existing implementation for computing the exact parametrization of the intersection curve of two quadrics. This makes it possible to determine the exact parameter values of the intersection points, sort them along the curves, and compute the adjacency graph. We call our implementation complete because it also handles all degenerate cases, such as singular or tangential intersection points. It is exact because it always computes the mathematically correct result. Finally, we call our implementation efficient because it compares well with the only previously implemented approach. Our approach was implemented within the EXACUS project. The central goal of EXACUS is to develop a prototype of a reliable and powerful CAD geometry kernel. Although we describe the design of our library as prototypical, we place the greatest value on completeness, exactness, efficiency, documentation and reusability. Beyond its actual contribution to EXACUS, the approach presented here, through its particular requirements, also had a substantial influence on fundamental parts of EXACUS. In particular, this work contributed to the generic support of number types and the use of modular methods within EXACUS. As part of the ongoing integration of EXACUS into CGAL, these parts have already been successfully developed into mature CGAL packages.

Relevance:

10.00%

Publisher:

Abstract:

In the marine boundary layer, reactive iodine species such as I2, as well as aliphatic amines, influence a variety of atmospheric processes; they play a decisive role especially in new particle formation. However, quantifying these compounds at trace levels still poses a major analytical challenge.
For this reason, in this work the GTRAP-AMS (Gaseous compound trapping in artificially generated particles – aerosol mass spectrometry) was developed to determine gaseous I2 and aliphatic amines. Here, a time-of-flight aerosol mass spectrometer (ToF-AMS), originally developed for the on-line characterization of aerosols, is coupled with a GTRAP unit. In the case of I2, a-cyclodextrin/NH4Br particles are generated with a pneumatic nebulizer; these form an inclusion compound with gaseous I2 within the GTRAP unit and thereby take it up selectively into the particle phase. For the on-line determination of gaseous aliphatic amines, by contrast, phosphoric acid was used as the particulate reaction partner. After optimization of the GTRAP-AMS system, a detection limit in the sub-ppb range was obtained for both I2 and the aliphatic amines, at time resolutions between 1 and 30 min. The GTRAP-AMS system was first used to characterize permanent denuders, testing their I2 uptake capacity and reusability in comparison with conventional single-use a-cyclodextrin denuders.
Subsequently, the GTRAP-AMS was used to determine time-resolved I2 emission rates of selected macroalgae under the influence of ozone. Knowledge of the emission rates of iodine-containing compounds from the most important macroalgae occurring worldwide is of particular importance for modelling iodine chemistry in the marine boundary layer.
The results show that different macroalgae yield different time-resolved I2 emission profiles as well as different total emission rates. Compared with organoiodine compounds, however, the total emission rate of I2 is one to two orders of magnitude larger. This, together with the considerably shorter atmospheric lifetime of I2 compared with organoiodine compounds, makes I2 the dominant iodine-containing compound for the formation of reactive iodine atoms in the marine boundary layer.
Since only a small fraction of the IO concentration over the tropical Atlantic Ocean can so far be explained by the oxidation of organoiodine compounds, further sources of I2 were explored. Chamber experiments with microalgae were therefore carried out to investigate their influence on the release of I2 into the atmosphere. It could be shown that the presence of microalgae (e.g. Coscinodiscus wailesii) in seawater can lead to an increased release of I2 from the seawater into the atmosphere.
Furthermore, experiments on abiotic formation pathways of I2 were carried out. The results of the atmosphere simulation experiments showed that particulate iodine oxides can be reduced to I2 by organic compounds; the I2 can subsequently transfer from the particle phase into the gas phase, where it is again available for gas-phase processes.

Relevance:

10.00%

Publisher:

Abstract:

eLearning supports education in certain disciplines. Here we report on novel eLearning concepts, techniques, and tools to support education in Software Engineering, a subdiscipline of computer science. We call this "Software Engineering eLearning". On the other hand, software support is a substantial prerequisite for eLearning in any discipline, so Software Engineering techniques have to be applied to develop and maintain those software systems. We call this "eLearning Software Engineering". Both aspects have been investigated in a large joint BMBF-funded research project, termed MuSofT (Multimedia in Software Engineering). The main results are summarized in this paper.

Relevance:

10.00%

Publisher:

Abstract:

A web service is a collection of industry standards that enables reusability of services and interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services Architecture that encapsulates the UMLSKS-API and makes it available in distributed and heterogeneous environments. This is the first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the Web.

Relevance:

10.00%

Publisher:

Abstract:

Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language's syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes the composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because the water can change with any change to the grammar. It is time-consuming, because the water is defined manually by a programmer rather than automatically. Finally, an island surrounded by water cannot be reused, because the water has to be defined anew for every grammar. In this paper we propose a new island parsing technique: bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
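The island-and-water idea described above, parsing the interesting fragments precisely while skipping everything else automatically, can be illustrated with a minimal hand-rolled combinator sketch. This is plain Python, not the authors' framework; the `island` and `sea` names and the class-extraction example are invented for illustration:

```python
import re

def island(pattern):
    """Precise parser: match a regex at the current position or fail."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group(), m.end()) if m else None
    return parse

def sea(island_parser):
    """Skip water automatically: advance until the island matches."""
    def parse(text, pos):
        while pos <= len(text):
            result = island_parser(text, pos)
            if result:
                return result
            pos += 1  # water: advance one character and retry
        return None
    return parse

# Data extraction: pull only class names out of otherwise-ignored source.
find_class = sea(island(r"class\s+(\w+)"))

src = "import os\n# helper\nclass Foo:\n    pass\nclass Bar:\n    pass\n"
pos, names = 0, []
while True:
    result = find_class(src, pos)
    if not result:
        break
    matched, pos = result
    names.append(matched.split()[1])
print(names)  # ['Foo', 'Bar']
```

The naive part the paper criticizes is visible here: this `sea` skips until the *given* island matches, whereas a bounded sea would also stop at the boundary of any enclosing construct so that seas compose with surrounding grammar rules.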

Relevance:

10.00%

Publisher:

Abstract:

Specification consortia and standardization bodies concentrate on e-Learning objects to ensure reusability of content. Learning objects may be collected in a library and used for deriving course offerings that are customized to the needs of different learning communities. However, customization of courses is possible only if the logical dependencies between the learning objects are known. Metadata for describing object relationships have been proposed in several e-Learning specifications. This paper discusses the customization potential of e-Learning objects but also the pitfalls that exist if content is customized inappropriately.

Relevance:

10.00%

Publisher:

Abstract:

Course materials for e-learning are a special type of information system (IS). Thus, in the development of educational material one may learn from principles, methods, and tools that originated in the Software Engineering (SE) discipline and that are relevant in similar ways in "Instructional Engineering". An important SE principle is modularization, which supports properties like reusability and adaptability of code. To foster the adaptability of courseware we present a concept in which learning material is organized as a library of modular course objects. A lecturer may customize the courseware according to his specific course requirements, and must consider the logical dependencies of, and relationship integrity between, the selected course objects. We discuss integrity issues that have to be regarded in the composition of consistent course materials.
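The relationship-integrity constraint described above, that a courseware selection must include every course object its members logically depend on, amounts to a simple closure check over a prerequisite relation. A minimal sketch (the module names and the `missing_prerequisites` helper are invented for illustration):

```python
def missing_prerequisites(selected, prereqs):
    """Return (module, prerequisite) pairs violated by a courseware selection."""
    missing = set()
    for module in selected:
        for prereq in prereqs.get(module, []):
            if prereq not in selected:
                missing.add((module, prereq))
    return missing

# Hypothetical course-object library: advanced modules build on basics.
prereqs = {"uml-advanced": ["uml-basics"], "testing": ["uml-basics"]}

ok = missing_prerequisites({"uml-basics", "testing"}, prereqs)
bad = missing_prerequisites({"uml-advanced"}, prereqs)
print(ok)   # set()
print(bad)  # {('uml-advanced', 'uml-basics')}
```

A composition tool would run such a check before publishing a customized course and either reject the selection or pull in the missing objects automatically.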

Relevance:

10.00%

Publisher:

Abstract:

This work arises in response to the need for the journal Olivar to have a cumulative index that allows the articles of its 16 issues to be grouped by subject, author and date. First, the file containing the descriptors assigned to each article by the technical processes area of the Facultad de Humanidades y Ciencias de la Educación (FaHCE) was analyzed to evaluate its possible reuse for this purpose. However, the lack of consistency and normalization in the descriptions led to their being discarded, and the 283 records that make up the collection of Olivar articles for the period 2001-2012 were re-indexed with a vocabulary of controlled terms that also gave the descriptions greater specificity. The terms obtained during this process were ordered alphabetically and can be reused both by the authors who assign keywords through the OJS platform recently acquired by the library and to build the aforementioned cumulative index. Finally, some of the assigned descriptors were tested in the search engines of the library of the Facultad de Humanidades y Ciencias de la Educación (BIBHUMA) and of the SciELO portal, which corroborated the importance of technical processes in giving visibility to publications, facilitating access to them for a larger community of readers.


Relevance:

10.00%

Publisher:

Abstract:

Despite the acknowledged need to provide a personalized and adaptive learning process for all, current learning management systems do not properly cover personalization and accessibility issues, and they are still struggling to support the reusability requirements arising from the pervasive usage of standards. There is a lack of frameworks providing a layered infrastructure covering the interoperability required to manage the whole range of standards, applications and services needed to meet the accessibility and adaptation needs of lifelong learning services.

Relevance:

10.00%

Publisher:

Abstract:

When developing new IT products, reusability of existing components is a key aspect that can considerably improve the success rate. This has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also support federation, for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and using OSGi itself.
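The transitive-dependency problem the authors address can be sketched independently of OSGi: given each component's declared requirements, the full install set is the transitive closure over the dependency graph. A minimal depth-first sketch (the bundle names and graph are invented, and real resolvers must additionally handle versions and optional requirements):

```python
def resolve(bundle, deps, seen=None):
    """Return the transitive closure of a bundle's dependencies (DFS)."""
    if seen is None:
        seen = set()
    if bundle in seen:
        return seen  # already resolved (also guards against cycles)
    seen.add(bundle)
    for requirement in deps.get(bundle, []):
        resolve(requirement, deps, seen)
    return seen

# Hypothetical bundle graph with a transitive chain app -> http -> io -> log.
deps = {
    "app":  ["http", "json"],
    "http": ["io"],
    "io":   ["log"],
    "json": [],
    "log":  [],
}
print(sorted(resolve("app", deps)))  # ['app', 'http', 'io', 'json', 'log']
```

Installing "app" thus pulls in "log" even though "app" never mentions it, which is exactly the transitivity that makes manual integration error-prone.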