992 results for functionality


Relevance:

10.00%

Publisher:

Abstract:

Fujaba is an open-source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002, Fujaba was redesigned and became the Fujaba Tool Suite, with a plug-in architecture that allows developers to add functionality easily while retaining full control over their contributions. Multiple application domains: Fujaba has followed the model-driven development philosophy from its beginning in 1997. In its early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on rules that manipulate object structures. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt, supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International community: to our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML- and MDA-related research activities. In addition, many Fujaba users send requests for more functionality and extensions. The 8th International Fujaba Days therefore aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.

Relevance:

10.00%

Publisher:

Abstract:

Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturized spectrometers, for example, are particularly needed for modern sensor systems ("smart personal environments") used above all in energy technology, measurement technology, safety and security, IT, and medical technology. Among all miniaturized spectrometers, one of the most attractive miniaturization approaches is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector corresponds to a single filter and detects the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used in which each filter selects a different spectral filter line. The spectral position of each wavelength band is defined by the individual cavity height of the filter. The arrays were developed with filter sizes limited only by the array dimensions of the individual detectors. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with different heights, which are not cost-efficient for industrial production. To reduce costs while retaining the outstanding advantages of the FP filter structure, a new method for fabricating miniaturized FP filters by means of nanoimprint technology is developed and presented here. In this case, the multiple cavity fabrication steps are replaced by a single step that exploits the high vertical resolution of 3D nanoimprint technology. Since nanoimprint technology is used, the FP-filter-based miniaturized spectrometer is called a nanospectrometer. A static nanospectrometer consists of a static FP filter array on a detector array (see Fig. 1). Each FP filter in the array consists of a lower distributed Bragg reflector (DBR), a resonance cavity, and an upper DBR. The upper and lower DBRs are identical and consist of periodically alternating thin dielectric layers of high- and low-refractive-index materials. The optical thickness of each dielectric thin-film layer in the DBR corresponds to a quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array; this area can consist of individual detector elements or groups of them. The lateral geometries of the cavity are therefore built to match the corresponding detector. The lateral and vertical dimensions of the cavity are defined precisely by 3D nanoimprint technology. The cavities differ by only a few nanometres in the vertical direction. The precision of the cavity in the vertical direction is an important factor that determines the accuracy of the spectral position and the transmittance of the filter's transmission line.
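
The relation between cavity height and transmitted wavelength can be illustrated with the textbook Fabry-Pérot resonance condition. The following is a minimal sketch assuming an idealized lossless cavity at normal incidence (ignoring DBR penetration depth and phase shifts); the refractive index, cavity heights, and 5 nm step are hypothetical example values, not parameters from this work.

    # Idealized Fabry-Perot resonance: m * lambda = 2 * n * L (normal incidence).
    # Shows how cavity heights differing by a few nanometres shift the
    # transmitted line of each filter in the array. All values are hypothetical.

    def fp_peak_wavelengths(cavity_height_nm, refractive_index=1.5, orders=(1, 2, 3)):
        """Return the resonance wavelengths (nm) of an ideal FP cavity."""
        return {m: 2.0 * refractive_index * cavity_height_nm / m for m in orders}

    # A small filter array: cavity heights stepped by 5 nm.
    for height in (200.0, 205.0, 210.0, 215.0):
        peaks = fp_peak_wavelengths(height)
        print(f"cavity {height:5.1f} nm -> first-order peak {peaks[1]:6.1f} nm")

With these assumed values, a 5 nm change in cavity height shifts the first-order peak by 15 nm, which is why nanometre-scale vertical precision of the imprinted cavities governs the spectral accuracy of the filter lines.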

Relevance:

10.00%

Publisher:

Abstract:

Since the discovery of methyltransferase 2 as a highly conserved and widespread enzyme, numerous attempts at its complete characterization have been made. The biological function of the protein has remained a permanently disputed point. In this work, dnmA is shown to be a sensitive oscillator with respect to the cell cycle and other influences. Overall, the main focus lies on the in vivo characterization of the gene, the endogenous subcellular distribution, and the physiological roles of the protein in vivo in D. discoideum. To obtain indications of the in vivo signalling pathways in which DnmA is involved, a detailed analysis of the gene was first required. Using highly sensitive molecular-biological methods such as chromatin IP and qRT-PCR, a complete expression profile over the cell and life cycle of D. discoideum was established. Of particular interest are the results from an original wild-type strain (NC4), whose dnmA expression profile deviates quantitatively from that of other wild-type strains. Cell-cycle-dependent effects of DnmA were also detected at the protein level. Microscopic examination of different DnmA-GFP strains revealed changes in localization during mitosis. Furthermore, a DnmA-GFP construct under the control of the endogenous promoter was generated, which clearly identified the protein during development as a cell-type-specific protein, namely a prespore- and spore-specific protein. For the in vivo analysis of the catalytic activity of the enzyme, the findings from the characterization of the gene and protein could then be taken into account in order to test in vivo substrate candidates. Of all substrate candidates tested so far, only tRNA^Asp could be confirmed as an in vivo substrate. A notable finding was a quantitative difference in the methylation level between different wild-type strains. Furthermore, the methylation of, and binding to, a DNA substrate candidate was examined. It could be shown that DnmA binds in a highly sequence-specific manner to sections of the retrotransposon DIRS-1 in vivo. For the substrate candidate snRNA U2, a stable in vitro complex formation between U2 and hDnmt2 was also shown. Overall, on the basis of the expression data obtained, the activity of the enzyme and its substrates were re-characterized in vivo and in vitro.

Relevance:

10.00%

Publisher:

Abstract:

The possibility to develop automatically running models which can capture some of the most important factors driving the urban climate would be very useful for many planning aspects. With the help of these modelled climate data, the creation of the typically used "Urban Climate Maps" (UCM) can be accelerated and facilitated. This work describes the development of a special ArcGIS software extension, along with two supporting databases, to achieve this functionality. At present, the lack of comparability between different UCMs and imprecise planning advice, together with the significant technical problems of manually creating conventional maps, are central issues. Inflexibility and static behaviour also reduce the maps' practicality. Experience shows that planning processes run more productively when new planning parameters can be entered directly via the existing work surface, so that the impact of the data change is mapped immediately where possible. In addition to the direct climate figures, information from other planning areas (such as regional characteristics and developments) has to be taken into account when creating the UCM. Taking all these requirements into consideration, an automated calculation process for urban climate impact parameters will serve to make the creation of homogeneous UCMs more efficient.
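
As a rough illustration of what an automated calculation of an urban climate impact parameter could look like, the sketch below combines raster layers into a simple heat-load index and classifies it into zones. It is not the ArcGIS extension described here: the layer names, weights, and thresholds are hypothetical, and a real workflow would read the layers from GIS data sources.

    import numpy as np

    # Hypothetical raster layers normalized to [0, 1]; stand-ins for real GIS data.
    rng = np.random.default_rng(0)
    building_density    = rng.random((4, 4))
    vegetation_fraction = rng.random((4, 4))
    sky_view_factor     = rng.random((4, 4))

    # Assumed weights for a simple heat-load index: dense construction adds load,
    # vegetation and open sky reduce it.
    heat_load = (0.5 * building_density
                 + 0.3 * (1 - vegetation_fraction)
                 + 0.2 * (1 - sky_view_factor))

    # Classify into three planning zones (thresholds are illustrative only).
    zones = np.digitize(heat_load, bins=[0.4, 0.7])  # 0 = low, 1 = moderate, 2 = high
    print(zones)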

Relevance:

10.00%

Publisher:

Abstract:

Web services from different partners can be combined to applications that realize a more complex business goal. Such applications built as Web service compositions define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA unforeseen situations might appear like services not being available or not responding in the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, proper solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules allows to distinguish between the QoS values as they are promised by the service providers, QoE values that were assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored for the specified user groups. In a service market where a multitude of services with the same functionality and different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that are applied to choose high quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are applicable also for non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property predictions. We consider the location information of users and services as context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions to a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
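
To make the selection problem concrete, here is a minimal sketch of one possible greedy heuristic for QoS-aware service selection. It is not the algorithm developed in this thesis; the tasks, candidate services, QoS numbers, utility weights, and response-time budget are all hypothetical.

    # Greedy QoS-aware selection: for each task, pick the candidate with the best
    # utility while keeping the summed response time under a global budget.
    # The utility trades off availability against response time (assumed weights).

    tasks = {
        "payment":  [{"name": "payA",  "rt": 120, "avail": 0.99},
                     {"name": "payB",  "rt": 60,  "avail": 0.95}],
        "shipping": [{"name": "shipA", "rt": 200, "avail": 0.98},
                     {"name": "shipB", "rt": 90,  "avail": 0.90}],
    }
    RT_BUDGET_MS = 250

    def utility(svc):
        return 0.7 * svc["avail"] - 0.3 * (svc["rt"] / 1000.0)

    selection, total_rt = {}, 0
    for task, candidates in tasks.items():
        # Prefer high utility, but skip candidates that would break the budget.
        for svc in sorted(candidates, key=utility, reverse=True):
            if total_rt + svc["rt"] <= RT_BUDGET_MS:
                selection[task] = svc["name"]
                total_rt += svc["rt"]
                break

    print(selection, total_rt)

Such a greedy pass is fast but can miss globally better combinations, which is one reason more elaborate heuristics (as developed in the thesis) and genetic algorithms are studied for this problem.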

Relevance:

10.00%

Publisher:

Abstract:

A better understanding of the effects of digestate application on the plant community, the soil microbial community, and nutrient and carbon dynamics is crucial for sustainable grassland management and for preventing the loss of species and functional diversity. The specific research objectives of the thesis were: (i) to investigate the effects of digestate application on grass species and the soil microbial community, focusing especially on nitrogen dynamics in the plant-soil system, and to examine the suitability of the digestate from the "integrated generation of solid fuel and biogas from biomass" (IFBB) system as a fertilizer (Chapter 3); (ii) to investigate the relationship between the plant community and the functionality of the soil microbial community of extensively managed meadows, taking into account temporal variations during the vegetation period and abiotic soil conditions (Chapter 4); (iii) to investigate the suitability of implementing the IFBB concept as a grassland conservation measure for meadows, and the possible associated effects of IFBB digestate application on the plant and soil microbial community as well as on soil microbial substrate utilization and catabolic evenness (Chapter 5). Taken together, the results indicate that the digestate generated during the IFBB process stands out from digestates of conventional whole-crop digestion on the basis of its higher nitrogen use efficiency, and that it is useful for increasing harvestable biomass and the nitrogen content of the biomass, especially of L. perenne, a common species of intensively used grasslands. Further, a medium application rate of IFBB digestate (50% of the nitrogen removed with the harvested biomass, corresponding to 30-50 kg N ha-1 a-1) may be an option for the conservation management of different meadows without changing the functional above- and belowground characteristics of the grasslands, thereby offering an ecologically worthwhile alternative to mulching. Overall, the soil microbial biomass and catabolic performance under planted soil were only marginally affected by digestate application and were driven rather by soil properties and partly by grassland species and legume occurrence. The investigated extensively managed meadows revealed a high soil catabolic evenness, which was resilient to the medium IFBB application rate after a three-year period of application.

Relevance:

10.00%

Publisher:

Abstract:

Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, using approaches from machine learning, physics, and graph theory, methods for the identification and analysis of such subsystems were developed. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and the approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a non-intuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
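
As a rough illustration of the matrix-factorization idea (not the specific methodology developed in this work), the sketch below factors a synthetic non-negative "expression" matrix with scikit-learn's NMF implementation; the matrix size, rank, and data are hypothetical.

    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic non-negative expression matrix: 100 genes x 30 conditions.
    rng = np.random.default_rng(0)
    X = rng.random((100, 30))

    # Factor X ~ W @ H with a small number of putative subsystems (rank k = 5).
    model = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
    W = model.fit_transform(X)   # genes x subsystems: membership weights
    H = model.components_        # subsystems x conditions: subsystem activity

    # Genes loading most strongly on the first putative subsystem.
    top_genes = np.argsort(W[:, 0])[::-1][:10]
    print(top_genes)

Because both factors are constrained to be non-negative, genes with large weights in the same column of W can be read as co-members of one additive "subsystem", which is what makes NMF attractive for this kind of decomposition.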

Relevance:

10.00%

Publisher:

Abstract:

As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in such a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
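
As a generic illustration of the behavior-based style such a framework targets (this is not Cog's actual architecture or API), the sketch below shows simple behavior modules reading shared sensor values, with the highest-priority active behavior driving the actuator; all module names, sensor keys, and commands are hypothetical.

    # Minimal behavior-based control loop: each active behavior proposes a motor
    # command, and a crude priority-based arbitration picks the winner.

    class Behavior:
        def __init__(self, name, priority, is_active, command):
            self.name, self.priority = name, priority
            self.is_active, self.command = is_active, command

    def arbitrate(behaviors, sensors):
        active = [b for b in behaviors if b.is_active(sensors)]
        if not active:
            return "idle"
        winner = max(active, key=lambda b: b.priority)
        return winner.command(sensors)

    behaviors = [
        Behavior("avoid", 2, lambda s: s["obstacle_cm"] < 30, lambda s: "turn_away"),
        Behavior("track", 1, lambda s: s["face_seen"],        lambda s: "orient_to_face"),
    ]

    print(arbitrate(behaviors, {"obstacle_cm": 20, "face_seen": True}))  # -> turn_away
    print(arbitrate(behaviors, {"obstacle_cm": 80, "face_seen": True}))  # -> orient_to_face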

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.

Relevance:

10.00%

Publisher:

Abstract:

Compositional data naturally arise from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper demonstrates work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes very powerful statistical facilities available at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods will be discussed.
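
For reference, the centred log-ratio (clr) transform that underlies much log-ratio analysis of a composition x = (x_1, ..., x_D) is, in its standard textbook form (not specific to this library),

    \operatorname{clr}(x) \;=\; \Bigl(\ln\tfrac{x_1}{g(x)},\;\ldots,\;\ln\tfrac{x_D}{g(x)}\Bigr),
    \qquad g(x) \;=\; \Bigl(\prod_{i=1}^{D} x_i\Bigr)^{1/D},

where g(x) is the geometric mean of the parts. Standard multivariate methods such as principal components analysis can then be applied to the clr-transformed data rather than to the raw proportions.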

Relevance:

10.00%

Publisher:

Abstract:

"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and on the object-oriented programming paradigm of R. In this way, the called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, pie charts) and extensive graphical tools for principal components. Afterwards, proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
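
For reference, the two basic operations of the Aitchison geometry on the simplex, perturbation and powering, are (standard definitions, independent of this package)

    x \oplus y \;=\; \mathcal{C}\bigl(x_1 y_1,\;\ldots,\;x_D y_D\bigr),
    \qquad \alpha \odot x \;=\; \mathcal{C}\bigl(x_1^{\alpha},\;\ldots,\;x_D^{\alpha}\bigr),

where C denotes the closure operation (rescaling the parts to a constant sum). These play the roles of vector addition and scalar multiplication in this geometry, which is what makes a "working in coordinates" treatment of compositions possible.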

Relevance:

10.00%

Publisher:

Abstract:

The R package "compositions" is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, now containing facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeros, incomplete observations, and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a "regular" subcomposition (where all parts are actually observed and the datum behaves typically) and a "problematic" subcomposition (with those unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of the irregularities in the datum subcomposition(s). To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique, and also the projection approach. To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterization. Furthermore, the package provides special plots that help to understand the nature of outliers in the dataset. Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
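
As an example of the "well-known imputation technique" for values below the detection limit, one common formulation is multiplicative replacement: for a composition x = (x_1, ..., x_D) with constant sum kappa and detection limits delta_i, the replaced values are

    r_i \;=\;
    \begin{cases}
      \delta_i, & x_i = 0 \ \text{(below detection)},\\[2pt]
      x_i\Bigl(1 - \dfrac{\sum_{j:\,x_j = 0}\delta_j}{\kappa}\Bigr), & x_i > 0,
    \end{cases}

so that the imputed parts receive their detection-limit values while the observed parts are rescaled to preserve the constant sum. This is the standard formulation from the compositional-data literature and not necessarily the exact variant implemented in the package.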

Relevance:

10.00%

Publisher:

Abstract:

Initially integrated into the gvSIG Mobile pilot, the libLocation library aims to provide the gvSIG Desktop and gvSIG Mobile projects with transparent access to location sources. The library is based on the JSR-179 (Location API for J2ME) and JSR-293 (Location API for J2ME v2.0) specifications, providing a uniform interface to different location sources through high-level functions. It also extends the functionality of these APIs to allow the management of data specific to the type of location source and the adjustment of low-level parameters, as well as incorporating additional positioning methods, such as the application of corrections via the NTRIP protocol. The libLocation library is currently under development and will be published and released together with the final version of gvSIG Mobile. Alongside libLocation, extensions are being developed that provide access to this library from gvSIG Desktop and gvSIG Mobile.
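
To illustrate the idea of a uniform interface over different location sources, here is a hypothetical Python sketch (not libLocation's Java/J2ME API): every source exposes the same small set of high-level calls regardless of how the fix is obtained, so client code is independent of the underlying provider.

    # Hypothetical uniform interface: every location source returns fixes the same
    # way, so client code does not care whether data comes from GPS, NTRIP, a file, etc.

    from dataclasses import dataclass

    @dataclass
    class Fix:
        lat: float
        lon: float
        accuracy_m: float

    class LocationSource:
        def get_fix(self) -> Fix:
            raise NotImplementedError

    class SimulatedSource(LocationSource):
        """Plays back a canned position; stands in for a GPS or NTRIP-corrected source."""
        def get_fix(self) -> Fix:
            return Fix(lat=39.47, lon=-0.38, accuracy_m=5.0)

    def show_position(source: LocationSource) -> None:
        fix = source.get_fix()
        print(f"{fix.lat:.4f}, {fix.lon:.4f} (+/- {fix.accuracy_m} m)")

    show_position(SimulatedSource())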

Relevance:

10.00%

Publisher:

Abstract:

Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems, which aims to increase their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture to include navigational adaptability and user-friendly processes which would guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The current PLAN-G prototype is successfully used with some informatics courses (this version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (interface agents), to interact directly with the student when necessary; and Information Agents (intermediaries), to filter and discover information to learn and to adapt the navigation space to a specific student.
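
As a generic illustration of the two agent roles (a hypothetical sketch, not the PLAN-G implementation), an information agent filters course material against a student model and a personal agent presents the result to the student; the course pages and student-model fields are invented for the example.

    # Hypothetical two-agent split: the information agent adapts the navigation
    # space to the student model; the personal agent is the student-facing side.

    student_model = {"level": "beginner", "seen": {"intro"}}

    course_pages = [
        {"id": "intro",     "level": "beginner"},
        {"id": "recursion", "level": "beginner"},
        {"id": "monads",    "level": "advanced"},
    ]

    def information_agent(pages, model):
        """Keep unseen pages matching the student's level."""
        return [p for p in pages
                if p["level"] == model["level"] and p["id"] not in model["seen"]]

    def personal_agent(suggestions):
        """Present the adapted navigation options to the student."""
        return "Suggested next steps: " + ", ".join(p["id"] for p in suggestions)

    print(personal_agent(information_agent(course_pages, student_model)))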

Relevance:

10.00%

Publisher:

Abstract:

The recent economic crises the world has experienced lead us to reflect on the responsibilities that business leaders bear for the economic and even social changes now taking place. It is therefore the moment for a deeper analysis of the contribution that management, as a science, has made to society. It has been observed that management enables the development and growth of an individual, while also being a means of growth and development for a community, a region, and a culture. This undeniable reality leads us to ask why problems such as bankruptcies, lack of resources, and internal conflicts nevertheless occur in organizations. One reflection may be that the ideals management has championed have been distorted and detached from a business morality and ethics that would allow executives to think about something more than profits. It might seem strange to say that the company and managerial action must go further. We could begin with an analysis of the obsolescence of management ideals, or even of the absence of ideals in management, and with the question of whether the management we know and learn remains valid in a world that has evolved and changed. Today's world is different from the one we knew even during our professional training: we were taught to think linearly, in terms of cause and effect, with everything following a predetermined order and plan. Today, the vision of certainty and human control over decisions gives way to chance, and however much people work and design strategies, these will be transformed by emergent events. It is therefore important to understand that, in order to move forward and leave behind the obsolescence of the ideals management has defined, executives must change their vision and understand the dynamics of interrelation of their organizations in a world in permanent chaos. We must study from dynamics other than functionality and recognize that it is time for management to adopt a new vision of managerial practice, which is why we must broaden our horizon and look through the lens of complexity. The decisions executives make, the strategies they define, and the way they relate to competitors in highly competitive markets require a change in management and a very different ethical framework: managerial and administrative practice must be conceived so as to maintain and preserve the life of the sector and of the company and, thereby, to preserve the environment around us. For this reason, it is proposed that management be exercised from the standpoint of bioethics.