978 results for functionality
Abstract:
In recent years, progress in the area of mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks offer ever increasing bit rates, mobile network operators provide more and more services, and at the same time the costs for using mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users’ everyday life. That is, service attributes such as context-awareness and personalisation are often either proprietary, limited or not available at all. In order to overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services. These service platforms are intended to support such service attributes and to provide basic service-independent functions such as billing, identity management, context management, user profile management, etc. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can utilise the platform-side functions for their own purposes. The platform-side support for such functions thus relieves service developers of complexity, development time and development costs. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments. The combination of context-awareness and personalisation features can also be described as situation-dependent personalisation of services. The support for this feature requires several processing steps. The focus of this doctoral thesis is on the processing step in which the user’s current context is matched against situation-dependent user preferences in order to find the preferences that apply to the user’s current situation. To achieve this, a user profile management system and corresponding functionality are required; these parts are also covered by this thesis. Altogether, this thesis provides the following contributions: The first part of the contribution is mainly architecture-oriented. First and foremost, we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments. In particular, the user profile management system has to deal with situation-specific user preferences and with user information for various services. In order to structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding matching situation-dependent user preferences for the personalisation of services. This functionality is provided as a sub-module of the user profile management system. In contrast to existing solutions, our selection mechanism is based on ontology reasoning. The mechanism is evaluated in terms of runtime performance and of supported functionality compared to other approaches. The results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
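A deliberately simplified sketch of the selection step described above, written in R: situation-dependent preferences are represented as flat key-value conditions matched against the current context, rather than by the ontology reasoning the thesis actually uses; all names and values are illustrative.

```r
# Minimal sketch (not the thesis implementation, which uses ontology reasoning):
# situation-dependent preferences are rules whose conditions are matched against
# the user's current context. All names are illustrative.
preferences <- list(
  list(conditions = list(location = "office", activity = "meeting"),
       settings   = list(ringtone = "silent", messaging = "defer")),
  list(conditions = list(location = "home"),
       settings   = list(ringtone = "loud", messaging = "deliver"))
)

matches <- function(conditions, context) {
  all(vapply(names(conditions),
             function(k) identical(context[[k]], conditions[[k]]),
             logical(1)))
}

select_preferences <- function(context, prefs) {
  hits <- Filter(function(p) matches(p$conditions, context), prefs)
  if (length(hits) == 0) return(NULL)
  # prefer the most specific rule, i.e. the one with the most matched conditions
  hits[[which.max(vapply(hits, function(p) length(p$conditions), integer(1)))]]$settings
}

current_context <- list(location = "office", activity = "meeting", device = "phone")
select_preferences(current_context, preferences)   # ringtone "silent", messaging "defer"
```

An ontology-based selector replaces the flat key comparison with subsumption reasoning (e.g. "meeting" is a kind of "work activity"), which is what distinguishes the thesis' mechanism from rule tables like this one.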
Abstract:
This thesis describes the use of Particle Image Velocimetry (PIV) for the analysis of self-excited flow phenomena and the evaluation procedure required for it. To investigate such mechanisms, which appear in turbo-compressors as rotating instabilities, data sets are used that were obtained from experimental investigations on an annular compressor stator cascade. Rotating instabilities are time-dependent flow phenomena that can occur in compressor cascades under high aerodynamic loads. Because of the missing phase information, this unsteady flow cannot be captured with conventional PIV systems. The Kármán vortex street and rotating instabilities both represent self-excited flow processes. This similarity is exploited to demonstrate the functionality of the method using the Kármán vortex street. Visualising the vortex transport by means of PIV requires a special procedure, since no external signal is available to define the phase angle of this self-excited flow. The methodology is based on coupling the PIV technique with hot-wire anemometry. The simultaneous, temporally highly resolved hot-wire measurement makes it possible to assign a phase angle to the instants of the PIV images. For this purpose, the hot-wire signal is analysed with an FFT procedure in order to group the PIV images according to their phase angles. To this end, the recorded images are marked on the time axis of the hot-wire measurements. A systematic analysis of the hot-wire signal in the vicinity of the PIV measurement provides data for determining the fundamental frequency and allows a phase angle to be assigned to the marked PIV position. The velocity components resulting from the PIV images of one class are then averaged. From the resulting images of each class, the two-dimensional time-dependent velocity field is obtained, in which the vortex movement of the Kármán vortex street becomes visible. In subsequent investigations, time signals from measurements in an annular compressor cascade are analysed. It turns out that additional filter functions are required. The final result shows that the transfer of the method developed on the basis of the Kármán vortex street succeeds only partially and that further research is required.
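A small R sketch of the phase-assignment idea: the fundamental frequency of a (here simulated) hot-wire signal is estimated via FFT, each PIV time stamp is converted into a phase angle, and the images are grouped into phase classes for subsequent averaging. Sampling rate, shedding frequency and PIV time stamps are invented; this is not the original evaluation code.

```r
# Illustrative sketch with assumed signal and parameters.
fs <- 10000                                  # hot-wire sampling rate in Hz (assumed)
t  <- seq(0, 1, by = 1/fs)
f0_true <- 185                               # e.g. a vortex-shedding frequency (assumed)
signal  <- sin(2*pi*f0_true*t) + 0.3*rnorm(length(t))

spec  <- Mod(fft(signal))^2
freqs <- (seq_along(signal) - 1) * fs / length(signal)
half  <- 2:(length(signal) %/% 2)            # ignore DC and the mirrored half
f0    <- freqs[half][which.max(spec[half])]  # fundamental frequency estimate

piv_times   <- runif(50, 0, 1)               # time stamps of PIV images (assumed)
phase       <- (piv_times * f0) %% 1 * 360   # phase angle of each image in degrees
phase_class <- cut(phase, breaks = seq(0, 360, by = 45), include.lowest = TRUE)
table(phase_class)                           # images per class, to be averaged per class
```

In the real setup the marked PIV positions on the hot-wire time axis replace the random time stamps, and the per-class velocity fields are averaged to obtain the phase-resolved flow field.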
Abstract:
ABSTRACT: Protein kinases perform central tasks in the signal transduction of higher cells. Among them, the cAMP-dependent protein kinase (PKA) is one of the best-characterised protein kinases with respect to its structure and function. Nevertheless, little is known about direct interaction partners of the catalytic subunits (PKA-C). Potential interaction partners of PKA-C were identified in a split-ubiquitin based yeast two-hybrid (Y2H) system. Both the human main isoform Cα (hCα) and protein kinase X (PrKX) were used as bait. After confirming the functionality of the PKA-C bait proteins, verifying their expression and their interaction with the known interaction partner PKI, a Y2H screen was carried out against a mouse embryo cDNA expression library. From 2×10^6 clones, 76 colonies were isolated that expressed a prey protein interacting with PrKX. Sequencing of the contained prey vectors identified 25 different potential interaction partners. For hCα, more than 2×10^6 S. cerevisiae colonies were examined, of which 1,959 were positive (1,663 under increased stringency). By sequencing about 10% of the clones (168), sequences for 67 different potential interaction partners of hCα were identified. 15 of the prey proteins were identified in both screens. The PKA-C-specific interaction of the 77 prey proteins in total was examined in a bait dependency test against largeT, a protein unrelated to the PKA system. From the PKA-C-specific binders, the soluble prey proteins AMY-1, Bax72-192, Fabp3, Gng11, MiF, Nm23-M1, Nm23-M2, Sssca1 and VASP256-375 were selected for further in vitro validation. The interaction of FLAG-Strep-Strep-hCα (FSS-hCα) with the One-STrEP-HA proteins (SSHA proteins), purified via Strep-Tactin after recombinant expression in E. coli, was confirmed by co-immunoprecipitation for SSHA-Fabp3, -Nm23-M1, -Nm23-M2, -Sssca1 and -VASP256-375. In SPR studies, for which hCα was covalently coupled to the surface of a CM5 sensor chip, the ATP/Mg2+ dependence of the interactions as well as differential effects of the ATP-competitive inhibitors H89 and HA-1077 were examined. Free hCα added to the SSHA proteins prior to injection competed the binding to the hCα surface, in contrast to FSS-PrKX. Initial kinetic analyses yielded equilibrium dissociation constants in the µM (SSHA-Fabp3, -Sssca1), nM (SSHA-Nm23-M1, -M2) and pM (SSHA-VASP256-375) range, respectively. In functional analyses, phosphorylation of SSHA-Sssca1 and VASP256-375 by hCα and FSS-PrKX was detected by autoradiography. SSHA-VASP256-375 also showed a strong inhibition of hCα in a mobility shift assay. However, this inhibitory effect and the high affinity could be attributed to a combination of the linker sequence of the vector and the N-terminus of VASP256-375. The interactions of the partners Fabp3, Nm23-M1 and Nm23-M2 with hCα identified here may reveal new PKA functions in follow-up studies, particularly in the heart and during cell migration. Sssca1, in contrast, represents a new PKA substrate that remains to be characterised in more detail.
Abstract:
Fujaba is an Open Source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002, Fujaba was redesigned and became the Fujaba Tool Suite with a plug-in architecture that allows developers to add functionality easily while retaining full control over their contributions. Multiple application domains: Fujaba has followed the model-driven development philosophy right from its beginning in 1997. In its early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt, supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International community: To our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML- and MDA-related research activities. In addition, quite a number of Fujaba users send requests for more functionality and extensions. Therefore, the 8th International Fujaba Days aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.
Abstract:
Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturised spectrometers, for example, are needed especially for modern sensor systems and “smart personal environments”, which are used above all in energy technology, metrology, safety and security, IT and medical technology. Among all miniaturised spectrometers, one of the most attractive miniaturisation approaches is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector corresponds to a single filter and detects the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used, in which each filter selects a different spectral filter line. The spectral position of each wavelength band is defined by the individual cavity height of the filter. The arrays were designed with filter sizes limited only by the array dimensions of the individual detectors. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with different heights, which are not cost-efficient for industrial production. To reduce the costs while retaining the outstanding advantages of the FP filter structure, a new method for fabricating the miniaturised FP filters by means of nanoimprint technology is developed and presented here. In this case, the multiple cavity fabrication steps are replaced by a single step that exploits the high vertical resolution of 3D nanoimprint technology. Because nanoimprint technology is used, the FP-filter-based miniaturised spectrometer is called a nanospectrometer. A static nanospectrometer consists of a static FP filter array on a detector array (see Fig. 1). Each FP filter in the array consists of a lower distributed Bragg reflector (DBR), a resonance cavity and an upper DBR. The upper and lower DBRs are identical and consist of periodically alternating thin dielectric layers of materials with high and low refractive index. The optical thickness of each dielectric thin-film layer contained in the DBR corresponds to a quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array. This area can consist of individual detector elements or groups of them. The lateral geometries of the cavities are therefore built to correspond to the detectors. The lateral and vertical dimensions of the cavities are precisely defined by 3D nanoimprint technology. The cavities differ by only a few nanometres in the vertical direction. The precision of the cavity in the vertical direction is an important factor that influences the accuracy of the spectral position and the transmission of the filter line.
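A back-of-the-envelope R sketch of the design relations mentioned above: quarter-wave DBR layer thicknesses, and the idealised (mirror-phase-free) relation between cavity height and transmitted wavelength. The material values and cavity heights are assumed for illustration, not taken from the fabricated filters.

```r
# Illustrative material values, not the actual filter design.
lambda_design <- 550e-9            # design wavelength in m (assumed)
n_high <- 2.3                      # high-index layer, e.g. TiO2 (assumed)
n_low  <- 1.46                     # low-index layer, e.g. SiO2 (assumed)

d_high <- lambda_design / (4 * n_high)     # physical quarter-wave thicknesses
d_low  <- lambda_design / (4 * n_low)
c(d_high = d_high, d_low = d_low) * 1e9    # DBR layer thicknesses in nm

# For an ideal cavity of index n_cav and height d_cav, resonance of order m
# occurs where 2 * n_cav * d_cav = m * lambda (mirror phase shifts neglected).
n_cav <- 1.5
d_cav <- seq(170e-9, 200e-9, by = 5e-9)    # few-nanometre height steps across the array
lambda_m1 <- 2 * n_cav * d_cav / 1         # first-order transmission wavelengths
round(lambda_m1 * 1e9)                     # in nm: each cavity height selects one filter line
```

The last line makes the sensitivity explicit: a 5 nm change in cavity height already shifts the transmitted line by roughly 15 nm in this idealisation, which is why the vertical precision of the imprinted cavities is so critical.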
Abstract:
Since the discovery of methyltransferase 2 as a highly conserved and widespread enzyme, numerous attempts at its complete characterisation have been made, and the biological function of the protein has remained a permanently disputed point. In this work, dnmA is shown to be a sensitive oscillator with respect to the cell cycle and further influences. Overall, the main focus is on the in vivo characterisation of the gene, the endogenous subcellular distribution of the protein, and its physiological roles in vivo in D. discoideum. To obtain indications of the in vivo signalling pathways in which DnmA is involved, it was first necessary to carry out a detailed analysis of the gene. Using highly sensitive molecular-biological methods such as chromatin IP and qRT-PCR, a complete expression profile over the cell and life cycle of D. discoideum was compiled. Particularly interesting are the results for an original wild-type strain (NC4), whose dnmA expression profile deviates quantitatively from other wild-type strains. Cell-cycle-dependent effects of DnmA were also determined at the protein level. Microscopic examination of different DnmA-GFP strains revealed changes in localisation during mitosis. Furthermore, a DnmA-GFP construct under the control of the endogenous promoter was generated, whereby the protein could be clearly identified during development as a cell-type-specific protein, namely a prespore- or spore-specific protein. For the in vivo analysis of the catalytic activity of the enzyme, the findings from the characterisation of the gene and the protein could then be used to test in vivo substrate candidates. It emerged that, of all the substrate candidates examined so far, only tRNA^Asp could be confirmed as an in vivo substrate. As a particular finding, a quantitative difference in the methylation level between different wild-type strains was detected. Furthermore, the methylation of, and binding to, a DNA substrate candidate were determined. It could be shown that DnmA binds in vivo to sections of the retrotransposon DIRS-1 in a highly sequence-specific manner. For the substrate candidate snRNA U2, a stable in vitro complex formation between U2 and hDnmt2 was also shown. Overall, on the basis of the expression data obtained, the activity of the enzyme and its substrates were re-characterised in vivo and in vitro.
Abstract:
The possibility to develop automatically running models which can capture some of the most important factors driving the urban climate would be very useful for many planning aspects. With the help of such modelled climate data, the creation of the typically used “Urban Climate Maps” (UCM) can be accelerated and facilitated. This work describes the development of a special ArcGIS software extension, along with two supporting databases, to achieve this functionality. At present, the lack of comparability between different UCMs and imprecise planning advice, together with the significant technical problems of manually creating conventional maps, are central issues. Inflexibility and static behaviour also reduce the maps’ practicality. From experience, planning processes are more productive when new planning parameters can be entered directly via the existing work surface so that the impact of the changed data is mapped immediately, where possible. In addition to the direct climate figures, information from other planning areas (such as regional characteristics, developments, etc.) has to be taken into account when creating the UCM as well. Taking all these requirements into consideration, an automated calculation process for urban climate impact parameters will make the creation of homogeneous UCMs considerably more efficient.
Abstract:
Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might appear, such as services not being available or not responding in the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules makes it possible to distinguish between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property predictions. We consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
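A conceptual R sketch of the QoS-aware selection problem underlying the thesis (not one of the BPRules heuristics themselves): for each abstract task, several candidate services with different QoS values are available, and a simple utility-based greedy pick chooses one candidate per task while checking an end-to-end response-time constraint. All service names and QoS figures are invented.

```r
candidates <- list(
  taskA = data.frame(service = c("A1", "A2"), response = c(120, 80), cost = c(1, 3)),
  taskB = data.frame(service = c("B1", "B2"), response = c(200, 150), cost = c(2, 5))
)

utility <- function(df, w_resp = 0.7, w_cost = 0.3) {
  # normalise so that lower response time and lower cost yield higher utility
  norm <- function(x) if (diff(range(x)) == 0) rep(1, length(x))
                      else 1 - (x - min(x)) / diff(range(x))
  w_resp * norm(df$response) + w_cost * norm(df$cost)
}

select_services <- function(cands, max_total_response) {
  picks <- lapply(cands, function(df) df[which.max(utility(df)), ])
  total <- sum(vapply(picks, function(p) p$response, numeric(1)))
  if (total > max_total_response)
    warning("end-to-end constraint violated; a real selector would backtrack or re-optimise")
  do.call(rbind, picks)
}

select_services(candidates, max_total_response = 300)
```

The heuristics developed in the thesis address exactly the weakness this greedy sketch exposes: locally optimal picks can violate global constraints, and non-linear objectives require more careful search than per-task maximisation.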
Abstract:
A better understanding of the effects of digestate application on the plant community, the soil microbial community, and nutrient and carbon dynamics is crucial for sustainable grassland management and for preventing the loss of species and functional diversity. The specific research objectives of the thesis were: (i) to investigate the effects of digestate application on grass species and the soil microbial community, focussing especially on nitrogen dynamics in the plant-soil system, and to examine the suitability of the digestate from the “integrated generation of solid fuel and biogas from biomass” (IFBB) system as fertilizer (Chapter 3); (ii) to investigate the relationship between the plant community and the functionality of the soil microbial community of extensively managed meadows, taking into account temporal variations during the vegetation period and abiotic soil conditions (Chapter 4); (iii) to investigate the suitability of implementing the IFBB concept as a grassland conservation measure for meadows and possible associated effects of IFBB digestate application on the plant and soil microbial community as well as on soil microbial substrate utilization and catabolic evenness (Chapter 5). Taken together, the results indicate that the digestate generated during the IFBB process stands out from digestates of conventional whole-crop digestion on the basis of its higher nitrogen use efficiency, and that it is useful for increasing the harvestable biomass and the nitrogen content of the biomass, especially of L. perenne, a common species of intensively used grasslands. Further, a medium application rate of IFBB digestate (50% of the nitrogen removed with the harvested biomass, corresponding to 30–50 kg N ha-1 a-1) may be a possibility for the conservation management of different meadows without changing the functional above- and belowground characteristics of the grasslands, thereby offering an ecologically worthwhile alternative to mulching. Overall, the soil microbial biomass and catabolic performance under planted soil were affected only marginally by digestate application, and rather by soil properties and partly by grassland species and legume occurrence. The investigated extensively managed meadows revealed a high soil catabolic evenness, which remained resilient to the medium IFBB application rate after a three-year period of application.
Abstract:
Biological systems exhibit rich and complex behavior through the orchestrated interplay of a large array of components. It is hypothesized that separable subsystems with some degree of functional autonomy exist; deciphering their independent behavior and functionality would greatly facilitate understanding the system as a whole. Discovering and analyzing such subsystems are hence pivotal problems in the quest to gain a quantitative understanding of complex biological systems. In this work, using approaches from machine learning, physics and graph theory, methods for the identification and analysis of such subsystems were developed. A novel methodology, based on a recent machine learning algorithm known as non-negative matrix factorization (NMF), was developed to discover such subsystems in a set of large-scale gene expression data. This set of subsystems was then used to predict functional relationships between genes, and this approach was shown to score significantly higher than conventional methods when benchmarked against existing databases. Moreover, a mathematical treatment was developed for simple network subsystems based only on their topology (independent of particular parameter values). Application to a problem of experimental interest demonstrated the need for extensions to the conventional model to fully explain the experimental data. Finally, the notion of a subsystem was evaluated from a topological perspective. A number of different protein networks were examined to analyze their topological properties with respect to separability, seeking to find separable subsystems. These networks were shown to exhibit separability in a nonintuitive fashion, while the separable subsystems were of strong biological significance. It was demonstrated that the separability property found was not due to incomplete or biased data, but is likely to reflect biological structure.
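A toy-sized base-R sketch of NMF with multiplicative updates (Lee-Seung style), to make the technique concrete; the thesis applies NMF to large-scale gene expression matrices and uses its own methodology, so dimensions, data and the gene-to-subsystem assignment below are purely illustrative.

```r
set.seed(1)
V <- matrix(runif(20 * 10), nrow = 20)   # genes x samples, non-negative "expression"
k <- 3                                   # number of subsystems (metagenes)

W <- matrix(runif(nrow(V) * k), nrow = nrow(V))
H <- matrix(runif(k * ncol(V)), nrow = k)

# Multiplicative updates minimising the Frobenius reconstruction error.
for (iter in 1:500) {
  H <- H * (t(W) %*% V) / (t(W) %*% W %*% H + 1e-9)
  W <- W * (V %*% t(H)) / (W %*% H %*% t(H) + 1e-9)
}

frob_error      <- sqrt(sum((V - W %*% H)^2))   # how well V = W %*% H is approximated
gene_subsystem  <- apply(W, 1, which.max)       # crude: assign each gene to its dominant metagene
frob_error
table(gene_subsystem)
```

The columns of W can be read as candidate subsystems: genes with large loadings in the same column co-vary across samples, which is the basis for predicting functional relationships between them.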
Abstract:
As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in such a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real-time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
Abstract:
Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and to then manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
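A minimal base-R illustration of the combination advocated above, namely a centred log-ratio (clr) transform followed by ordinary principal components analysis; the oxide compositions are invented, and the documented library referred to in the paper wraps this kind of workflow in dedicated functions rather than the ad hoc code shown here.

```r
set.seed(42)
comp <- matrix(runif(30 * 4, min = 1, max = 40), ncol = 4)
comp <- comp / rowSums(comp)                       # close each row to a constant sum
colnames(comp) <- c("SiO2", "Al2O3", "CaO", "Fe2O3")   # illustrative oxide labels

clr <- function(x) {
  lx <- log(x)
  sweep(lx, 1, rowMeans(lx), "-")                  # subtract the log of the row geometric mean
}

pca <- prcomp(clr(comp), center = TRUE, scale. = FALSE)
summary(pca)                                       # variance explained per component
head(pca$x[, 1:2])                                 # scores usable for clustering / provenance grouping
```

Running standard PCA on the clr-transformed data instead of the raw percentages avoids the spurious correlations induced by the constant-sum constraint, which is precisely the argument for combining log-ratio methods with traditional multivariate analysis.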
Abstract:
“compositions” is a new R-package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, piecharts) and extensive graphical tools for principal components. Afterwards, portion and proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
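An indicative usage sketch of the package described above; the data are invented, and the calls shown (acomp, the compositional mean, the ternary plot method and ilr) reflect the package's documented interface as the author understands it, so treat the exact output as illustrative.

```r
# install.packages("compositions")   # CRAN package discussed in the abstract
library(compositions)

raw <- data.frame(A = c(10, 20, 15), B = c(30, 25, 35), C = c(60, 55, 50))
x <- acomp(raw)       # interpret the data in the Aitchison geometry

mean(x)               # compositional (closed geometric) mean
plot(x)               # ternary diagram for a three-part composition
ilr(x)                # isometric log-ratio coordinates for standard multivariate tools
```

Switching the constructor to rcomp(), aplus() or rplus() places the same data in one of the other three geometries, and the generic functions then select the corresponding type of analysis, as described above.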
Abstract:
The R-package “compositions” is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, now containing facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeroes, incomplete observations and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a “regular” subcomposition (where all parts are actually observed and the datum behaves typically) and a “problematic” subcomposition (with those unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of the irregularities in the datum subcomposition(s). To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique and the projection approach. To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterization. Furthermore, the package provides special plots that help to understand the nature of outliers in the dataset. Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
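A rough base-R sketch of the projection idea described above, reduced to pairwise log-ratio means: each datum contributes to a log-ratio statistic only through the parts it was actually observed in. This is a conceptual illustration of the principle, not the package's implementation, and the data are invented.

```r
X <- rbind(c(0.2, 0.5, 0.3),
           c(0.1, NA , 0.9),    # part 2 missing: this row only informs the 1:3 ratio
           c(0.4, 0.4, 0.2),
           c(NA , 0.6, 0.4))    # part 1 missing: this row only informs the 2:3 ratio

logratio_mean <- function(X) {
  p <- ncol(X)
  m <- matrix(NA_real_, p, p)
  for (i in 1:p) for (j in 1:p) {
    ok <- !is.na(X[, i]) & !is.na(X[, j])          # rows observed in both parts
    m[i, j] <- mean(log(X[ok, i] / X[ok, j]))      # pairwise ("projected") estimate
  }
  m
}

logratio_mean(X)   # each entry uses exactly the rows where that subcomposition was observed
```

The package generalises this principle beyond pairwise means, projecting each incomplete datum onto the subspace spanned by its observed subcomposition when estimating compositional parameters.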