954 results for 080302 Computer System Architecture


Relevance:

30.00%

Publisher:

Abstract:

Any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify a claimed identity, referred to as biometrics, has gained significant interest in the wake of heightened concerns about security and rapid advancements in networking, communication and mobility. Multimodal biometrics is expected to be ultra-secure and reliable, owing to the presence of multiple, independent verification clues. In this study, a multimodal biometric system utilising audio and facial signatures has been implemented and an error analysis has been carried out. A total of 1000 face images and 250 sound tracks of 50 users are used for training the proposed system. To account for attempts by unregistered users, data of 25 new users are also tested. Short-term spectral features were extracted from the sound data, and vector quantization was performed using the K-means algorithm. Face images are identified with the eigenface approach using Principal Component Analysis. The success rate of the multimodal system using speech and face is higher than that of the individual unimodal recognition systems.
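The two unimodal building blocks named in the abstract, K-means vector quantisation of spectral speech features and PCA-based eigenfaces, can be sketched as follows. This is a minimal illustration using scikit-learn; the feature dimensions, codebook size and number of eigenfaces are assumptions, not values from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# --- Speaker model: vector-quantise short-term spectral feature vectors ---
def train_speaker_codebook(features, n_codewords=32):
    """features: (n_frames, n_coeffs) spectral vectors of one speaker."""
    return KMeans(n_clusters=n_codewords, n_init=10).fit(features)

def vq_distortion(codebook, features):
    """Mean squared distance of test frames to the nearest codeword
    (lower = better match to this speaker)."""
    return -codebook.score(features) / len(features)

# --- Face model: eigenfaces via Principal Component Analysis ---
def train_eigenfaces(face_matrix, n_components=50):
    """face_matrix: (n_images, n_pixels), each row a flattened face."""
    pca = PCA(n_components=n_components).fit(face_matrix)
    return pca, pca.transform(face_matrix)

def face_distance(pca, gallery_weights, probe_image):
    """Distance of a probe face to the closest gallery face in eigenspace."""
    w = pca.transform(probe_image.reshape(1, -1))
    return np.linalg.norm(gallery_weights - w, axis=1).min()
```

A multimodal decision would then combine the two scores, e.g. by a weighted sum after normalisation, which is one common fusion strategy.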

Relevance:

30.00%

Publisher:

Abstract:

Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high recognition accuracy because the iris is located in a place that remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, of which segmentation is the most critical. Current segmentation methods are still limited in localizing the iris because they assume a circular pupil shape. In this research, the Daugman method is applied to investigate the segmentation techniques. Eyelid detection is included as a further part of the segmentation stage, to localize the iris accurately and to remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. Hamming distance is used to compare iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detection operators is performed; the Canny operator is observed to be best suited to extract most of the edges needed to generate the iris code for comparison. A recognition rate of 89% and a rejection rate of 95% are achieved.
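The matching stage can be illustrated with a short sketch: iris codes are binary vectors, and two templates are compared by their normalised Hamming distance over bits not masked out by eyelid detection. The code length and example values are assumptions for illustration, not parameters of the study.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """code_*: 1-D boolean arrays (iris codes); mask_*: valid-bit masks
    that exclude eyelid/eyelash regions found during segmentation."""
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    disagreeing = np.logical_xor(code_a, code_b) & valid
    return disagreeing.sum() / valid.sum()

# Unrelated random codes give a distance near 0.5; identical codes give 0.
rng = np.random.default_rng(0)
a = rng.random(2048) > 0.5
b = rng.random(2048) > 0.5
print(hamming_distance(a, b))   # ~0.5 for unrelated patterns
print(hamming_distance(a, a))   # 0.0 for a perfect match
```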

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel approach to recognizing Grantha, an ancient script of South India, and converting it to Malayalam, a prevalent South Indian language, using an online character recognition mechanism. The motivation behind this work is twofold: (i) developing a mechanism to recognize the Grantha script in the modern world, and (ii) affirming the strong connection between Grantha and Malayalam. A framework for the recognition of Grantha script using online character recognition is designed and implemented. The features extracted from the Grantha script comprise mainly time-domain features based on writing direction and curvature. The recognized characters are mapped to the corresponding Malayalam characters. The framework was tested on a bed of medium-length manuscripts containing 9-12 sample lines, and on printed pages of the book Soundarya Lahari, written in Grantha by Sri Adi Shankara, to recognize words and sentences. The manuscript recognition rates of the system are 92.11% for Grantha, 90.82% for old Malayalam and 89.56% for new Malayalam script. The recognition rates on pages of the printed book are 96.16% for Grantha, 95.22% for old Malayalam script and 92.32% for new Malayalam script. These results show the efficiency of the developed system.
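A minimal sketch of the kind of time-domain features mentioned above, writing direction and curvature computed from an online pen trajectory. The concrete feature definitions are common choices in online handwriting recognition and are assumed here, not taken from the paper.

```python
import numpy as np

def direction_and_curvature(stroke):
    """stroke: (n_points, 2) array of pen (x, y) positions in time order."""
    dx = np.gradient(stroke[:, 0])
    dy = np.gradient(stroke[:, 1])
    direction = np.arctan2(dy, dx)          # writing direction at each point
    # Curvature approximated as the change in direction between samples,
    # unwrapped to avoid jumps at the -pi/pi boundary.
    curvature = np.gradient(np.unwrap(direction))
    return direction, curvature
```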

Relevance:

30.00%

Publisher:

Abstract:

Malayalam is one of the 22 scheduled languages of India, with more than 130 million speakers. This paper reports on the development of a speaker-independent, continuous-speech transcription system for Malayalam. The system employs Hidden Markov Models (HMM) for acoustic modeling and Mel Frequency Cepstral Coefficients (MFCC) for feature extraction. It is trained with 21 male and female speakers aged 20 to 40 years. When tested with a set of continuous speech data, the system obtained a word recognition accuracy of 87.4% and a sentence recognition accuracy of 84%.
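A hedged sketch of the MFCC front end such a system typically uses, here with librosa; the window, hop and coefficient counts are common defaults for HMM acoustic models, not values reported in the paper.

```python
import librosa

def extract_mfcc(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=16000)
    # 25 ms analysis windows with a 10 ms hop, a common ASR setup
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))
    return mfcc.T   # (n_frames, n_mfcc): one feature vector per frame
```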

Relevance:

30.00%

Publisher:

Abstract:

Content-Based Image Retrieval is one of the prominent areas of Computer Vision and Image Processing. Recognition of handwritten characters has been a popular research area for many years and still remains an open problem. The proposed system uses visual image queries for retrieving similar images from a database of Malayalam handwritten characters. Local Binary Pattern (LBP) descriptors of the query images are extracted, and these features are compared with the features of the images in the database to retrieve the desired characters. The system achieves excellent retrieval performance with the local binary pattern descriptor.
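The retrieval pipeline can be sketched in a few lines: compute an LBP histogram per image and rank database images by histogram distance to the query. The radius, neighbourhood size and distance measure below are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, p=8, r=1):
    """Normalised histogram of uniform LBP codes for one grayscale image."""
    lbp = local_binary_pattern(gray_image, P=p, R=r, method="uniform")
    n_bins = p + 2                     # uniform patterns plus one "other" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def retrieve(query_hist, db_hists, k=5):
    """Indices of the k database images closest to the query histogram."""
    dists = [np.linalg.norm(query_hist - h) for h in db_hists]
    return np.argsort(dists)[:k]
```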

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses the implementation details of a child-friendly, good-quality English text-to-speech (TTS) system that is phoneme-based and concatenative, and is easy to set up and use with little memory. Direct waveform concatenation and linear predictive coding (LPC) are used. Most existing TTS systems are unit-selection based and use standard speech databases available in neutral adult voices. Here, reduced memory use is achieved by concatenating phonemes and by replacing phonetic wave files with their LPC coefficients. Linguistic analysis was used to reduce the algorithmic complexity instead of signal processing techniques. A sufficient degree of customization and generalization catering to the needs of the child user has been included through provision for vocabulary and voice selection. Prosody has also been incorporated. This inexpensive TTS system was implemented in MATLAB, with the synthesis presented by means of a graphical user interface (GUI), making it child-friendly. It can be used not only as an interesting language-learning aid for the normal child but also as a speech aid for the vocally disabled child. The quality of the synthesized speech was evaluated using the mean opinion score (MOS).
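The memory-saving idea, storing LPC coefficients per phoneme instead of waveforms and resynthesising through the all-pole filter, can be sketched as follows. The filter order, excitation model and sampling rate are assumptions; the actual system's synthesis (and the paper's MATLAB implementation) is more elaborate.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def encode_phoneme(wave, order=12):
    """Replace a phoneme waveform with its LPC coefficients and a gain."""
    a = librosa.lpc(wave, order=order)     # a[0] == 1.0
    residual = lfilter(a, [1.0], wave)     # prediction-error signal
    return a, float(np.std(residual))

def decode_phoneme(a, gain, n_samples, sr=16000, f0=120.0):
    """Crude resynthesis: drive the all-pole LPC filter 1/A(z) with an
    impulse train at pitch f0."""
    excitation = np.zeros(n_samples)
    excitation[:: int(sr / f0)] = 1.0
    return lfilter([gain], a, excitation)
```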

Relevance:

30.00%

Publisher:

Abstract:

A GIS has been designed with limited functionality but with a novel design approach. The spatial data model adopted in the design of KBGIS is the unlinked vector model: each map entity is encoded separately in vector form, without referencing any of its neighbouring entities. Spatial relations, in other words, are not encoded. This approach is adequate for routine analysis of geographic data represented on a planar map and for their display. Even though spatial relations are not encoded explicitly, they can be extracted through specially designed queries. This work was undertaken as an experiment to study the feasibility of developing a GIS using a knowledge base in place of a relational database. The source of input spatial data was accurate sheet maps that were manually digitised. Each identifiable geographic primitive was represented as a distinct object, with its spatial properties and attributes defined. Composite spatial objects, made up of primitive objects, were formulated based on production rules defining such compositions. The facts and rules were then organised into a production system using OPS5.
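The production-rule mechanism the abstract attributes to OPS5 can be conveyed with a hypothetical forward-chaining sketch in Python: rules fire on matching facts and assert composite objects until a fixed point is reached. The rule and fact formats here are invented for illustration.

```python
def forward_chain(facts, rules):
    """facts: set of tuples; rules: list of (condition, action) pairs,
    where condition(facts) -> bool and action(facts) -> set of new facts."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if not condition(facts):
                continue
            new = action(facts) - facts
            if new:
                facts |= new
                changed = True
    return facts

# Example rule: infer a composite 'bridge' object wherever a road
# primitive crosses a river primitive.
facts = {("road", "R1"), ("river", "V1"), ("crosses", "R1", "V1")}
rules = [(
    lambda f: any(t[0] == "crosses" for t in f),
    lambda f: {("bridge",) + t[1:] for t in f if t[0] == "crosses"},
)]
print(forward_chain(facts, rules))   # now includes ('bridge', 'R1', 'V1')
```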

Relevance:

30.00%

Publisher:

Abstract:

Stereoscopic 3-D display is based on the true-to-life presentation of different perspectives to the right and left eye. It is gaining ever greater importance in medicine, architecture, design, computer games and cinema, and in the future possibly also in television. 3-D displays additionally reproduce spatial depth and can be roughly divided into four groups: stereoscopes and head-mounted displays, glasses-based systems, autostereoscopic displays, and true 3-D displays. Among these, the glasses-free autostereoscopic approach, which uses N ≥ 2 perspectives, has high potential. The best quality in this group can be achieved with the integral photography method, which encodes both horizontal and vertical parallax; however, the technique is very complex and is therefore rarely used. The best compromise between performance and cost is offered by precisely manufactured lenticular lens sheets (LRS), which are superior to the earlier barrier masks in terms of light output and optical properties. Ergonomically favourable multi-perspective 3-D display in particular requires a high physical monitor resolution, which is already quite high in modern TFT displays. A further improvement, by a theoretical factor of three, is achieved by individually addressing the adjacent red, green and blue subpixels. This is made possible by the colour resolution of the human visual system being roughly an order of magnitude lower than its luminance resolution. It thus becomes feasible to implement a subpixel filter which, in accordance with these physiological properties, operates in the YUV colour model, which separates luminance and chrominance. Furthermore, slanting the lenses at a ratio of 1:6 proves favourable: colour artefacts are minimised, and image sharpness is increased because the technologically unavoidable separating elements between the subpixels are magnified less systematically. The slant angle can be chosen freely; in this sense the filter is adaptive to the slant angle, although this value is an invariant for a given 3-D monitor. The quantity to be maximised is the perspective-pixel parameter, the product of the number of perspectives N and the effective resolution per perspective. The ideal case of a threefold increase is not reached in practice: measurements with test images and character recognition tests yielded a value of just over 2. This is nevertheless a significant improvement in the quality of the 3-D display. In the future, further improvements in this figure of merit can be expected from new technologies with finer resolution than TFT, such as LCoS or OLED; combining them with the proposed filtering method will of course remain possible and may well be worthwhile.
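The colour-model step behind the proposed subpixel filter can be illustrated by the standard RGB-to-YUV conversion, which separates luminance (Y), where the eye resolves detail finely, from chrominance (U, V), which it resolves coarsely. The matrix below is the common BT.601 variant; the actual filter design in the thesis is considerably more involved.

```python
import numpy as np

# Standard BT.601-based RGB -> YUV weights.
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114   ],   # Y: luminance
    [-0.14713, -0.28886,  0.436   ],   # U: blue-difference chrominance
    [ 0.615,   -0.51499, -0.10001 ],   # V: red-difference chrominance
])

def rgb_to_yuv(rgb):
    """rgb: (..., 3) array with values in [0, 1]; returns YUV, same shape."""
    return rgb @ RGB_TO_YUV.T
```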

Relevance:

30.00%

Publisher:

Abstract:

Owing to its low energy demand and potentially good expenditure figures, an earth-to-air heat exchanger (L-EWT) is a candidate environmentally friendly supply component for buildings. A particular advantage is that an L-EWT can preheat or cool the ambient air depending on the season. L-EWTs are therefore interesting for saving energy not only in residential construction but also wherever large amounts of fossil energy are still needed for space cooling, in the office and industrial building sector. The operating range of an L-EWT spans volume flows from 100 m³/h to several 100,000 m³/h. This range and the transient boundary conditions make it very difficult to draw generally valid conclusions about the expected thermal system behaviour from the multitude of possible design variants. The main goal of this work is to develop, on the basis of extensive multi-year measurements on a purpose-built test facility and a specially adapted numerical model, characteristic figures that allow the operating properties of an L-EWT to be determined in everyday planning practice and a technically, ecologically and economically efficient system to be identified. The figures elewt (expenditure figure), QV (net volumetric performance), ME (yield per metre), and the combination of v (flow velocity) and VL (volume flow per metre) are defined; they yield important information with which the quality of system variants can be assessed in the planning phase. Further findings on the more accurate estimation of soil parameters are presented. The hygienic condition of the air transported through the L-EWT is described for the warm season, when condensation occurs; for this reason, all relevant air-hygiene parameters were recorded in several elaborate measurement campaigns and checked for pathogenic effects. Sensitivity analyses show which errors arise if incorrect boundary conditions are assumed. In addition, this work compiles essential fundamental findings obtained from operational observation and from evaluating the extensive measurement data of several installations, findings that are significant for practical implementation and operation. Notes on material properties and system economics are given in detail.
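Assuming conventional definitions (the thesis' exact formulas may differ), such characteristic figures could be computed from monitoring data roughly as follows; all function names and definitions here are hypothetical illustrations.

```python
# Hypothetical illustrations of the named figures, not the thesis' formulas.
def expenditure_figure(fan_energy_kwh, useful_thermal_kwh):
    """elewt: auxiliary (fan) energy spent per unit of useful thermal
    energy delivered by the L-EWT; lower values are better."""
    return fan_energy_kwh / useful_thermal_kwh

def metre_yield(useful_thermal_kwh, pipe_length_m):
    """ME: useful thermal energy delivered per metre of buried pipe."""
    return useful_thermal_kwh / pipe_length_m

def metre_volume_flow(volume_flow_m3h, pipe_length_m):
    """VL: air volume flow handled per metre of pipe."""
    return volume_flow_m3h / pipe_length_m
```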

Relevance:

30.00%

Publisher:

Abstract:

Genetic programming is known to provide good solutions for many problems, such as the evolution of network protocols and distributed algorithms. In such cases it is typically a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed, delivering its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates one advantage of our approach: the generation of source code in different programming languages.
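A compact, self-contained sketch of the genetic-programming loop the paper treats as a component: evolve expression trees against a fitness function and hand the best individual to an exporter (in the paper, an XMI-encoded UML model). The toy problem, the crude variation scheme (selection plus random restarts instead of crossover and mutation) and all names are illustrative assumptions.

```python
import random

OPS = [("add", lambda a, b: a + b), ("mul", lambda a, b: a * b)]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(0, 3)])
    name, _ = random.choice(OPS)
    return (name, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    name, left, right = tree
    return dict(OPS)[name](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Toy target: approximate f(x) = x^2 + 1 on a few sample points.
    return -sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(5))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    # Keep the fittest quarter, refill with fresh random trees (a crude
    # stand-in for crossover and mutation).
    population = population[:50] + [random_tree() for _ in range(150)]

best = max(population, key=fitness)
print(best)   # in the paper's setup, this would be exported as an XMI model
```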

Relevance:

30.00%

Publisher:

Abstract:

Ubiquitous computing has been an attractive research field of the past and current decades. It is about unobtrusively supporting people in their everyday tasks by means of computers. This support is enabled by the omnipresence of computers, which spontaneously form distributed communication networks in order to exchange and process information. Ambient intelligence is an application of ubiquitous computing and a strategic research direction of the Information Society Technology programme of the European Union; its goal is a more comfortable and safer life. Distributed communication networks for ubiquitous computing are characterised by the heterogeneity of the computers involved, which range from tiny devices embedded in everyday objects to powerful mainframes. The computers connect spontaneously via wireless network technologies such as wireless local area networks (WLAN), Bluetooth, or UMTS. This heterogeneity complicates the development and construction of distributed communication networks. Middleware is a software technology that reduces complexity through abstraction into a homogeneous layer, offering a uniform view of the resources, functionalities and computers it abstracts. Distributed communication networks for ubiquitous computing are characterised by spontaneous connections between computers, whereas classical middleware assumes that computers are permanently connected to each other. The concept of service-oriented architecture enables middleware that also supports spontaneous connections between computers; the functionality of such middleware is realised by services, which are independent software units. The Wireless World Research Forum describes services that future middleware should contain. These services are hosted by an execution environment; however, there are as yet no definitions of how such an execution environment should be shaped and what functional scope it must have. This thesis contributes to aspects of middleware development for distributed communication networks in ubiquitous computing, with a focus on middleware and foundational technologies. The contributions are provided as concepts and ideas for middleware development, covering service discovery, service updating, and contracts between services. They are provided in a framework optimised for middleware development. This framework, called Framework for Applications in Mobile Environments (FAME²), contains guidelines, a definition of an execution environment, and support for various access control mechanisms to protect middleware from unauthorised use. The capabilities of the FAME² execution environment include:
• minimal resource usage, so that it can also run on computers with few resources, such as mobile phones and very small devices
• support for adapting the middleware by changing the contained services while the middleware is running
• an open interface, so that practically any existing service discovery solution can be used
• the ability to update services at runtime, so that corrective, optimising and adaptive maintenance can be performed on services
A companion contribution is the Extensible Constraint Framework (ECF), which makes Design by Contract (DbC) usable within FAME². DbC is a technique for formulating contracts between services and thereby increasing software quality. ECF allows such contracts to be negotiated as well as optimised.
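The Design-by-Contract idea that ECF brings to FAME² can be conveyed by a hypothetical Python sketch: pre- and postconditions attached to a service operation and checked at call time. The names and the checking strategy are assumptions; ECF itself additionally negotiates and optimises such contracts.

```python
import functools

def contract(pre=None, post=None):
    """Attach a precondition and a postcondition to a service operation."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre and not pre(*args, **kwargs):
                raise ValueError(f"precondition violated for {fn.__name__}")
            result = fn(*args, **kwargs)
            if post and not post(result):
                raise ValueError(f"postcondition violated for {fn.__name__}")
            return result
        return wrapper
    return decorate

@contract(pre=lambda user_id: isinstance(user_id, str) and bool(user_id),
          post=lambda profile: "preferences" in profile)
def lookup_profile(user_id):
    """A toy service operation bound by a contract."""
    return {"user": user_id, "preferences": {}}
```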

Relevance:

30.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is land-use modeling. With land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation; these features enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications; the scripting-language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period from 1981 to 2002; analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component. The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map; several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and the respective reference land-use maps were compiled for this period. It could be shown that efficient automated model calibration with SITE is possible; nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
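The figure-of-merit objective mentioned above is commonly defined as the ratio of correctly simulated change to all observed or simulated change; the sketch below follows that common definition (SITE's implementation may differ in details).

```python
import numpy as np

def figure_of_merit(initial, reference, simulated):
    """All arguments: 2-D arrays of land-use class ids on the same grid.
    Returns hits / (hits + misses + false alarms + wrongly simulated change)."""
    ref_change = reference != initial
    sim_change = simulated != initial
    hits = ref_change & sim_change & (reference == simulated)
    misses = ref_change & ~sim_change
    false_alarms = sim_change & ~ref_change
    wrong_change = ref_change & sim_change & (reference != simulated)
    denom = hits.sum() + misses.sum() + false_alarms.sum() + wrong_change.sum()
    return hits.sum() / denom if denom else 1.0
```

In a genetic-algorithm calibration, this value serves directly as the fitness of a parameter set: the simulation is run, its output map is compared to the reference map, and parameter sets with a higher figure of merit are preferred.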

Relevance:

30.00%

Publisher:

Abstract:

Formal Concept Analysis allows the derivation of conceptual hierarchies from data tables. It is applied in various domains, e.g., data analysis, information retrieval, and knowledge discovery in databases. In order to deal with increasing sizes of the data tables (and to allow more complex data structures than just binary attributes), conceptual scales have been developed. They are considered metadata which structure the data conceptually. But in large applications, the number of conceptual scales increases as well, and techniques are needed to support the user's navigation on this meta-level of conceptual scales. In this paper, we attack this problem by extending the set of scales with hierarchically ordered higher-level scales and by introducing a visualization technique called nested scaling. We extend the two-level architecture of Formal Concept Analysis (the data table plus one level of conceptual scales) to a many-level architecture with a cascading system of conceptual scales. The approach also allows the representation techniques of Formal Concept Analysis to be used for the visualization of thesauri and ontologies.
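The core of Formal Concept Analysis on a binary data table can be sketched with the two derivation operators, from which formal concepts, and hence the conceptual hierarchy, arise. The example context below is invented for illustration.

```python
def derive_attrs(objects, context):
    """Attributes shared by every object in a nonempty set of objects."""
    return set.intersection(*(context[o] for o in objects))

def derive_objs(attrs, context):
    """Objects possessing every attribute in attrs."""
    return {o for o, has in context.items() if attrs <= has}

context = {                     # object -> set of binary attributes
    "frog":  {"aquatic", "terrestrial"},
    "fish":  {"aquatic"},
    "human": {"terrestrial"},
}
A = derive_attrs({"frog", "fish"}, context)   # {'aquatic'}
B = derive_objs(A, context)                   # {'frog', 'fish'}
print((B, A))   # (extent, intent): a formal concept of this context
```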

Relevance:

30.00%

Publisher:

Abstract:

Social bookmark tools are rapidly emerging on the Web. In such systems, users set up lightweight conceptual structures called folksonomies. The reason for their immediate success is that no specific skills are needed to participate. In this paper we specify a formal model for folksonomies and briefly describe our own system, BibSonomy, which allows sharing of both bookmarks and publication references in a kind of personal library.
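In this line of work a folksonomy is usually formalised as a tuple F = (U, T, R, Y) of users, tags, resources and a ternary tag-assignment relation Y ⊆ U × T × R; a minimal sketch of that model (the example data is invented):

```python
from collections import namedtuple

Folksonomy = namedtuple("Folksonomy", ["users", "tags", "resources", "Y"])

# Y: set of (user, tag, resource) tag assignments
Y = {("alice", "python", "https://docs.python.org"),
     ("bob",   "python", "https://docs.python.org"),
     ("bob",   "web",    "https://example.org")}

f = Folksonomy(users={u for u, _, _ in Y},
               tags={t for _, t, _ in Y},
               resources={r for _, _, r in Y},
               Y=Y)

def tags_of(f, user, resource):
    """Tags a given user assigned to a given resource."""
    return {t for u, t, r in f.Y if u == user and r == resource}

print(tags_of(f, "bob", "https://docs.python.org"))   # {'python'}
```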

Relevance:

30.00%

Publisher:

Abstract:

In recent years, progress in mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks offer ever-increasing bit rates, mobile network operators provide more and more services, and at the same time the costs of mobile services and bit rates are decreasing. However, mobile services today still lack functions that seamlessly integrate into users' everyday life: service attributes such as context-awareness and personalisation are often proprietary, limited or not available at all. To overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services, and these service platforms are to support such service attributes. Service platforms provide basic service-independent functions such as billing, identity management, context management and user profile management. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can utilise the platform-side functions for their own purposes; the platform-side support for such functions takes complexity, development time and development costs away from service developers. Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments, and their combination can be described as situation-dependent personalisation of services. Supporting this feature requires several processing steps. The focus of this doctoral thesis is on the processing step in which the user's current context is matched against situation-dependent user preferences to find the user preferences that apply to the user's current situation. Achieving this requires a user profile management system and corresponding functionality; these parts are also covered by this thesis. Altogether, this thesis provides the following contributions. The first part of the contribution is mainly architecture-oriented: first and foremost, we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments. In particular, the user profile management system has to deal with situation-specific user preferences and with user information for various services. In order to structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding the situation-dependent user preferences that match the current situation, for the personalisation of services. This functionality is provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. The mechanism is evaluated in terms of runtime performance and of supported functionality compared to other approaches; the results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
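The processing step this thesis focuses on, matching the current context against situation-dependent preferences, can be conveyed by a plain attribute-matching sketch. The thesis realises this step with ontology reasoning; all names and the most-specific-first heuristic below are illustrative assumptions.

```python
def matches(situation, context):
    """situation: dict of required context attribute values."""
    return all(context.get(k) == v for k, v in situation.items())

def select_preferences(context, situated_prefs):
    """Return the preferences whose situation fits the current context,
    most specific (most conditions) first."""
    hits = [(s, p) for s, p in situated_prefs if matches(s, context)]
    return [p for s, p in sorted(hits, key=lambda sp: -len(sp[0]))]

situated_prefs = [
    ({"location": "office"},                        {"ringtone": "silent"}),
    ({"location": "office", "activity": "meeting"}, {"messaging": "defer"}),
    ({},                                            {"ringtone": "loud"}),
]
ctx = {"location": "office", "activity": "meeting"}
print(select_preferences(ctx, situated_prefs))
# [{'messaging': 'defer'}, {'ringtone': 'silent'}, {'ringtone': 'loud'}]
```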