959 results for Church architecture - Data processing - Spain


Relevance: 100.00%

Abstract:

Transaction processing is generally regarded as one of the basic requirements of reliable data processing. A transaction is a sequence of operations whose execution can be treated as a single logical action. Four basic rules, abbreviated ACID, govern the execution of transactions. In distributed systems the need for transaction processing grows further, since the success of operations cannot be verified by local methods alone. Several attempts have been made to standardize distributed transaction processing, but despite these attempts no generally accepted open standards exist. The closest to such a standard is probably the X/Open DTP family of standards, and in particular the XA standard it includes. This work additionally studies how the vendor-independent database architecture of the Intellitel ONE system should be developed, with the goal of enabling the use of applications that require transaction processing and run on top of it.
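The distributed coordination this abstract refers to centres on two-phase commit, the protocol underlying XA. A minimal sketch, assuming a hypothetical `Participant` class (real XA resource managers expose xa_prepare/xa_commit/xa_rollback through a C interface):

```python
class Participant:
    """A hypothetical resource manager taking part in a distributed transaction."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):           # phase 1: vote on the outcome
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):            # phase 2a: make the changes durable
        self.state = "committed"

    def rollback(self):          # phase 2b: undo the changes
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare; any "no" vote aborts all.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```

The atomicity property of ACID is exactly what the all-or-nothing branch encodes: either every participant commits or every participant rolls back.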

Relevance: 100.00%

Abstract:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and we illustrate a data flux proposal.
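A concurrent, pipelined data system of the kind proposed can be sketched as one thread per processing stage, handing items downstream through queues; the stage functions below are illustrative placeholders, not Gaia's actual payload modules:

```python
# Each stage runs in its own thread; a sentinel flushes the pipeline.
import queue
import threading

SENTINEL = None

def stage(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)   # propagate shutdown downstream
            break
        outbox.put(fn(item))

def run_pipeline(items, *stage_fns):
    # One queue between every pair of adjacent stages.
    qs = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(SENTINEL)
    out = []
    while True:
        item = qs[-1].get()
        if item is SENTINEL:
            break
        out.append(item)
    for t in threads:
        t.join()
    return out
```

Because each stage is a single thread reading a FIFO queue, item order is preserved while all stages work concurrently, which is the essential property of a pipelined design.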

Relevance: 100.00%

Abstract:

In this work we analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of current liquid crystal devices to display complex transmittance values when used as holographic displays. To perform this analysis we compute the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution (real and imaginary parts, amplitude and phase), as well as using the full complex information adjusted with a method that combines two configurations of the devices in an adding architecture. The RMS error between the amplitude of these reconstructions and the original amplitude is used to evaluate the quality of the information displayed. The results of the error analysis show different behavior for the reconstructions using the different parts of the complex distribution and using the combined method of two devices. Better reconstructions are obtained when using two devices whose configurations densely cover the complex plane when they are added. Simulated and experimental results are also presented.
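The kind of numerical comparison described can be sketched with an angular-spectrum propagator and the RMS amplitude error; wavelength and sampling values in the test are illustrative, and this is a generic propagation sketch, not the authors' exact reconstruction code:

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Purely imaginary exponent for propagating components.
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0)) * z
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz))

def rms_error(a, b):
    """RMS difference between the amplitudes of two complex fields."""
    return np.sqrt(np.mean((np.abs(a) - np.abs(b)) ** 2))
```

Reconstructing from only one part of the complex distribution (e.g. `np.abs(field)` or `field.real`) and propagating it yields a larger `rms_error` against the original amplitude, which is the comparison the abstract describes.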

Relevance: 100.00%

Abstract:

Background: Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful methods for integrating heterogeneous data types is the kernel-based approach. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task.

Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher or lower values of the variables analyzed.

Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of the biological knowledge.
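The two-step integration can be sketched as follows: one kernel per data source, a combined (here simply summed) kernel, and kernel PCA on the centred combination. This is a generic sketch of the technique, not the paper's implementation; the RBF kernel and equal weighting are assumptions:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(K, n_components=2):
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one     # double centring
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]     # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0))      # sample projections

def integrate(sources, n_components=2):
    K = sum(rbf_kernel(X) for X in sources)         # step 1 + step 2: combine kernels
    return kernel_pca(K, n_components)
```

Summing kernels is a valid combination because a sum of positive semi-definite matrices is itself a kernel; weighted sums are the obvious generalization.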

Relevance: 100.00%

Abstract:

We analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability to display complex values of liquid crystal devices when they are used as holographic displays. To do this analysis we study the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution. We also use the information adjusted with a method that combines two configurations of the devices in an adding architecture. The results of the error analysis show different behavior for the reconstructions when using the different methods. Simulated and experimental results are presented.

Relevance: 100.00%

Abstract:

The GREP department of the UdG has an additive-manufacturing system, the Fab@home Model 1, a 3D printer, also known as a rapid-prototyping machine. It is a small, versatile, low-cost machine that, starting from a CAD-generated model, can fabricate three-dimensional solid objects by superimposing horizontal layers. 3D printers were born from the need to test complex geometries in a relatively short time, something that in a conventional manufacturing process can be slow and expensive. Most machines with characteristics similar to the Fab@home are prepared to fabricate/extrude parts from silicones, food or polymers; plastic forming by extrusion, however, starts from virgin polymer in filament form. The objective of this project is to design a forming system that makes it possible to process polymer in pellet form and that fits the operation of the Fab@home; in the corresponding jargon, it is a matter of adapting the Fab@home to fused deposition modeling (FDM).

Relevance: 100.00%

Abstract:

A BASIC computer program (REMOVAL) was developed to compute, in a VAX/VMS environment, all the calculations of the removal method for population-size estimation (a catch-effort method for closed populations with constant sampling effort). The program follows the maximum-likelihood methodology, checks the failure conditions, applies the appropriate formula, and displays the estimates of population size and catchability, with their standard deviations and coefficients of variation, and two goodness-of-fit statistics with their significance levels. Data from removal experiments on the cyprinodontid fish Aphanius iberus in the Alt Empordà wetlands are used to exemplify the use of the program.
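For the special case of two removal passes there is a standard closed form (the two-sample removal estimate), which also exhibits the classic failure condition the program checks; this sketch does not reproduce REMOVAL's full maximum-likelihood machinery:

```python
def removal_estimate(c1, c2):
    """Estimate population size N and catchability p from two removal passes.

    c1, c2: animals caught (and removed) in the first and second pass.
    Returns None when c2 >= c1, the classic failure condition: the second
    catch must be smaller for the decline to carry information about N.
    """
    if c2 >= c1:
        return None
    n_hat = c1 ** 2 / (c1 - c2)      # estimated population size
    p_hat = (c1 - c2) / c1           # estimated catchability per pass
    return n_hat, p_hat
```

For example, removing 60 fish on the first pass and 30 on the second gives an estimated population of 120 with catchability 0.5.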

Relevance: 100.00%

Abstract:

This work investigates the performance of recent feature-based matching techniques when applied to the registration of underwater images. The matching methods are tested against different contrast-enhancing pre-processing of the images. Based on the experiments performed for the artifacts and deformations that dominate underwater images, the best-performing pre-processing, detection and description methods are proposed.
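One of the simplest contrast-enhancing pre-processing steps that could be tested before feature detection is global histogram equalisation; this is a generic sketch, not necessarily one of the methods evaluated in the paper:

```python
import numpy as np

def equalize(image):
    """Histogram-equalise an 8-bit greyscale image (uint8 array)."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]           # first occupied grey level
    scale = max(image.size - cdf_min, 1)
    # Map each grey level through the normalised cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / scale * 255), 0, 255).astype(np.uint8)
    return lut[image]
```

Stretching a low-contrast underwater image to the full dynamic range typically increases the number and repeatability of detected keypoints, which is why such pre-processing is paired with the matching experiments.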

Relevance: 100.00%

Abstract:

Breast cancer is one of the leading causes of mortality among women in developed countries. It is treated most efficiently when it is detected early, which makes imaging techniques very important. After X-rays, ultrasound is one of the most widely used imaging techniques. When processing ultrasound images, experts in this field face a series of limitations when applying filters to the images with certain tools; one of these limitations is the lack of interactivity those tools offer. To overcome these limitations, an interactive tool has been developed that allows the parameter map to be explored while the result of the filtering is visualized in real time, in a dynamic and intuitive way. This tool was developed within the medical-image visualization environment MeVisLab. MeVisLab is a powerful, modular environment for developing image-processing algorithms, visualization and interaction methods, especially focused on medical imaging; besides basic image processing and visualization modules, it includes advanced algorithms for segmentation, registration, and many morphological and functional image analyses. An experiment was carried out with four experts who, using the developed tool, chose the parameters they considered appropriate for filtering a series of ultrasound images. The experiment used filters that MeVisLab already implements: the Bilateral Filter, Anisotropic Diffusion, and a combination of a Median and a Mean filter. From the captured parameters, a study was performed and a series of estimators were proposed that will be favourable in most cases for the pre-processing of ultrasound images.
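The Median + Mean combination mentioned can be sketched in a few lines: a median filter to suppress speckle-like outliers followed by a mean filter to smooth, with the two kernel sizes as the interactively tuned parameters. This is a plain NumPy sketch, not MeVisLab's implementation:

```python
import numpy as np

def _windows(img, k):
    """Stack the k*k neighbourhood of every pixel along a new last axis."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape + (k * k,))
    for i in range(k):
        for j in range(k):
            out[..., i * k + j] = padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def median_then_mean(img, k_median=3, k_mean=3):
    """Median filter followed by mean filter; kernel sizes are the parameters."""
    med = np.median(_windows(img, k_median), axis=-1)
    return np.mean(_windows(med, k_mean), axis=-1)
```

An interactive tool of the kind described would re-run `median_then_mean` as the user drags `k_median` and `k_mean` across the parameter map and display the result immediately.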

Relevance: 100.00%

Abstract:

Scientific visualization studies and defines algorithms and data structures that make data sets understandable through images. In the case of medical applications, the data to be interpreted come from different acquisition devices and are represented in a voxel model. The usefulness of this voxel model depends on being able to view it from the ideal viewpoint, that is, the one that provides the most information. In addition, there is the Magic Mirrors technique, which allows the voxel model to be viewed from several viewpoints at once, showing different property values in each mirror. In this project we implement an algorithm that determines the ideal viewpoint for visualizing a voxel model, as well as the ideal viewpoints for the mirrors, so as to obtain as much information as possible from the voxel model. This algorithm is based on information theory to decide which visualization is best. The algorithm can also determine the optimal colour assignment for the voxel model.
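A common information-theoretic score for viewpoint selection is the Shannon entropy of the projected-area distribution seen from each candidate viewpoint; this sketch illustrates that idea and is not necessarily the exact measure used in the project:

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy (bits) of the normalised projected-area distribution."""
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h

def best_viewpoint(candidates):
    """candidates: dict mapping viewpoint id -> projected areas of visible parts."""
    return max(candidates, key=lambda v: viewpoint_entropy(candidates[v]))
```

A viewpoint from which many parts of the model are visible with balanced projected areas scores higher than one dominated by a single face, matching the intuition of "most informative" view.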

Relevance: 100.00%

Abstract:

Digitalization has been predicted to change the future, as a growing range of non-routine tasks will be automated, offering new kinds of business models for enterprises. Service-oriented architecture (SOA) provides a basis for designing and implementing well-defined problems as reusable services, allowing computers to execute them. Service-oriented design has the potential to act as a mediator between IT and human resources, but enterprises struggle with their SOA adoption and lack a linkage between the benefits and costs of services. This thesis studies the phenomenon of service reuse in enterprises, proposing an ontology to link different kinds of services with their conceptual role as part of the business model. The proposed ontology was created on the basis of qualitative research conducted in three large enterprises. Service reuse has two roles in enterprises: it enables automated data sharing among human and IT resources, and it may provide cost savings in service development and operations. From a technical viewpoint, the ability to define a business problem as a service is one of the key enablers for achieving service reuse. The research proposes two service identification methods: the first identifies prospective services in the existing documentation of the enterprise, and the second models the services from a functional viewpoint, supporting service identification sessions with business stakeholders.
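The core idea of linking a service to its business-model role can be sketched as a minimal ontology entry; the class and field names here are illustrative assumptions, not the thesis's actual ontology vocabulary:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One ontology entry: a service and its role in the business model."""
    name: str
    business_role: str                      # e.g. "data sharing" or "cost saving"
    consumers: list = field(default_factory=list)

    @property
    def reused(self):
        # A service counts as reused once more than one consumer depends on it.
        return len(self.consumers) > 1
```

Tracking consumers per service is what makes the benefit side of the cost/benefit linkage measurable: a service with a single consumer has paid its development cost without yet yielding reuse savings.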

Relevance: 100.00%

Abstract:

One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour/intensity) of a mineral grain will vary with the orientation of the crystal lattice to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data-gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane and cross polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images, and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
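Given a stack of images captured at successive polarizer angles, the six composite data sets reduce to per-pixel statistics over the angle axis; a minimal sketch, assuming `stack` has shape (n_angles, height, width):

```python
import numpy as np

def composite_data_sets(stack, angles):
    """stack: (n_angles, h, w) intensities; angles: the n polarizer angles."""
    amax = stack.argmax(axis=0)
    amin = stack.argmin(axis=0)
    gy, gx = np.gradient(stack.mean(axis=0))
    return {
        "average_intensity": stack.mean(axis=0),
        "max_intensity": stack.max(axis=0),
        "min_intensity": stack.min(axis=0),
        "max_position": np.asarray(angles)[amax],  # angle of maximum intensity
        "min_position": np.asarray(angles)[amin],  # angle of minimum intensity
        "gradient": np.hypot(gx, gy),              # used for grain boundaries
    }
```

The position images record at which polarizer angle each pixel peaked, which is what carries the partial crystallographic-orientation information, while the gradient image highlights intensity discontinuities at grain boundaries.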

Relevance: 100.00%

Abstract:

This thesis describes research in which genetic programming is used to automatically evolve shape grammars that construct three dimensional models of possible external building architectures. A completely automated fitness function is used, which evaluates the three dimensional building models according to different geometric properties such as surface normals, height, building footprint, and more. In order to evaluate the buildings on the different criteria, a multi-objective fitness function is used. The results obtained from the automated system were successful in satisfying the multiple objective criteria as well as creating interesting and unique designs that a human-aided system might not discover. In this study of evolutionary design, the architectures created are not meant to be fully functional and structurally sound blueprints for constructing a building, but are meant to be inspirational ideas for possible architectural designs. The evolved models are applicable for today's architectural industries as well as in the video game and movie industries. Many new avenues for future work have also been discovered and highlighted.
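A multi-objective fitness of the kind described can be sketched as a weighted combination of normalised geometric scores; the criteria and weights below are illustrative, not the thesis's exact objective set:

```python
def multi_objective_fitness(model, weights):
    """Combine per-criterion scores into one fitness value.

    model: dict of criterion name -> normalised score in [0, 1]
           (e.g. surface normals, height, building footprint).
    weights: dict of criterion name -> relative importance.
    """
    total = sum(weights.values())
    return sum(weights[c] * model.get(c, 0.0) for c in weights) / total
```

A weighted sum is the simplest way to scalarise multiple objectives; approaches such as Pareto ranking avoid fixing the weights but are more involved.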

Relevance: 100.00%

Abstract:

The telemetry data processing operations intended for a given mission are pre-defined by the onboard telemetry configuration and the mission trajectory, and the overall telemetry methodology has stabilized lately for ISRO vehicles. The telemetry data processing problem is reduced through hierarchical problem reduction, whereby the sequencing of operations evolves as the control task and the operations on data as the function tasks. The inputs, outputs and execution criteria of each function task are captured in tables, which are examined by the control task; the control task then schedules a function task when its criteria are met.
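The table-driven scheme can be sketched as follows: each function task is registered with its inputs, output and an execution criterion, and the control task scans the table and runs a task once its criterion is satisfied. This is an illustrative sketch of the idea, not ISRO's actual software:

```python
def control_task(table, data):
    """table: list of dicts with 'inputs', 'output', 'criterion', 'run' keys.

    The control task repeatedly scans the table; a function task executes
    when its criterion holds and all its inputs are available in `data`.
    """
    pending = list(table)
    while pending:
        ready = [t for t in pending
                 if t["criterion"](data) and all(k in data for k in t["inputs"])]
        if not ready:
            break                       # no criterion satisfied: stop scheduling
        for t in ready:
            data[t["output"]] = t["run"](*[data[k] for k in t["inputs"]])
            pending.remove(t)
    return data
```

Because scheduling is driven entirely by the tables, changing the processing for a new mission means editing table entries rather than the control logic, which is the point of the hierarchical reduction.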

Relevance: 100.00%

Abstract:

To cut costs, many enterprises outsource services that do not belong to their core competence to external service providers. The resulting dependencies on the external providers are regulated contractually by means of Service Level Agreements (SLAs). The task of Service Level Management (SLM) is to monitor and ensure compliance with the contractually fixed quality-of-service parameters; automated processing therefore requires a formal specification of SLAs. Since the market has produced a multitude of different SLM tools, practical problems arise from proprietary SLA formats and missing specification methods, resulting in tool dependence and limited reusability of already specified SLAs. This thesis develops an approach for platform-independent service level management. The goal is to unify the modelling so that different management approaches can be integrated and a separation between the problem and technology domains is achieved; platform independence also gives the created models high long-term stability. Further goals are to guarantee the reusability of modelled SLAs and to provide a process-oriented modelling methodology, since automated establishment of modelled SLAs is of decisive relevance for practical use. To reach these goals, the principles of the Model Driven Architecture (MDA) are applied to the problem domain of service level management. The central idea of the thesis is the definition of SLA patterns, which are configuration-independent abstractions of Service Level Agreements; these SLA patterns correspond to the Platform Independent Model (PIM) of the MDA.
A suitable model transformation generates from an SLA pattern an SLA instance that contains all necessary configuration information and is already in the format of the target platform; an SLA instance thus corresponds to the Platform Specific Model (PSM) of the MDA. Establishing the SLA instances, and the resulting configuration of the management system, corresponds to the Platform Specific Code (PSC) of the MDA. After this step the management system is able to monitor the quality-of-service parameters agreed in the SLA autonomously. As part of this work, a UML extension was defined that allows SLA patterns to be modelled with a UML tool, either purely graphically or with the Object Constraint Language (OCL). For the practical realization of the approach, a management architecture was developed and implemented as a prototype, and the overall approach was evaluated in a case study.
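The pattern-to-instance transformation can be sketched as filling the configuration-independent placeholders of an SLA pattern from a concrete configuration; the placeholder syntax, field names and target platform below are illustrative assumptions, not the thesis's actual notation:

```python
def transform(pattern, config, platform="generic"):
    """Generate a platform-specific SLA instance from an SLA pattern (PIM -> PSM).

    pattern: dict whose values may be placeholders like "<availability>".
    config:  dict supplying the concrete values for those placeholders.
    """
    instance = {"platform": platform}
    for field_name, value in pattern.items():
        if isinstance(value, str) and value.startswith("<") and value.endswith(">"):
            # Configuration-independent placeholder: fill from the configuration.
            instance[field_name] = config[value.strip("<>")]
        else:
            instance[field_name] = value
    return instance
```

In MDA terms, `pattern` plays the role of the PIM, the returned `instance` the PSM, and feeding the instance into a monitoring tool's configuration would correspond to the PSC step.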