930 results for Spatial Query Processing And Optimization


Relevance: 100.00%

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in a multilingual setting, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents a work comparing four feature-based similarity measures derived from cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures. The purpose of the comparison is to verify the similarity measures against these objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels, acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database including Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.

Relevance: 100.00%

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
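
For reference, a minimal sketch of the underlying model (the notation is generic and may differ from the authors' formulation): after dimensionality reduction, each observed pixel spectrum is modelled as a convex combination of endmember signatures,

\[
\mathbf{y}_i = \mathbf{M}\,\boldsymbol{\alpha}_i + \mathbf{n}_i,
\qquad \boldsymbol{\alpha}_i \ge \mathbf{0},
\qquad \mathbf{1}^{\mathsf{T}}\boldsymbol{\alpha}_i = 1,
\]

where the columns of \(\mathbf{M}\) are the endmembers, \(\boldsymbol{\alpha}_i\) the abundance fractions and \(\mathbf{n}_i\) noise. SISAL-type methods seek the minimum-volume simplex enclosing the data by optimizing over \(\mathbf{Q} = \mathbf{M}^{-1}\), roughly

\[
\min_{\mathbf{Q}} \; -\log\bigl|\det\mathbf{Q}\bigr|
\;+\; \lambda \sum_{i} \bigl\| (\mathbf{Q}\,\mathbf{y}_i)_{-} \bigr\|_{1},
\]

where \((\cdot)_{-}\) keeps the negative part, so the hinge term softly penalizes negative abundances; the abundance sum-to-one constraint is handled as in the original SISAL formulation. The resulting nonsmooth problem is split and solved with an augmented Lagrangian, which is the sequence of convex subproblems mentioned in the abstract.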

Relevance: 100.00%

Abstract:

Master's in Mechanical Engineering – Specialization in Industrial Management

Relevance: 100.00%

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
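
To illustrate the kind of memory layout this pixel-per-thread strategy implies (a minimal sketch, not the authors' code; kernel and variable names are hypothetical), the CUDA snippet below applies a small matrix Q to every pixel with one thread per pixel: pixels are stored band-major so that consecutive threads read consecutive addresses (coalesced), and Q is staged once per block in shared memory.

// Minimal CUDA sketch (illustrative only): one thread per pixel, coalesced
// global memory accesses, and the small P x P matrix Q held in shared memory.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

constexpr int P = 8;  // reduced spectral dimension / number of endmembers (assumed)

// Data are stored band-major: y[b * nPixels + i] is band b of pixel i, so threads
// with consecutive i read consecutive addresses (coalesced).
__global__ void abundanceKernel(const float* __restrict__ y, const float* __restrict__ Q,
                                float* __restrict__ s, int nPixels)
{
    __shared__ float Qs[P * P];                      // stage Q once per block
    for (int k = threadIdx.x; k < P * P; k += blockDim.x) Qs[k] = Q[k];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;   // pixel handled by this thread
    if (i >= nPixels) return;

    float yi[P];
    for (int b = 0; b < P; ++b) yi[b] = y[b * nPixels + i];   // coalesced reads

    for (int r = 0; r < P; ++r) {                    // s_i = Q * y_i
        float acc = 0.0f;
        for (int c = 0; c < P; ++c) acc += Qs[r * P + c] * yi[c];
        s[r * nPixels + i] = acc;                    // coalesced write
    }
}

int main()
{
    const int nPixels = 1 << 20;
    std::vector<float> hY(P * nPixels, 1.0f), hQ(P * P, 0.1f), hS(P * nPixels);

    float *dY, *dQ, *dS;
    cudaMalloc(&dY, hY.size() * sizeof(float));
    cudaMalloc(&dQ, hQ.size() * sizeof(float));
    cudaMalloc(&dS, hS.size() * sizeof(float));
    cudaMemcpy(dY, hY.data(), hY.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dQ, hQ.data(), hQ.size() * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (nPixels + threads - 1) / threads;
    abundanceKernel<<<blocks, threads>>>(dY, dQ, dS, nPixels);
    cudaDeviceSynchronize();

    cudaMemcpy(hS.data(), dS, hS.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("s[0] = %f\n", hS[0]);
    cudaFree(dY); cudaFree(dQ); cudaFree(dS);
    return 0;
}

Because each thread keeps only a small private vector and the shared matrix is reused by the whole block, a kernel of this shape avoids divergent branches in the inner loops, which is consistent with the high occupancy reported in the abstract.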

Relevance: 100.00%

Abstract:

As the wireless cellular market reaches competitive levels never seen before, network operators need to make maintaining Quality of Service (QoS) a main priority if they wish to attract new subscribers while keeping existing customers satisfied. Speech quality as perceived by the end user is one major example of a characteristic in constant need of maintenance and improvement, and it is within this topic that this Master's thesis project fits. It makes use of an intrusive method of speech quality evaluation as a means to further study and characterize the performance of speech codecs in second-generation (2G) and third-generation (3G) technologies, trying to find correlations between codecs with similar bit rates and exploring certain transmission parameters which may aid in the assessment of speech quality. Due to some limitations concerning the audio analyzer equipment that was to be employed, a different system for recording the test samples was sought. Although the newly designed system is not standard, after extensive testing and optimization of the system's parameters the final results were found to be reliable and satisfactory. Tests included a set of high and low bit rate codecs for both 2G and 3G, whose values were compared and analysed, leading to the outcome that 3G speech codecs perform better than 2G codecs under approximately the same conditions. This reinforces the idea that 3G is, without doubt, the best choice if the customer looks for the best possible listening speech quality. The transmission parameters chosen for the experiment, the Receiver Quality (RxQual) and the Received Energy per Chip to Power Density Ratio (Ec/N0), were subjected to speech quality correlation tests. The final RxQual results were compared with those of prior studies by different researchers and are considered highly relevant, leading to the confirmation of RxQual as a reliable indicator of speech quality. As for Ec/N0, it is not possible to establish it as a speech quality indicator; however, it shows clear thresholds at which the MOS values decrease significantly. The studied transmission parameters can thus be used not only for network management purposes but also to give the communications engineer (or technician) an idea of the expected end-to-end speech quality. With the conclusion of this work, new ideas for future studies come to mind. Considering that fourth-generation (4G) cellular technologies are now beginning to take an important place in the global market as the first all-IP network structure, it seems highly relevant that 4G speech quality should be evaluated and compared with 3G, not only in narrowband but also in wideband scenarios, using the most recent standard objective method of speech quality assessment, POLQA. Also, the new data found in the Ec/N0 tests justify further research studies aimed at validating the assumptions made in this work.

Relevance: 100.00%

Abstract:

Thesis submitted to the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia, for the degree of Doctor of Philosophy in Biochemistry

Relevance: 100.00%

Abstract:

This article presents a framework for an Industrial Engineering and Management Science course at the School of Management and Industrial Studies, using Autonomous Ground Vehicles (AGVs) to supply materials to a production line as an experimental setup for the students to acquire knowledge in the production robotics area. The students must be capable of understanding and putting to good use several concepts that will be of utmost importance in their professional life, such as critical decisions regarding the study, development and implementation of a production line. The main focus is a production line using AGVs, where the students are required to address several topics such as sensors, actuators, controllers and high-level management and optimization software. The presented framework brings to the robotics teaching community methodologies that allow students from different backgrounds, who normally do not experiment with robotics concepts in practice due to the big gap between theory and practice, to go straight to "making" robotics. Our aim was to lower the required starting level, thus allowing any student to fully experience robotics with little background knowledge.

Relevance: 100.00%

Abstract:

Dissertation for the Master's Degree in Structural and Functional Biochemistry

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Dissertation to obtain the Degree of Doctor in Physics Engineering

Relevance: 100.00%

Abstract:

Dissertation to obtain the Degree of Doctor in Environmental Engineering

Relevance: 100.00%

Abstract:

Due to the importance and wide applications of DNA analysis, there is a need to make genetic analysis more available and more affordable. As such, the aim of this PhD thesis is to optimize a colorimetric DNA biosensor based on gold nanoprobes, developed at CEMOP, by reducing its price and the required volume of solution without compromising the device sensitivity and reliability, towards point-of-care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector with a low-cost, solution-processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that made it possible to increase the TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics, which offer the advantages of miniaturization, much lower sample/reagent consumption, and enhanced system performance and functionality by integrating different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel, and the light was transmitted by optical fibres, enabling it to be guided very close to the analysed solution. A microfluidic chip with a high aspect ratio (~13) and smooth, nearly vertical sidewalls was fabricated in PDMS using an SU-8 mould for patterning. The platform, coupled to the gold nanoprobe assay, enabled detection of Mycobacterium tuberculosis using 3 μl of DNA solution, i.e. 20 times less than in the previous state of the art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing and sensitivity to colour variation, yielding a 160% improvement in the colorimetric AuNP analysis. Planar microlenses were incorporated to converge the light into the sample and then into the output fibre core, increasing the signal-to-losses ratio 6 times. The optimized platform enabled detection of a single nucleotide polymorphism related to obesity risk (FTO) using a target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/μl) with a 10 times lower solution volume (3 μl). The combination of the unique optical properties of gold nanoprobes with the microfluidic platform resulted in a sensitive and accurate sensor for single nucleotide polymorphism detection, operating with small volumes of solution and without the need for substrate functionalization or sophisticated instrumentation. Simultaneously, to enable on-chip reagent mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop and short mixing length. The optimized device shows 80% mixing efficiency at Re = 0.1 in a 2.5 mm long mixer with a pressure drop of 6 Pa, satisfying the requirements for application in the microfluidic platform for DNA analysis.
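
For readers unfamiliar with the quoted operating point, the Reynolds number of a microchannel flow is commonly defined (a generic definition, not a value or formula taken from the thesis) as

\[
\mathrm{Re} = \frac{\rho\, u\, D_h}{\mu},
\]

where \(\rho\) is the fluid density, \(u\) the mean flow velocity, \(D_h\) the hydraulic diameter of the channel and \(\mu\) the dynamic viscosity. Re = 0.1 therefore corresponds to strongly laminar, viscosity-dominated flow, where passive mixers of this kind rely on diffusion and advection rather than turbulence.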

Relevance: 100.00%

Abstract:

The goal of this thesis is the investigation and optimization of the synthesis of potential fragrances. The work was carried out as a collaboration between the University of Applied Sciences in Merseburg and the company Miltitz Aromatics GmbH in Bitterfeld-Wolfen (Germany). Such compounds can be synthesized in different ways and by various methods. In this work, methods such as phase-transfer catalysis and the Cope rearrangement were investigated and applied in order to obtain a high yield and quantity of the desired substances without by-products or side reactions. This involved the study of syntheses under different process parameters such as temperature, solvent, pressure and reaction time. The main focus was on the Cope rearrangement, which is a common method in the synthesis of new potential fragrance compounds. The substances synthesized in this work have a hepta-1,5-diene structure, which is why they can easily undergo this [3,3]-sigmatropic rearrangement. The lead compound of all the research was 2,5-dimethyl-2-vinyl-4-hexenenitrile (Neronil). Neronil is synthesized by an alkylation of 2-methyl-3-butenenitrile with prenyl chloride under basic conditions in a phase-transfer system. In this work, the yield of isolated Neronil was improved from about 35% to 46% by adjusting the reaction conditions, and the amount of side product was decreased. The synthesized hexenenitrile contains not only the aforementioned 1,5-diene structure but also a cyano group, which makes it a suitable basis for the synthesis of new potential fragrance compounds. It was observed that Neronil can be converted into 2,5-dimethyl-2-vinyl-4-hexenoic acid by hydrolysis under basic conditions; after five hours the acid can be obtained with a yield of 96%. The subsequent esterification is carried out with isobutanol to produce 2,5-dimethyl-2-vinyl-4-hexenoic acid isobutyl ester with quantitative conversion. It was observed that Neronil and the corresponding ester can be converted into the corresponding Cope products, with conversions of 30% and 80%, respectively. When the Cope rearrangement was attempted by heating the acid, an unexpected decarboxylated product was formed. To verify reaction progress and product structures, thorough analyses were carried out using GC-MS, 1H-NMR and 13C-NMR.

Relevance: 100.00%

Abstract:

This work documents the deposition and optimization of semiconductor thin films using the chemical spray coating (CSC) technique for application in thin-film transistors (TFTs), with a low-cost, simple method. A CSC setup was implemented and explored for industrial application within Holst Centre, an R&D centre in the Netherlands. As zinc oxide had already been studied within the organization, it was used as a standard material in the initial experiments, yielding typical mobility values of 0.14 cm2/(V.s) for unpatterned TFTs. Then, the characteristics of oxide X layers were compared for films deposited by CSC at 40°C and by spin-coating. The mobility of the spin-coated TFTs was 103 cm2/(V.s) higher, presumably due to the lack of uniformity of the spray-coated film at such low temperatures. Lastly, tin sulfide, a relatively unexplored material, was deposited by CSC in order to obtain functional TFTs and explore the device's potential for working as a phototransistor. Despite the low mobilities of the devices, a sensitive photodetector was made, showing a drain current variation of nearly one order of magnitude under yellow light. The simplicity and versatility of the CSC technique were confirmed, as three different semiconductors were successfully implemented into functional devices.