966 results for Computer sound processing
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite their special-purpose design, GPUs have been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate these devices into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that combining CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
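The idea of splitting a data-parallel map between devices according to a trained ratio can be sketched as follows. This is a hypothetical illustration, not Marrow's API: the function names and the example 70/30 split are invented, and both partitions run sequentially here, whereas a real framework would dispatch them concurrently to GPU and CPU.

```python
def partition(data, gpu_ratio):
    """Split the input between GPU and CPU work according to a ratio
    that an offline training phase would have identified."""
    cut = int(len(data) * gpu_ratio)
    return data[:cut], data[cut:]

def hetero_map(fn, data, gpu_ratio=0.7):
    # In a real heterogeneous runtime the two halves would execute
    # concurrently on GPU and CPU; here both run on the CPU.
    gpu_part, cpu_part = partition(data, gpu_ratio)
    return [fn(x) for x in gpu_part] + [fn(x) for x in cpu_part]

result = hetero_map(lambda x: x * x, list(range(10)))
```

The single `gpu_ratio` parameter stands in for the per-application, per-input-size configuration that the offline training execution discovers.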
Abstract:
The EM3E Master is an Education Programme supported by the European Commission, the European Membrane Society (EMS), the European Membrane House (EMH), and a large international network of industrial companies, research centers and universities
Abstract:
The present work is devoted to studying the pre-treatment of lignocellulosic biomass, especially wheat straw, by the application of an acidic ionic liquid (IL), 1-butyl-3-methylimidazolium hydrogen sulphate. The ability of this IL to hydrolyse and convert biomass was scrutinised. Pre-treatment with the hydrogen sulphate-based IL yielded a liquor rich in hemicellulosic sugars, furans and organic acids, and a solid fraction mainly constituted by cellulose and lignin. Quantitative and qualitative analyses of the produced liquors were made by capillary electrophoresis and high-performance liquid chromatography. Pre-treatment conditions were set to produce xylose or furfural. Temperatures from 70 to 175 °C and residence times from 20.0 to 163.3 min were studied, with the biomass/IL ratio (10 % (w/w)) and water content (1.25 % (w/w)) fixed. Statistical modelling was applied to maximise the xylose and furfural concentrations. To allow comparison of reaction conditions, a severity factor for the studied ionic liquid was proposed and applied in this work. Optimum conditions for xylose production were identified as 125 °C and 82.1 min, at which a 16.7 % (w/w) xylose yield was attained. Furfural was preferentially formed at higher pre-treatment temperatures and longer reaction times (161 °C and 104.5 min), reaching a 30.7 % (w/w) maximum yield. The influence of water content on optimum xylose formation was also studied. Pre-treatments with 5 and 10 % (w/w) water content were performed, and increases of 100 % and 140 % in xylose yield were observed, respectively, while the conversion into furfural remained unchanged.
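Severity factors of the kind proposed above are commonly built on the classical combined severity of Overend and Chornet, log(R0) with R0 = t · exp((T − 100)/14.75), t in minutes and T in °C. The sketch below computes this classical hydrothermal form for the two reported optima as a reference point; it is an assumption that the IL-specific factor proposed in the thesis takes a related form, and the function name is illustrative.

```python
import math

def severity_factor(t_min, temp_c, t_ref=100.0, omega=14.75):
    """Classical combined severity log10(R0),
    R0 = t * exp((T - T_ref) / omega), t in minutes, T in deg C."""
    return math.log10(t_min * math.exp((temp_c - t_ref) / omega))

# Reported optima from the abstract above:
xylose_sev = severity_factor(82.1, 125.0)     # xylose optimum
furfural_sev = severity_factor(104.5, 161.0)  # furfural optimum
```

As expected, the furfural optimum corresponds to a markedly higher severity than the xylose optimum, consistent with furfural being a dehydration product of xylose.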
Abstract:
This thesis introduces a novel conceptual framework to support the creation of knowledge representations based on enriched Semantic Vectors, using the classical vector space model approach extended with ontological support. One of the primary research challenges addressed here relates to the process of formalization and representation of document contents, where most existing approaches are limited and only take into account the explicit, word-based information in the document. This research explores how traditional knowledge representations can be enriched by incorporating implicit information derived from the complex relationships (semantic associations) modelled by domain ontologies, in addition to the information present in documents. The relevant achievements pursued by this thesis are the following: (i) conceptualization of a model that enables the semantic enrichment of knowledge sources supported by domain experts; (ii) development of a method for extending the traditional vector space using domain ontologies; (iii) development of a method to support ontology learning, based on the discovery of new ontological relations expressed in non-structured information sources; (iv) development of a process to evaluate the semantic enrichment; (v) implementation of a proof-of-concept, named SENSE (Semantic Enrichment kNowledge SourcEs), which enables validation of the ideas established within the scope of this thesis; (vi) publication of several scientific articles and support of four master's dissertations carried out in the Department of Electrical and Computer Engineering at FCT/UNL. It is worth mentioning that the work developed under the semantic framework covered by this thesis reused relevant achievements from European research projects, in order to build on approaches that are considered scientifically sound and coherent and to avoid "reinventing the wheel".
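The core idea of extending a word-based vector with implicit, ontology-derived terms can be sketched in a few lines. This is a toy illustration under stated assumptions: the two-entry ontology, the flat term-to-related-terms encoding, and the fixed 0.5 enrichment weight are all invented, not taken from SENSE.

```python
from collections import Counter

# Toy "ontology": each term maps to semantically associated terms.
ONTOLOGY = {
    "neuron": ["cell", "brain"],
    "synapse": ["neuron"],
}

def semantic_vector(tokens, weight=0.5):
    """Build a term vector from explicit tokens, then add implicit
    terms contributed by ontology associations at a reduced weight."""
    vec = Counter(tokens)              # explicit, word-based information
    for token in tokens:               # implicit, ontology-derived information
        for related in ONTOLOGY.get(token, []):
            vec[related] += weight
    return dict(vec)

v = semantic_vector(["neuron", "synapse"])
```

Here "cell" and "brain" enter the vector even though they never occur in the text, which is exactly the kind of enrichment a purely word-based representation cannot provide.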
Abstract:
With recent advances in technology and the miniaturization of devices such as GPS or IMU, Unmanned Aerial Vehicles (UAVs) have become a feasible platform for Remote Sensing applications. Compared to conventional aerial platforms, UAVs provide a set of advantages, such as higher spatial resolution of the derived products. UAV-based imagery obtained with user-grade cameras introduces a set of problems that have to be solved, e.g. rotational or angular differences, or unknown or insufficiently precise interior orientation (IO) and exterior orientation (EO) camera parameters. In this work, UAV-based imagery of RGB and CIR type was processed using two different workflows, based on the PhotoScan and VisualSfM software solutions, resulting in DSM and orthophoto products. The influence of feature detection and matching parameters on result quality and processing time was examined, and an optimal parameter setup is presented. The products of both workflows were compared in terms of quality, spatial accuracy, and processing time. Finally, the obtained products were used to demonstrate vegetation classification. The contribution of IHS transformations was examined with respect to classification accuracy.
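An IHS (intensity, hue, saturation) transform of the kind examined above can be sketched with the common arccos formulation. Several IHS variants exist; this sketch is one standard form and is not claimed to be the exact transform used in the work.

```python
import math

def rgb_to_ihs(r, g, b):
    """One common RGB -> IHS conversion (inputs in [0, 1], hue in degrees).
    Arccos formulation; other IHS variants differ in detail."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i          # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(
        math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                                     # hue lies in [180, 360)
        h = 360.0 - h
    return i, h, s
```

For pure red (1, 0, 0) this yields intensity 1/3, hue 0° and full saturation, while any grey collapses to zero saturation, which is what makes the transform useful for separating brightness from spectral information in classification.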
Abstract:
Since the invention of photography, humans have been using images to capture, store and analyse the events they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. Image processing's principal qualities are flexibility, adaptability, and the ability to easily and quickly process large amounts of information. Successful examples of applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment plus time-consuming calibration. Image processing can be an inexpensive and effective tool for load displacement measurements. Using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. There are already several commercial software packages on the market; however, they are sold at high cost. In this work, block-matching algorithms are used to compare the results from image processing with the data obtained with physical transducers during laboratory load tests. In order to test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
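Block matching of the kind used above can be sketched with a sum-of-absolute-differences (SAD) search: a block from the reference image is slid over a search window in the deformed image, and the displacement with the lowest cost wins. This is a minimal sketch under assumed conventions (images as plain 2-D lists of grey values, exhaustive integer-pixel search); real implementations add sub-pixel interpolation and faster search strategies.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def block(img, top, left, size):
    """Extract a size x size block with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in img[top:top + size]]

def match_block(ref, cur, top, left, size, search):
    """Find the (dy, dx) displacement of ref's block inside cur,
    searching exhaustively within +/- search pixels."""
    template = block(ref, top, left, size)
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if 0 <= t <= len(cur) - size and 0 <= l <= len(cur[0]) - size:
                cost = sad(template, block(cur, t, l, size))
                if cost < best[2]:
                    best = (dy, dx, cost)
    return best[0], best[1]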
Abstract:
Viral vectors are playing an increasingly important role in the vaccine and gene therapy fields. The broad spectrum of potential applications, together with expanding medical markets, drives the efforts to improve the production processes for viral vaccines and viral vectors. Developing countries, in particular, are becoming the main vaccine market. It is thus critical to decrease the cost per dose, which is only achievable by improving the production process. In particular, advances in upstream processing have substantially increased bioreactor yields, shifting the bioprocess bottlenecks towards downstream processing. The work presented in this thesis aimed to develop new processes for adenovirus purification. The use of state-of-the-art technology combined with innovative continuous processes contributed to building robust and cost-effective strategies for the purification of complex biopharmaceuticals. (...)
Abstract:
We are constantly immersed in stimuli. Upon reaching our senses, stimuli are processed within various brain systems along various pathways into the brain, and are eventually turned into a percept. However, there are percepts that do not result from responses to external source stimuli. A particular case of this situation is the auditory percept known as tinnitus. Tinnitus can be seen as a task-irrelevant auditory percept, commonly reported to interfere with normal daily tasks. This is known from reports by tinnitus sufferers, who describe their phantom percept as distracting and say that it diverts their focus from task-relevant stimuli. (...)
Abstract:
In this study in the field of Consumer Behavior, consumers' brand name memory for verbally and visually congruent and incongruent information, relative to the memory structure of brands, was tested. To this end, four experimental groups with different constellations of verbal and visual congruity and incongruity were created to compare their brand name memory performance. The experiment was conducted in several classes with 128 students, each group with 32 participants. It was found that brands presented in a congruent or moderately incongruent relation to their brand schema result in better brand recall than their incongruent counterparts. A difference between visual congruity and moderate visual incongruity could not be confirmed. In contrast to visually incongruent information, verbally incongruent information does not result in worse brand recall performance.
Abstract:
This dissertation analyzes the possibilities of utilizing speech-processing technologies to transform the user experience of ActivoBank’s customers while using remote banking solutions. The technologies are examined through different criteria to determine if they support the bank’s goals and strategy and whether they should be incorporated in the bank’s offering. These criteria include the alignment with ActivoBank’s values, the suitability of the technology providers, the benefits these technologies entail, potential risks, appeal to the customers and impact on customer satisfaction. The analysis suggests that ActivoBank might not be in a position to adopt these technologies at this point in time.
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to including several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This allows all the data of many applications to be kept in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads, when compared to the standalone engine, while scalability is limited to read-only workloads.
Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads; scalability is limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how operation order inside transactions influences database performance. We then propose a Read before Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
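The master-slave routing idea behind this replication scheme can be sketched in a few lines. This is a hypothetical illustration, not MacroDB's actual interface: the class name, the use of plain strings for engines, and the round-robin slave policy are all invented for the example.

```python
import itertools

class ReplicaRouter:
    """Route update transactions to the master engine and read-only
    transactions to slave replicas (round-robin), so that reads never
    contend with each other on a single engine."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def route(self, read_only):
        return next(self._slaves) if read_only else self.master

router = ReplicaRouter("master", ["slave-1", "slave-2"])
targets = [router.route(read_only=ro) for ro in (True, False, True)]
```

The sketch shows why read-only workloads scale (each read lands on a different replica) while update-heavy workloads do not: every update still funnels through the single master and must then be propagated.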
Abstract:
The vulnerability of the masonry envelope under blast loading is considered critical due to the risk of loss of life. The behaviour of masonry infill walls subjected to dynamic out-of-plane loading was experimentally investigated in this work. Confined underwater blast wave generators (WBWG), which convert the explosive detonation energy at an extremely high rate into the kinetic energy of a thick water confinement, were used to distribute the load over a surface area while avoiding the generation of high-velocity fragments and reducing the atmospheric sound wave. In the present study, plastic water containers, with a detonator inside a cylindrical explosive charge at their centre, were used on unreinforced masonry infill panels measuring 1.7 m by 3.5 m. Besides pressure and displacement transducers, high-speed video cameras were used, enabling processing of the deflections and identification of failure modes. Additional numerical studies were performed on both unreinforced and reinforced walls. Bed joint reinforcement and grid reinforcement were used to strengthen the infill walls, and the results are presented and compared, allowing pressure-impulse diagrams for the design of masonry infill walls to be obtained.
Abstract:
As increasingly sophisticated materials and products are developed and times-to-market need to be minimized, it is important to make available fast-response characterization tools that use small amounts of sample and are capable of conveying data on the relationships between rheological response, process-induced material structure and product characteristics. For this purpose, a single/twin-screw mini-extrusion system of modular construction, with well-controlled outputs in the range 30-300 g/h, was coupled to an in-house developed rheo-optical slit die able to measure shear viscosity and normal-stress differences, as well as to perform rheo-optical experiments, namely small-angle light scattering (SALS) and polarized optical microscopy (POM). In addition, the mini-extruder is equipped with ports that allow sample collection, and the extrudate can be further processed into products to be tested later. Here, we present the concept and the experimental set-up [1, 2]. As a typical application, we report on the characterization of the processing of a polymer blend and of the properties of extruded sheets. The morphological evolution of a PS/PMMA industrial blend along the extruder, the flow-induced structures developed and the corresponding rheological characteristics are presented, together with the mechanical and structural characteristics of the produced sheets. The application of this experimental tool to other material systems is also discussed.
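Shear viscosity from a slit die is conventionally obtained from the pressure drop and the flow rate. The sketch below uses the standard thin-slit relations (wall shear stress tau = dP·h/(2L), apparent shear rate 6Q/(w·h²)) with no Rabinowitsch-Weissenberg correction; the actual geometry and corrections of the in-house die described above may differ, and the numerical values in the example are invented.

```python
def slit_die_viscosity(dp, q, h, w, l):
    """Apparent shear viscosity (Pa.s) from slit-die data, assuming a
    thin slit (w >> h) and Newtonian behaviour. SI units throughout:
    dp pressure drop (Pa), q flow rate (m^3/s), h gap, w width, l length."""
    tau = dp * h / (2.0 * l)        # wall shear stress, Pa
    gamma = 6.0 * q / (w * h ** 2)  # apparent wall shear rate, 1/s
    return tau / gamma

# Illustrative numbers (not from the cited set-up): 1 bar pressure drop
# over a 50 mm die, 1 mm gap, 10 mm width, 0.1 mL/s throughput.
eta = slit_die_viscosity(1.0e5, 1.0e-7, 1.0e-3, 1.0e-2, 5.0e-2)
```

For these illustrative numbers the stress is 1000 Pa at an apparent rate of 60 s⁻¹, i.e. a viscosity of roughly 17 Pa·s, a plausible order of magnitude for a polymer melt at processing temperature.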