23 results for Speech-processing technologies
Abstract:
Laser material processing is extensively used in photovoltaic applications, both for the fabrication of thin-film modules and for the enhancement of crystalline silicon solar cells. In this paper, the two-temperature model for thermal diffusion was solved numerically. Laser pulses of 1064, 532 or 248 nm with durations of 35, 26 or 10 ns were considered as the thermal source leading to material ablation. Given the high irradiance levels (10⁸–10⁹ W cm⁻²), total absorption of the energy during the ablation process was assumed in the model. The materials analysed in the simulation were aluminium (Al) and silver (Ag), which are commonly used as metallic electrodes in photovoltaic devices. Thermal diffusion was also simulated for crystalline silicon (c-Si). A similar trend of temperature as a function of depth and time was found for both metals and for c-Si, regardless of the wavelength employed. For each material, the dependence of ablation depth on laser pulse parameters was determined by means of an ablation criterion: the ablation depth was taken as the maximum depth at which the total energy stored in the material after the laser pulse equals the vaporisation enthalpy. In all cases, the ablation depth increased with laser pulse fluence and did not exhibit a clear correlation with the radiation wavelength. Finally, experimental validation of the simulation results was carried out, confirming that the model, under the initial hypothesis of total energy absorption, closely fits the experimental results.
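The ablation criterion described in the abstract can be sketched in a few lines: a depth cell counts as ablated once the energy stored in it after the pulse reaches the vaporisation enthalpy. The material constants below are illustrative assumptions for aluminium, not values taken from the paper:

```python
import numpy as np

# Illustrative material constants (assumed, not from the paper).
RHO = 2700.0      # density, kg/m^3
C_P = 900.0       # specific heat, J/(kg K)
T0 = 300.0        # ambient temperature, K
H_VAP = 1.05e10   # volumetric vaporisation enthalpy, J/m^3

def ablation_depth(z, T):
    """Return the deepest point (m) at which the energy density stored
    after the pulse still reaches the vaporisation enthalpy.

    z: depth grid in metres; T: post-pulse lattice temperature profile in K.
    """
    stored = RHO * C_P * (T - T0)   # stored energy density, J/m^3
    ablated = stored >= H_VAP       # cells satisfying the criterion
    if not ablated.any():
        return 0.0                  # fluence too low: no ablation
    return float(z[np.where(ablated)[0][-1]])
```

Fed with temperature profiles from the two-temperature solver at increasing fluences, this criterion reproduces the qualitative behaviour reported above: the returned depth grows with fluence.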
Abstract:
• Introduction
• Process Experimental Setup
• Experimental Procedure
• Experimental Results for Al2024-T351, Ti6Al4V and AISI 316L
  - Surface Roughness and Compaction
  - Residual Stresses
  - Tensile Strength
  - Fatigue Life
• Discussion and Outlook
  - Prospects for technological applications of LSP
Abstract:
Nowadays the processing industry sector is going through a series of changes, including better management and the reduction of environmental impacts. Any production process that aims at sustainable management is incomplete if the life cycle of mineral resources is not taken into account. Raw materials for manufacturing, such as copper, aluminium, iron, gold, silver, silicon and titanium, are provided by mineral extraction processes. These elements are necessary for human development and are obtained from the Earth through extractive operations, which must take care of their environmental consequences. The extraction of huge volumes of rock for transformation into raw materials for industry must be optimised to reduce the ecological cost of the final product as far as possible. On a global scale, it makes no sense to design efficient manufacturing in the secondary (transformation) industry if, in the first steps of the supply chain (extraction), the impact exceeds the savings of resources achieved in later phases. The scale of mining operations suggests an environmentally aggressive activity, but precisely because of its great impact it must be the first element to be considered. This implies a new concept: reducing economic and environmental cost together. This work reflects on the parameters that can be modified to reduce the energy cost of the process without increasing operational costs, while always ensuring the same production capacity; that is, minimising economic and environmental cost at the same time. An efficient design of a mining operation that takes this idea into account does not imply an increase in operating cost. To achieve this objective, it is necessary to take a global view of the operation, so that all departments involved follow common guidelines aimed at optimising global energy costs.
Sometimes a single operational cost must be increased to reduce the global cost. This work reviews different design parameters of surface mining, setting out some key performance indicators (KPIs) estimated from an efficiency point of view. These KPIs can be included in HQE policies as global indicators. The new concept developed is that a new criterion has to be applied in company policies: improve management by improving operational efficiency. In other words, it is better to use current resources (machinery, equipment, etc.) properly than to replace them with new ones that are not used correctly. In conclusion, through efficient management of current technologies in each extractive operation, an important reduction in energy consumption can be achieved downstream in the process, implying a lower energy cost over the whole life cycle of the manufactured product.
Abstract:
This paper presents a methodology for adapting an advanced communication system for deaf people to a new domain. The methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and finally, system evaluation. In this paper, the new domain considered is dialogues at a hotel reception. With this methodology, it was possible to develop the system in a few months and obtain very good performance: speech recognition and translation rates around 90%, with short processing times.
Abstract:
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples of applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
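As a toy illustration of one such image-based read-out, a beat frequency can be estimated from the periodic intensity fluctuation of the imaged heart region. This is a minimal sketch assuming a fixed region of interest has already been cropped; it is not the method used by any specific tool surveyed here:

```python
import numpy as np

def estimate_beat_frequency(frames, fps):
    """Estimate the dominant beat frequency (Hz) from a video of the
    heart region, using mean frame intensity as a simple motion proxy.

    frames: array of shape (n_frames, height, width); fps: frame rate.
    """
    signal = frames.astype(float).mean(axis=(1, 2))  # per-frame intensity
    signal = signal - signal.mean()                  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum[0] = 0.0                                # ignore residual DC
    return float(freqs[np.argmax(spectrum)])
```

Real pipelines add motion compensation and robustness to arrhythmia, but the core read-out (periodicity of an intensity signal) is the same.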
Abstract:
This paper describes the text normalization module of a fully-trainable text-to-speech conversion system and its application to number transcription, an important aspect of the text normalization problem. The main target is a language-independent text normalization module based on data rather than on expert rules. The paper proposes a general architecture based on statistical machine translation techniques, composed of three main modules: a tokenizer for splitting the text input into a token graph, a phrase-based translation module for token translation, and a post-processing module for removing some tokens. This architecture has been evaluated for number transcription in several languages: English, Spanish and Romanian.
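The three-module architecture can be illustrated with a toy English pipeline. The hand-written lookup table below is only a stand-in for the phrase tables the real system learns from a parallel corpus:

```python
# Hand-written stand-in for a learned phrase table (illustrative only).
NUM2WORD = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
    "10": "ten", "20": "twenty", "30": "thirty",
}

def tokenize(text):
    """Split the input into tokens (a linear stand-in for the token graph)."""
    return text.split()

def translate(tokens):
    """Map each token through the 'phrase table'; pass unknown tokens through."""
    return [NUM2WORD.get(tok, tok) for tok in tokens]

def postprocess(tokens):
    """Remove tokens the system marks for deletion (here: bare punctuation)."""
    return [t for t in tokens if t not in {",", ".", ";"}]

def normalize(text):
    """Run the three modules in sequence: tokenize, translate, post-process."""
    return " ".join(postprocess(translate(tokenize(text))))
```

For example, `normalize("room 7 , floor 3")` yields `"room seven floor three"`; the trained system plays the same roles with learned models instead of hand-written rules.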
Abstract:
Geographic information technologies (GIT) are essential to many fields of research, such as the preservation and dissemination of cultural heritage buildings, a category which includes traditional underground wine cellars. This article presents a methodology based on research carried out on this type of rural heritage building. The data were acquired using the following sensors: EDM, total station, close-range photogrammetry and laser scanning, and subsequently processed with specific software, verified for each case, in order to obtain a satisfactory graphic representation of these underground wine cellars. Two key aspects of this work are the accuracy of the data processing and the visualization of these traditional constructions. The methodology includes an application for geovisualizing these constructions on mobile devices, in order to help raise awareness of this unique heritage.
Abstract:
Traditional text-to-speech (TTS) systems have been developed using specially designed, non-expressive scripted recordings. In order to develop a new generation of expressive TTS systems in the Simple4All project, real recordings from the media should be used to train new voices with a whole new range of speaking styles. However, to process this more spontaneous material, the new systems must be able to deal with imperfect data (multi-speaker recordings, background and foreground music and noise), filtering out low-quality audio segments and creating mono-speaker clusters. In this paper we compare several architectures for combining speaker diarization with music and noise detection, which improve the precision and overall quality of the segmentation.
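One simple way to combine the two components is to drop any diarized segment that overlaps an interval flagged by the music/noise detector, then group the surviving segments into mono-speaker clusters. This is only a sketch of that combination logic, not one of the architectures evaluated in the paper:

```python
def select_clean_segments(diarized, noisy_intervals):
    """Keep diarized segments that do not overlap any flagged interval.

    diarized: list of (start, end, speaker) tuples from the diarization step;
    noisy_intervals: list of (start, end) tuples flagged as music/noise.
    """
    def overlaps(a0, a1, b0, b1):
        return a0 < b1 and b0 < a1
    return [(s, e, spk) for s, e, spk in diarized
            if not any(overlaps(s, e, n0, n1) for n0, n1 in noisy_intervals)]

def mono_speaker_clusters(segments):
    """Group the clean segments by speaker label into mono-speaker clusters."""
    clusters = {}
    for start, end, speaker in segments:
        clusters.setdefault(speaker, []).append((start, end))
    return clusters
```

The architectures compared in the paper differ mainly in where this filtering happens (before or after diarization) and in how aggressively overlapping segments are discarded.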