836 results for Multi rate processing
Abstract:
Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses, and on the applications of advanced pulse waveforms in all-optical signal processing. Among other topics, we discuss ultrahigh repetition-rate pulse sources, the generation of parabolic-shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion. © 2012 IEEE.
Abstract:
This paper examines recent progress in the use of semiconductor optical amplifiers for phase-sensitive signal processing functions, including a discussion of the world's first multi-wavelength regenerative wavelength conversion of BPSK signals using semiconductor optical amplifiers. OFC/NFOEC Technical Digest © 2013 OSA.
Abstract:
After a proliferation of logistics e-Marketplaces during the dot-com boom of 1998-2000, there has been a high rate of failure, and the survivors are developing much more slowly than expected. This is the case in the aviation industry, where a large number of B2B e-Marketplaces emerged as aviation companies focused their strategies on electronic B2B in the late 1990s. However, the current use of e-Marketplaces in the industry is low and many of them have ceased trading. The traditional e-Marketplace model has been characterised by poor-quality portals and a lack of technical standards. Such an approach is unsustainable in today's competitive scenario. Improvements in website quality attributes may contribute strongly to simplifying users' interaction with website functionality and to speeding up communication with all supply chain partners. In this context, it appears critical to develop models for the evaluation of e-Marketplace websites. This chapter, after a discussion of the development of e-Marketplaces in the transport and logistics service industry and their application in the aviation industry, proposes a multi-criteria model for assessing different types of aeronautic B2B e-Marketplaces.
Abstract:
A new generation of surface plasmonic optical fibre sensors is fabricated using multiple coatings deposited on a lapped section of a single-mode fibre. Post-deposition UV laser irradiation using a phase mask produces a nano-scaled surface relief grating structure resembling nano-wires. The overall length of the individual corrugations is approximately 14 μm, with an average full width at half maximum of 100 nm. Evidence is presented to show that these surface structures result from material compaction created by the silicon dioxide and germanium layers in the multi-layered coating, and that the surface topology is capable of supporting localised surface plasmons. The coating compaction induces a strain gradient into the D-shaped optical fibre that generates an asymmetric periodic refractive index profile, which enhances the coupling of light from the core of the fibre to plasmons on the surface of the coating. Experimental data are presented that show changes in spectral characteristics after UV processing and that the performance of the sensors improves relative to their pre-UV-irradiation state. The enhanced performance is illustrated with regard to changes in external refractive index, demonstrating high spectral sensitivities in gaseous and aqueous index regimes, ranging up to 4000 nm/RIU for wavelength and 800 dB/RIU for intensity. The devices generate surface plasmons over a very large wavelength range (visible to 2 μm), depending on the polarization state of the illuminating light. © 2013 SPIE.
Abstract:
Novel surface plasmonic optical fiber sensors have been fabricated using multiple coatings deposited on a lapped section of a single-mode fiber. UV laser irradiation processing with a phase mask produces a nano-scaled surface relief grating structure resembling nano-wires. The resulting individual corrugations, produced by material compaction, are approximately 20 μm long with an average full width at half maximum of 100 nm, and generate localized surface plasmons. Experimental data are presented that show changes in the spectral characteristics after UV processing, coupled with an overall increase in the sensitivity of the devices to the surrounding refractive index. Evidence is presented that there is an optimum UV dosage (48 joules) beyond which no significant additional optical change is observed. The devices are characterized with regard to changes in refractive index, where significantly high spectral sensitivities in the aqueous index regime are found, ranging up to 4000 nm/RIU for wavelength and 800 dB/RIU for intensity. © 2013 Optical Society of America.
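The figures of merit quoted here are slopes of the device response against the surrounding refractive index. A minimal sketch of that calculation, with illustrative measurement values (not data from the paper):

```python
# The quoted sensitivities are slopes of device response against surrounding
# refractive index: nm/RIU for the resonance wavelength, dB/RIU for intensity.
# The measurement values below are illustrative, not data from the paper.

def sensitivity(resp_low: float, resp_high: float, n_low: float, n_high: float) -> float:
    """Generic slope in (response units) per refractive index unit (RIU)."""
    return (resp_high - resp_low) / (n_high - n_low)

# e.g. a resonance shift of 4 nm for an index step of 0.001 RIU -> 4000 nm/RIU
s_wavelength = sensitivity(1550.0, 1554.0, 1.3330, 1.3340)  # nm/RIU
s_intensity = sensitivity(-20.0, -19.2, 1.3330, 1.3340)     # dB/RIU
```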
Abstract:
Ethylene-propylene rubber (EPR) functionalised with glycidyl methacrylate (GMA) (f-EPR) during melt processing in the presence of a co-monomer, such as trimethylolpropane triacrylate (Tris), was used to promote compatibilisation in blends of polyethylene terephthalate (PET) and f-EPR, and their characteristics were compared with those of PET/f-EPR reactive blends in which the f-EPR was functionalised with GMA via a conventional free-radical melt reaction (in the absence of a co-monomer). Binary blends of PET and f-EPR (with two types of f-EPR, prepared either in the presence or absence of the co-monomer) with various compositions (80/20, 60/40 and 50/50 w/w%) were prepared in an internal mixer. The blends were evaluated by their rheology (from changes in torque during melt processing and blending, reflecting melt viscosity, and from their melt flow rate), morphology by scanning electron microscopy (SEM), dynamic mechanical properties (DMA), Fourier transform infrared (FTIR) analysis, and a solubility (Molau) test. The reactive blends (PET/f-EPR) showed a marked increase in melt viscosity in comparison with the corresponding physical (PET/EPR) blends (higher torque during melt blending), the extent of which depended on the amount of homopolymerised GMA (poly-GMA) present and the level of GMA grafting in the f-EPR. This increase was most probably accounted for by a reaction between the epoxy groups of GMA and the hydroxyl/carboxyl end groups of PET. Morphological examination by SEM showed a large improvement in phase dispersion, indicating reduced interfacial tension and compatibilisation, in both reactive blends, with the Tris-GMA-based blends showing an even finer morphology (these blends are characterised by the absence of poly-GMA and a higher level of grafted GMA in their f-EPR component compared with the conventional GMA-based blends).
Examination of the DMA results for the reactive blends at different compositions showed that in both cases there was a smaller separation between the glass transition temperatures than in the corresponding physical blends, which pointed to some interaction or chemical reaction between f-EPR and PET. The DMA results also showed that the shifts in the Tgs of the Tris-GMA-based blends were slightly higher than for the conventional GMA-based blends. However, the overall tendency of the Tgs to approach each other was found not to be significantly different in the two cases (e.g. at a 60/40 ratio the former blend shifted by up to 4.5 °C in each direction whereas in the latter blend the shifts were about 3 °C). These results would suggest that in these blends the SEM and DMA analyses are probing uncorrelated morphological details. The formation of an in situ graft copolymer between the f-EPR and PET during reactive blending was clearly illustrated by FTIR analysis of the separated phases from the Tris-GMA-based reactive blends, and the positive Molau test pointed to graft copolymerisation at the interface. A mechanism for the interfacial reaction during the reactive blending process is proposed.
Abstract:
In this paper, we demonstrate the possibility of reaching a quasi-stable nonlinear transmission regime with carrier pulses of 12.5 ps width in multi-channel 40 Gbit/s systems. The quasi-stable pulses presented in this work for the first time are not dispersion-managed solitons; they are instead supported by a large normal span-average dispersion and unbalanced optical amplification, and represent a new type of nonlinear carrier.
Abstract:
Error and uncertainty in remotely sensed data come from several sources, and can be increased or mitigated by the processing to which those data are subjected (e.g. resampling, atmospheric correction). Historically, the effects of such uncertainty have only been considered overall and evaluated in a confusion matrix, which becomes high-level metadata and so is commonly ignored. However, some of the sources of uncertainty can be explicitly identified and modelled, and their effects (which often vary across space and time) visualized. Others can be considered overall, but their spatial effects can still be visualized. This process of visualization is of particular value for users who need to assess the importance of data uncertainty for their own practical applications. This paper describes a Java-based toolkit which uses interactive and linked views to enable visualization of data uncertainty by a variety of means. This allows users to consider error and uncertainty as integral elements of image data, to be viewed and explored, rather than as labels or indices attached to the data. © 2002 Elsevier Science Ltd. All rights reserved.
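As a minimal illustration of one way a spatially varying uncertainty surface can be derived for such visualization (the toolkit itself is Java-based; this Python sketch and its probability values are purely illustrative), the Shannon entropy of each pixel's class-membership probabilities can be displayed in a linked view beside the classified image:

```python
# One way per-pixel classification uncertainty can be derived for display:
# Shannon entropy of each pixel's class-membership probabilities. This sketch
# and its probability values are purely illustrative.
import numpy as np

def uncertainty_entropy(prob_stack: np.ndarray) -> np.ndarray:
    """prob_stack has shape (classes, rows, cols); returns entropy per pixel."""
    p = np.clip(prob_stack, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=0)  # 0 = certain, log2(n_classes) = maximal

# Two pixels: one confidently classified, one ambiguous between three classes.
probs = np.array([[[0.98], [1 / 3]],
                  [[0.01], [1 / 3]],
                  [[0.01], [1 / 3]]])
h = uncertainty_entropy(probs)  # low entropy for the first pixel, ~1.585 bits for the second
```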
Abstract:
Bio-impedance analysis (BIA) provides a rapid, non-invasive technique for body composition estimation. BIA offers a convenient alternative to standard techniques such as MRI, CT scan or DEXA scan for selected types of body composition analysis. The accuracy of BIA is limited because it is an indirect method of composition analysis: it relies on linear relationships between measured impedance and morphological parameters such as height and weight to derive estimates. To overcome these underlying limitations of BIA, a multi-frequency segmental bio-impedance device was constructed through a series of iterative enhancements and improvements of existing BIA instrumentation. Key features of the design included an easy-to-construct current source and a compact PCB design. The final device was trialled with 22 human volunteers, and measured impedance was compared against body composition estimates obtained by DEXA scan. This enabled the development of newer techniques for making BIA predictions. To add a 'visual aspect' to BIA, volunteers were scanned in 3D using an inexpensive scattered-light device (Xbox Kinect controller) and 3D volumes of their limbs were compared with BIA measurements to further improve BIA predictions. A three-stage digital filtering scheme was also implemented to enable extraction of heart-rate data from recorded bio-electrical signals. Additionally, modifications were introduced to measure changes in bio-impedance with motion; these could be adapted to further improve the accuracy and reliability of limb composition analysis. The findings in this thesis aim to give new direction to the prediction of body composition using BIA. The design development and refinement applied to BIA in this research programme suggest new opportunities to enhance the accuracy and clinical utility of BIA for the prediction of body composition.
In particular, the use of bio-impedance to predict limb volumes would provide an additional metric for body composition measurement and help distinguish between fat and muscle content.
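The linear relationships BIA relies on can be sketched as follows. The impedance index height²/Z is the standard predictor in BIA regression equations, but the coefficients below are hypothetical placeholders, not fitted values from the thesis:

```python
# Sketch of the linear impedance-to-composition relationship BIA relies on.
# The impedance index height^2 / Z is the standard predictor; the regression
# coefficients a, b, c below are HYPOTHETICAL placeholders, not fitted values
# from the thesis.

def fat_free_mass_kg(height_cm: float, impedance_ohm: float, weight_kg: float,
                     a: float = 0.5, b: float = 0.2, c: float = 5.0) -> float:
    """Estimate fat-free mass from a single-frequency impedance measurement."""
    impedance_index = height_cm ** 2 / impedance_ohm  # cm^2 / ohm
    return a * impedance_index + b * weight_kg + c

ffm = fat_free_mass_kg(height_cm=175, impedance_ohm=500, weight_kg=70)
fat_mass = 70 - ffm  # remainder of body weight attributed to fat mass
```

A multi-frequency segmental device refines this scheme by fitting separate coefficients per frequency and per body segment.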
Abstract:
Dual-polarization multi-band orthogonal frequency-division multiplexing (DP-MB-OFDM) signals are optimized at data rates up to 500 Gb/s in dispersion compensation fiber (DCF)-free long-haul links. We compare different polarization crosstalk compensation techniques and explore the DP-MB-OFDM transmission distance limitations. In addition, the impact of the OFDM subcarrier number and of high-level modulation formats on the transmission performance is also investigated. Finally, a comparison is made between DP-MB-OFDM and the industrial solution of single-carrier DP quaternary phase-shift keying (DP-QPSK) at 1000 km.
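As background, the OFDM principle underlying these multi-band signals can be sketched in a few lines: QPSK symbols are mapped onto orthogonal subcarriers with an IFFT, a cyclic prefix is added, and an FFT at the receiver recovers them. The parameters are illustrative and an ideal channel is assumed:

```python
# Minimal sketch of the OFDM principle behind DP-MB-OFDM: QPSK symbols are
# mapped onto orthogonal subcarriers with an IFFT, a cyclic prefix is added,
# and an FFT at the receiver recovers them. Ideal channel; parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers, cp_len = 64, 16

bits = rng.integers(0, 2, size=2 * n_subcarriers)
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK mapping

time_signal = np.fft.ifft(qpsk)                            # one OFDM symbol
tx = np.concatenate([time_signal[-cp_len:], time_signal])  # prepend cyclic prefix

rx = tx[cp_len:]                  # receiver: strip the cyclic prefix...
recovered = np.fft.fft(rx)        # ...and FFT back to subcarrier symbols
```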
Abstract:
Background - The main processing pathway for MHC class I ligands involves degradation of proteins by the proteasome, followed by transport of products by the transporter associated with antigen processing (TAP) to the endoplasmic reticulum (ER), where peptides are bound by MHC class I molecules, and then presented on the cell surface by MHCs. The whole process is modeled here using an integrated approach, which we call EpiJen. EpiJen is based on quantitative matrices, derived by the additive method, and applied successively to select epitopes. EpiJen is available free online. Results - To identify epitopes, a source protein is passed through four steps: proteasome cleavage, TAP transport, MHC binding and epitope selection. At each stage, different proportions of non-epitopes are eliminated. The final set of peptides represents no more than 5% of the whole protein sequence and will contain 85% of the true epitopes, as indicated by external validation. Compared to other integrated methods (NetCTL, WAPP and SMM), EpiJen performs best, predicting 61 of the 99 HIV epitopes used in this study. Conclusion - EpiJen is a reliable multi-step algorithm for T cell epitope prediction, which belongs to the next generation of in silico T cell epitope identification methods. These methods aim to reduce subsequent experimental work by improving the success rate of epitope prediction.
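The successive-filtering idea behind EpiJen can be sketched as follows; the additive matrices and keep-fractions here are toy placeholders, not the published EpiJen models:

```python
# Toy sketch of a successive-filtering epitope pipeline in the spirit of EpiJen:
# each stage scores 9-mer peptides with an additive (position-weight) matrix
# and discards low scorers. Matrices and keep-fractions are placeholders,
# not the published EpiJen models.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def make_toy_matrix(length=9, seed=0):
    rng = random.Random(seed)
    return [{aa: rng.uniform(-1, 1) for aa in AMINO_ACIDS} for _ in range(length)]

def additive_score(peptide, matrix):
    # Additive method: the peptide score is the sum of per-position contributions.
    return sum(matrix[i][aa] for i, aa in enumerate(peptide))

def pipeline(protein, stages, keep_fraction=0.5):
    peptides = [protein[i:i + 9] for i in range(len(protein) - 8)]
    for matrix in stages:  # e.g. proteasome cleavage, TAP transport, MHC binding
        ranked = sorted(peptides, key=lambda p: additive_score(p, matrix), reverse=True)
        peptides = ranked[:max(1, int(len(ranked) * keep_fraction))]
    return peptides

stages = [make_toy_matrix(seed=s) for s in range(3)]
candidates = pipeline("ACDEFGHIKLMNPQRSTVWY" * 3, stages)  # 52 -> 26 -> 13 -> 6
```

Each stage eliminates a different proportion of non-epitopes, mirroring how the final EpiJen set shrinks to a small fraction of the protein sequence.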
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, recently graphics processing unit (GPU) based data processing methods have been developed to minimize this data processing and rendering time.
These processing techniques include standard-processing methods, which comprise a set of algorithms to process the raw data (interference) obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterization and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterize OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
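The standard-processing chain that turns a raw spectral interferogram into an A-scan can be sketched as below. NumPy stands in here for the GPU implementation, and the synthetic fringe is illustrative:

```python
# Sketch of the standard FD-OCT processing chain: background subtraction,
# apodisation, inverse FFT, log-magnitude. NumPy stands in for the GPU
# (e.g. CUDA) implementation; the synthetic fringe below is illustrative.
import numpy as np

def process_a_scan(interferogram: np.ndarray, background: np.ndarray) -> np.ndarray:
    fringe = interferogram - background          # remove DC / reference spectrum
    fringe = fringe * np.hanning(fringe.size)    # apodise to suppress sidelobes
    depth_profile = np.fft.ifft(fringe)          # spectral domain -> depth domain
    half = depth_profile[: fringe.size // 2]     # keep positive depths only
    return 20 * np.log10(np.abs(half) + 1e-12)   # amplitude in dB

k = np.linspace(0, 2 * np.pi, 2048)             # sampled wavenumber axis
raw = 1.0 + 0.5 * np.cos(100 * k)               # single reflector -> one fringe
a_scan = process_a_scan(raw, np.ones_like(raw)) # peak appears near depth bin 100
```

Real systems add a k-linearisation (resampling) step before the FFT; on a GPU each stage maps naturally onto batched array kernels, which is what makes camera-limited real-time throughput achievable.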
Abstract:
Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and can therefore be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties, such as a predominance of errors on consonants rather than vowels, but not with other measures, such as the rate of words reproduced correctly or the rate of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they differ across patients. Instead, the different patterns: (1) provide evidence for separate impairments and for the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they only apply in patients with associated articulatory difficulties.
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification with 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the value typical of steam gasification owing to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
The activation energy (EA) and pre-exponential factor (k0 in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
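The Arrhenius fitting step can be illustrated directly: since ln k = ln k0 - EA/(R·T), regressing ln k against 1/T gives EA from the slope and ln k0 from the intercept. The rate data below are synthetic, generated from the pyrolysis values quoted above and then recovered by the fit:

```python
# Arrhenius fitting: ln k = ln k0 - EA / (R * T), so regressing ln k against
# 1/T gives slope = -EA/R and intercept = ln k0. The rate data below are
# synthetic, generated from the quoted pyrolysis values (EA = 80 kJ/mol,
# ln k0 = 14) and then recovered by the fit.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(T_kelvin: np.ndarray, k_rate: np.ndarray):
    """Return (EA in J/mol, ln k0) from a least-squares Arrhenius plot."""
    slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(k_rate), 1)
    return -slope * R, intercept

T = np.array([700.0, 800.0, 900.0, 1000.0])
k = np.exp(14.0 - 80e3 / (R * T))   # synthetic rate constants
EA, ln_k0 = arrhenius_fit(T, k)     # recovers EA ~ 80 kJ/mol, ln k0 ~ 14
```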
Abstract:
* The following text was originally published in the Proceedings of the Language Resources and Evaluation Conference held in Lisbon, Portugal, 2004, under the title "Towards Intelligent Written Cultural Heritage Processing - Lexical processing". I present here a revised version of the aforementioned contribution, to which I add the latest efforts made at the Center for Computational Linguistics in Prague in the field under discussion.