113 results for Propagation ontology
Abstract:
Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with the Drude model for plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4,5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
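The abstract does not spell out the numerical scheme, but generalized NLSE propagation is commonly integrated with the split-step Fourier method, which maps well to GPUs because each step is an FFT plus a pointwise nonlinear phase. Below is a minimal serial sketch under simplifying assumptions (pure Kerr nonlinearity and group-velocity dispersion only; the plasma/Drude terms of [3] are omitted, and the sign convention for the dispersion operator is one of several in use):

```python
import numpy as np

def split_step_nlse(A0, dz, nz, dt, beta2=-1.0, gamma=1.0):
    """Propagate envelope A0(t) over nz steps of size dz with the
    symmetric split-step Fourier method (Kerr nonlinearity only)."""
    n = len(A0)
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)          # angular-frequency grid
    half_disp = np.exp(-0.25j * beta2 * w**2 * dz)   # dispersion over dz/2
    A = A0.astype(complex)
    for _ in range(nz):
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # half dispersion step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # half dispersion step
    return A

t = np.linspace(-10, 10, 1024)
pulse = 1 / np.cosh(t)                               # sech input pulse
out = split_step_nlse(pulse, dz=0.01, nz=100, dt=t[1] - t[0])
```

Both sub-steps are unitary, so the pulse energy is conserved up to FFT round-off; on a GPU, each line inside the loop becomes a batched FFT or an element-wise kernel.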
Abstract:
Distributed fibre sensors provide unique capabilities for monitoring large infrastructures with high resolution. Practically all of these sensors are based on some kind of backscattering interaction: a pulsed activating signal is launched into one end of the sensing fibre, and the backscattered signal is read as a function of the time of flight of the pulse along the fibre. A key limitation on the measurement range of all these sensors is introduced by fibre attenuation. As the pulse travels along the fibre, losses cause a drop in signal contrast and consequently a growth in measurement uncertainty. In typical single-mode fibres, attenuation imposes a range limit of less than 30 km for resolutions on the order of 1-2 meters. An interesting improvement in this performance can be obtained by using distributed amplification along the fibre [1]. Distributed amplification yields a more homogeneous signal power along the sensing fibre, which also enables reducing the signal power at the input and therefore avoiding nonlinearities. However, in long structures (≥ 50 km), plain distributed amplification does not perfectly compensate the losses, and significant power variations along the fibre are to be expected, leading to inevitable limitations in the measurements. From this perspective, it is intuitively clear that the best possible solution for distributed sensors would be a virtually transparent fibre, i.e. a fibre exhibiting effectively zero attenuation in the spectral region of the pulse. In addition, it can be shown that lossless transmission is the working point that minimizes the amplified spontaneous emission (ASE) noise build-up. © 2011 IEEE.
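The range limit quoted above follows directly from the two-way attenuation seen by a backscattered signal. A quick sketch, assuming a typical single-mode fibre attenuation of 0.2 dB/km (a representative value, not stated in the abstract):

```python
def roundtrip_loss_db(distance_km, alpha_db_per_km=0.2):
    """Two-way loss experienced by light backscattered from a given distance."""
    return 2 * alpha_db_per_km * distance_km

def power_ratio(loss_db):
    """Linear power ratio corresponding to a loss expressed in dB."""
    return 10 ** (-loss_db / 10)

loss_30km = roundtrip_loss_db(30)     # 12 dB of round-trip loss at 30 km
ratio_30km = power_ratio(loss_30km)   # the signal drops to roughly 6% of its power
```

At 50 km the round-trip loss reaches 20 dB (a factor of 100 in power), which is why plain distributed amplification struggles to keep the trace contrast uniform over such spans.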
Abstract:
We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been developed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
Abstract:
Software architecture plays an essential role in the high-level description of a system design, where structure and communication are emphasized. Despite its importance in the software engineering process, the lack of formal description and automated verification hinders the development of good software architecture models. In this paper, we present an approach to support the rigorous design and verification of software architecture models using semantic web technology. We view software architecture models as ontology representations, where their structures and communication constraints are captured by the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL). Specific configurations of the design are represented as concrete instances of the ontology, to which their structures and dynamic behaviors must conform. Furthermore, ontology reasoning tools can be applied to perform various automated verification tasks on the design to ensure correctness, such as consistency checking, style recognition, and behavioral inference.
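The core idea of checking that concrete configurations conform to architectural constraints can be illustrated without an OWL reasoner. The toy pipe-and-filter vocabulary below is hypothetical and not taken from the paper; it merely mimics an OWL domain/range restriction on a "connects" property:

```python
# Toy pipe-and-filter style: every connection must run from a Filter
# to another Filter, mimicking an OWL domain/range constraint.
def check_configuration(instances, connections):
    """Return the connections that violate the style constraint.

    instances:   maps component name -> component type
    connections: list of (source, destination) component names
    """
    return [
        (src, dst)
        for src, dst in connections
        if instances.get(src) != "Filter" or instances.get(dst) != "Filter"
    ]

instances = {"parser": "Filter", "logger": "Filter", "db": "Store"}
ok = check_configuration(instances, [("parser", "logger")])   # no violations
bad = check_configuration(instances, [("parser", "db")])      # one violation
```

In the OWL/SWRL setting described in the abstract, the same check falls out of ordinary consistency checking: an instance violating the property restriction makes the ontology inconsistent, which the reasoner reports automatically.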
Abstract:
eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end-user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of types and levels of uncertainty. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties along a potentially long chain of model services. This integration of complex resources increases the challenges in dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" of the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. It is the objective of this paper to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainties the WPS has to deal with and to present the benefits of the UncertWeb framework.
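Propagating quantified uncertainty through a chain of model services, as envisaged above, is often done by Monte Carlo sampling: each service consumes realisations of its uncertain inputs and emits realisations of its outputs. A minimal sketch, where both model functions are arbitrary stand-ins rather than eHabitat's actual models:

```python
import random
import statistics

def habitat_model(x):
    return 2.0 * x + 1.0          # first (stand-in) model service

def forecast_model(y):
    return y ** 2                 # second, chained (stand-in) model service

def propagate(mean, sd, n=20000, seed=42):
    """Push n realisations of a Gaussian input through the chained models
    and summarise the resulting output distribution."""
    rng = random.Random(seed)
    outputs = [forecast_model(habitat_model(rng.gauss(mean, sd)))
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

out_mean, out_sd = propagate(1.0, 0.1)   # mean near 9, with propagated spread
```

The output spread quantifies how input uncertainty grows through the chain; in UncertWeb terms, each service would exchange such distributions (or summaries of them) instead of single values.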
Abstract:
We investigate quantum vortex ring dynamics at scales smaller than the inter-vortex spacing in quantum turbulence. Through geometrical arguments and high-resolution numerical simulations, we examine the validity of simple estimates for the mean free path and the structure of vortex rings after reconnection. We find that a large proportion of vortex rings remain coherent objects in which approximately 75% of their energy is preserved. This leads us to consider the effectiveness of energy transport in turbulent tangles. Moreover, we show that in low-density tangles, appropriate for the ultra-quantum regime, ring emission cannot be ruled out as an important mechanism for energy dissipation. However, at higher vortex line densities, typically associated with the quasi-classical regime, loop emission is expected to make a negligible contribution to energy dissipation, even allowing for the fact that our work shows rings can survive multiple reconnection events. Hence the Kelvin wave cascade seems the most plausible mechanism leading to energy dissipation.
Abstract:
The ontology engineering research community has focused for many years on supporting the creation, development and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, instead represents a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability to describe and explore the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t + 1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers in the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision-at-ten compared to the baselines when forecasting over 5 years.
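The evaluation metric mentioned above, mean average precision-at-ten, has a standard definition; a minimal sketch of the metric itself (not of the SIF model) follows:

```python
def average_precision_at_k(predicted, relevant, k=10):
    """AP@k: average of the precision values at each rank where a hit occurs,
    normalised by the maximum number of hits achievable in the top k."""
    hits, score = 0, 0.0
    for rank, item in enumerate(predicted[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(runs, k=10):
    """Mean AP@k over a set of (predicted ranking, relevant set) pairs."""
    return sum(average_precision_at_k(p, r, k) for p, r in runs) / len(runs)

runs = [(["a", "b", "c"], {"a", "c"}),   # hits at ranks 1 and 3
        (["x", "y"], {"y"})]             # hit at rank 2
score = map_at_k(runs)                   # ((1/1 + 2/3)/2 + 1/2) / 2 = 2/3
```

Here the "relevant" sets would be the concepts that actually entered the ontology at time t + 1, and each "predicted" list a ranking produced by SIF or a baseline.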
Abstract:
An important parameter of an integrated optical device is the propagation loss of the waveguide. Its characterization provides information on the fabrication quality and, since the waveguide is the basic building block of the passive devices on the chip, on those devices as well. Although many methods have been developed over the last three decades, no single standard exists yet. This paper presents a comparative analysis of long-established methods as well as methods developed very recently, in order to provide a complete picture of the pros and cons of the different types of methods; from this comparison, the best method is suggested in the authors' opinion. To support this claim, apart from the analytical comparison, the paper also presents a comparison of experimental results between the suggested best method, recently proposed by Massachusetts Institute of Technology (MIT) researchers and based on an undercoupled all-pass microring structure, and the popular cut-back method.
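The cut-back method referenced above infers propagation loss from the transmission measured at two different waveguide lengths, dividing the extra loss of the longer piece by the length difference. A small worked sketch, with illustrative values that are not taken from the paper:

```python
import math

def cutback_loss_db_per_cm(p_long, p_short, len_long_cm, len_short_cm):
    """Propagation loss from output powers measured through a long waveguide
    and through the same waveguide cut back to a shorter length."""
    delta_db = 10 * math.log10(p_short / p_long)   # extra loss of the longer piece
    return delta_db / (len_long_cm - len_short_cm)

# e.g. 1.0 mW emerges from a 1 cm piece, 0.5 mW from a 4 cm piece
loss = cutback_loss_db_per_cm(p_long=0.5, p_short=1.0,
                              len_long_cm=4.0, len_short_cm=1.0)
# ~1.0 dB/cm of propagation loss
```

Using the length difference cancels the (unknown but identical) coupling losses at the facets, which is the main appeal of the method; its drawback, as the comparison in the paper highlights, is that it is destructive and requires multiple measurements.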