931 results for Router ottico, Click, Reti ottiche, linux
Abstract:
Reflective modulators based on the combination of an electroabsorption modulator (EAM) and semiconductor optical amplifier (SOA) are attractive devices for applications in long reach carrier distributed passive optical networks (PONs) due to the gain provided by the SOA and the high speed and low chirp modulation of the EAM. Integrated R-EAM-SOAs have experimentally shown two unexpected and unintuitive characteristics which are not observed in a single pass transmission SOA: the clamping of the output power of the device around a maximum value, and low patterning distortion despite the SOA being in a regime of gain saturation. In this thesis a detailed analysis is carried out using both experimental measurements and modelling in order to understand these phenomena. For the first time it is shown that both the internal loss between SOA and R-EAM and the SOA gain play an integral role in the behaviour of gain saturated R-EAM-SOAs. Internal loss and SOA gain are also optimised for use in carrier distributed PONs in order to access both the positive effect of output power clamping, and hence upstream dynamic range reduction, combined with low patterning operation of the SOA. Reflective concepts are also gaining interest for metro transport networks and short reach, high bit rate, inter-datacentre links. Moving the optical carrier generation away from the transmitter also has potential advantages for these applications, as it avoids the need for cooled photonics being placed directly on hot router line-cards. A detailed analysis is carried out in this thesis on a novel colourless reflective duobinary modulator, which would enable wavelength flexibility in a power-efficient reflective metro node.
Abstract:
This book originally accompanied a 2-day course on using the LaTeX typesetting system. It has been extensively revised and updated and can now be used for self-study or in the classroom. It is aimed at users of Linux, Macintosh, or Microsoft Windows, but it can be used with LaTeX systems on any platform, including other Unix workstations, mainframes, and even your Personal Digital Assistant (PDA).
Abstract:
The objective of this project was to prepare a range of 4-substituted 3-(2H)-furanones, and to investigate the relationship between their molecular structures and photoluminescence properties. The effects of substituents and of the conjugated linker unit were also investigated. After generation of the key 3-(2H)-furanone heterocycle, extension of the conjugated framework at the C-4 position was achieved through Pd(0)-catalysed coupling reactions. Chapter one of the thesis comprises a review of the relevant literature and is split into three sections. These include information about the prevalence of 3-(2H)-furanones as natural products and synthetic routes to 3-(2H)-furanones in general. The synthetic routes are divided according to the synthetic precursor employed. The final section of chapter one outlines the fundamental principles and application of photoluminescence to organic compounds in general. Chapter two contains the results of the research achieved in the course of this work and a discussion of the findings. Two routes were successfully employed to generate 4-unsubstituted 3-(2H)-furanone moieties: (i) base induced cyclisation of hydroxyenones and (ii) isoxazole chemistry. A number of methods which proved ineffective in the production of furanones with the desired substitution pattern are also detailed. The majority of this study was focused on the introduction of substituents at the C-4 position of the 3-(2H)-furanone ring. This was achieved through the use of Sonogashira and Suzuki cross coupling protocols for Pd(0)-catalysed C-C bond formation. The further functionalisation of some compounds was performed using transfer hydrogenation and "click chemistry" methodologies. Finally, the photophysical properties of the 3-(2H)-furanones prepared in this project are discussed, and the effect of substitution patterns in complementary "push push" and "push pull" manners has also been investigated.
All the experimental data and details of the synthetic methods employed for the compounds prepared during the course of this research are contained in chapter three, together with the spectroscopic and analytical properties of the compounds prepared.
Abstract:
BACKGROUND: Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. RESULTS: We have developed a CUDA based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. CONCLUSIONS: permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
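permGPU itself is a CUDA/R application, but the resampling scheme it accelerates is straightforward to sketch. The plain-Python example below uses an illustrative two-group difference-in-means statistic and invented data; it is not permGPU's API or one of its six test statistics:

```python
import random
import statistics

def perm_pvalue(values, labels, n_perm=10000, seed=1):
    """Permutation p-value for a two-group difference-in-means statistic.

    values: measurements for one probe; labels: 0/1 group membership.
    Each permutation shuffles the labels and recomputes the statistic.
    """
    rng = random.Random(seed)

    def stat(lbls):
        g0 = [v for v, l in zip(values, lbls) if l == 0]
        g1 = [v for v, l in zip(values, lbls) if l == 1]
        return abs(statistics.mean(g1) - statistics.mean(g0))

    observed = stat(labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)           # permute group labels in place
        if stat(shuffled) >= observed:  # as-or-more-extreme statistic
            hits += 1
    return (hits + 1) / (n_perm + 1)    # add-one correction avoids p == 0
```

Each probe's permutation loop is independent of every other probe's, which is what makes the problem "embarrassingly parallel" and a natural fit for a GPU.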
Abstract:
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.
Abstract:
Nowadays, multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, MTD availability leads many teachers to use them in their classrooms, to support the use of the devices by students, or to assume that they will enhance the learning process. Despite the rising interest in MTD, little research exists studying their impact in terms of performance or the suitability of the technology for the learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favourable outcome. The complexity of generating accurate finger gestures, and the split attention this requires (multi-tasking effect), make interacting with a touch-sensitive screen more difficult than traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as do the available cognitive resources, making the users' task engagement more difficult. Furthermore, the presented study takes into account the moderating effect of previous experience with MTD. Two key factors from technology adoption theories were included in the study: familiarity and self-efficacy with the technology. Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The tasks were designed to involve the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, key word encoding…). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity and task engagement scales were adapted to the MTD context.
Furthermore, eye-tracking measurements offer additional information about user behaviour and cognitive load. Our study aims to clarify some important aspects of MTD usage and its added value compared to a laptop in a student learning context. More precisely, the outcomes will clarify the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, and will provide some interesting insights into the user experience with such devices.
Abstract:
Over the last decade, multi-touch devices (MTD) have spread in a range of contexts. In the learning context, MTD accessibility leads more and more teachers to use them in their classrooms, assuming that this will improve the learning activities. Despite a growing interest, only a few studies have focused on the impact of MTD use in terms of performance and suitability in a learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web surfing), we notice that the use of MTD may lead to a less favorable outcome. More precisely, tasks that require users to generate complex and/or less common gestures may increase extrinsic cognitive load and impair performance, especially for intrinsically complex tasks. It is hypothesized that task and gesture complexity will tax users' cognitive resources and decrease task efficacy and efficiency. Because MTD are supposed to be more appealing, it is assumed that they will also affect cognitive absorption. The present study also takes into account users' prior knowledge of MTD use and gestures, using experience with MTD as a moderator. Sixty university students were asked to perform information search tasks on an online encyclopedia. Tasks were set up so that users had to generate the most commonly used mouse actions (e.g. left/right click, scrolling, zooming, text encoding…). Two conditions were created, MTD use and laptop use (with mouse and keyboard), in order to compare the two devices. An eye tracking device was used to measure users' attention and cognitive load. Our study sheds light on some important aspects of MTD use and its added value compared to a laptop in a student learning context.
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
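The abstract does not give Concert's internals, but the proactive idea it describes — collect and cache load information before it is needed, then make placement decisions from the cache rather than polling at submission time — can be illustrated with a small sketch. Class and method names here are invented for illustration and are not Concert's actual (partly in-kernel) implementation:

```python
import time

class LoadCache:
    """Proactive load-sharing sketch: nodes report load periodically,
    and task placement reads the cached values instead of reacting to
    an imbalance after it has developed."""

    def __init__(self, staleness=5.0):
        self.staleness = staleness  # seconds before a report is ignored
        self.loads = {}             # node -> (load, report timestamp)

    def report(self, node, load, now=None):
        """Record a load report pushed by a node in advance of need."""
        self.loads[node] = (load, now if now is not None else time.time())

    def pick_node(self, now=None):
        """Pick the least-loaded node with a fresh report, else None
        (meaning: no usable information, run the task locally)."""
        now = now if now is not None else time.time()
        fresh = [(load, node) for node, (load, ts) in self.loads.items()
                 if now - ts <= self.staleness]
        if not fresh:
            return None
        return min(fresh)[1]
```

The staleness cut-off reflects the trade-off the paper's approach implies: cached information is cheap to consult but must be refreshed often enough to remain trustworthy.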
Abstract:
Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring and atmospheric studies. Single flight lines of airborne hyperspectral data are often in the region of tens of gigabytes in size. This means that a single aircraft can collect terabytes of remotely sensed hyperspectral data during a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude and then geocorrected. To enable efficient processing of these large datasets, the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject and resample airborne data. Each stage of the toolbox outputs data in the common Band Interleaved by Line (BIL) format, which allows its integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable for use on a workstation PC as well as for the facility's automated processing; to this end APL can be used under both Windows and Linux environments, on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms and approach. We present example results from using APL with an AISA Eagle sensor, and we assess its spatial accuracy using data from multiple flight lines collected during a campaign in 2008 together with in situ surveyed ground control points.
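The BIL layout mentioned above has simple offset arithmetic: within each image line, all bands are stored consecutively before the next line begins. A minimal sketch of random access into a headerless BIL file follows; the function name, argument layout and little-endian float32 samples are assumptions for illustration, not part of APL:

```python
import struct

def read_bil_pixel(path, line, band, sample,
                   n_bands, n_samples, dtype_fmt="<f", item_size=4):
    """Read one pixel value from a headerless BIL file.

    In band-interleaved-by-line order the flat item index is:
        (line * n_bands + band) * n_samples + sample
    i.e. line 0/band 0, line 0/band 1, ..., then line 1/band 0.
    """
    index = (line * n_bands + band) * n_samples + sample
    with open(path, "rb") as f:
        f.seek(index * item_size)  # jump straight to the pixel
        return struct.unpack(dtype_fmt, f.read(item_size))[0]
```

Because a whole line of all bands is contiguous, BIL gives reasonable access patterns both for per-band and per-pixel (spectral) reads, which is why it is a common interchange format between remote sensing packages.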
Abstract:
1. Understanding which environmental factors drive foraging preferences is critical for the development of effective management measures, but resource use patterns may emerge from processes that occur at different spatial and temporal scales. Direct observations of foraging are also especially challenging in marine predators, but passive acoustic techniques provide opportunities to study the behaviour of echolocating species over a range of scales.
2. We used an extensive passive acoustic data set to investigate the distribution and temporal dynamics of foraging in bottlenose dolphins using the Moray Firth (Scotland, UK). Echolocation buzzes were identified with a mixture model of detected echolocation inter-click intervals and used as a proxy of foraging activity. A robust modelling approach accounting for autocorrelation in the data was then used to evaluate which environmental factors were associated with the observed dynamics at two different spatial and temporal scales.
3. At a broad scale, foraging varied seasonally and was also affected by seabed slope and shelf-sea fronts. At a finer scale, we identified variation in seasonal use and local interactions with tidal processes. Foraging was best predicted at a daily scale, accounting for site specificity in the shape of the estimated relationships.
4. This study demonstrates how passive acoustic data can be used to understand foraging ecology in echolocating species and provides a robust analytical procedure for describing spatio-temporal patterns. Associations between foraging and environmental characteristics varied according to spatial and temporal scale, highlighting the need for a multi-scale approach. Our results indicate that dolphins respond to coarser scale temporal dynamics, but have a detailed understanding of finer-scale spatial distribution of resources.
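As a rough illustration of the mixture-model step — separating echolocation inter-click intervals (ICIs) into a short "buzz" component and a longer regular component — here is a tiny one-dimensional two-component Gaussian EM fit. This is a generic sketch, not the authors' model; the log-ICI values in the test are invented:

```python
import math

def fit_two_gaussians(xs, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (means, sds, weights). Applied to log inter-click intervals,
    the component with the smaller mean would correspond to buzzes.
    """
    lo, hi = min(xs), max(xs)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]  # spread the start
    sd = [max((hi - lo) / 4.0, 1e-3)] * 2
    w = [0.5, 0.5]

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each observation.
        r0 = []
        for x in xs:
            p0 = w[0] * pdf(x, mu[0], sd[0])
            p1 = w[1] * pdf(x, mu[1], sd[1])
            r0.append(p0 / (p0 + p1))
        # M-step: re-estimate weights, means and standard deviations.
        n0 = sum(r0)
        n1 = len(xs) - n0
        w = [n0 / len(xs), n1 / len(xs)]
        mu = [sum(r * x for r, x in zip(r0, xs)) / n0,
              sum((1.0 - r) * x for r, x in zip(r0, xs)) / n1]
        sd = [max(1e-6, math.sqrt(sum(r * (x - mu[0]) ** 2
                                      for r, x in zip(r0, xs)) / n0)),
              max(1e-6, math.sqrt(sum((1.0 - r) * (x - mu[1]) ** 2
                                      for r, x in zip(r0, xs)) / n1))]
    return mu, sd, w
```

Once fitted, each detected click interval can be assigned to the component with the higher responsibility, giving the buzz/no-buzz proxy for foraging used in the study.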
Abstract:
This paper proposes a modification to the ACI 318-02 equivalent frame method of analysis for reinforced concrete flat plate exterior panels. Two existing code methods were examined: ACI 318 and BS 8110. The derivation of the torsional stiffness of the edge strip as proposed by ACI 318 is examined, and a more accurate estimate of this value is proposed, based on both theoretical analysis and experimental results. A series of 1/3-scale models of flat plate exterior panels was tested. Unique experimental results were obtained by measuring strains in reinforcing bars at approximately 200 selected locations in the plate panel throughout the entire loading history. The measured strains were used to calculate curvature and, hence, bending moments; these were used, along with moments in the columns, to assess the accuracy of the equivalent frame methods. The proposed method leads to a more accurate prediction of the moments in the plate at the column front face, at the panel midspan, and in the edge column.
Abstract:
A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single target ionization of neutral atoms ranging from hydrogen up to and including argon. The code is applicable for the case of both high and low Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved to be very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The codes presented here extend previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers for which the program is designed and others on which it is operable: four by 200 MHz Pro Pentium Linux server; DEC Alpha 21164; four by 400 MHz Pentium 2 Xeon 450 Linux server; IBM SP2; SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64, except on Linux servers 32
Number of processors used: any number
Has the code been vectorized or parallelized?: parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: the code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.
Restrictions on the complexity of the program: both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: times vary according to input data and number of processors. For one processor the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
(C) 2003 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a new packet scheduling scheme called agent-based WFQ to control and maintain QoS parameters in virtual private networks (VPNs) within the confines of adaptive networks. Future networks are expected to be open heterogeneous environments consisting of more than one network operator. In this adaptive environment, agents act on behalf of users or third-party operators to obtain the best service for their clients and maintain those services through modification of the scheduling scheme in the routers and switches spanning the VPN. In agent-based WFQ, an agent on the router monitors the accumulated queuing delay for each service. In order to keep the end-to-end delay within bounds, the weights for services are adjusted dynamically by agents on the routers spanning the VPN. If there is an increase or decrease in the queuing delay of a service, an agent on a downstream router informs the upstream routers to adjust the weights of their queues. This keeps the end-to-end delay of services within the specified bounds and offers better QoS compared to VPNs using static WFQ. This paper also describes the algorithm for agent-based WFQ and presents simulation results. (C) 2003 Elsevier Science Ltd. All rights reserved.
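The feedback loop described above — measure per-service queuing delay, then nudge WFQ weights toward the delay bounds — can be sketched with a simple proportional adjustment rule. The gain parameter and the proportional form are assumptions for illustration; the paper's actual algorithm may differ:

```python
def adjust_weights(weights, measured_delay, delay_bound, gain=0.1):
    """Nudge per-service WFQ weights toward their delay bounds.

    Services whose measured queuing delay exceeds the bound gain weight;
    early services lose it. The vector is renormalised to sum to 1.
    """
    new = {}
    for svc, w in weights.items():
        # Relative violation: positive when the service is running late.
        error = (measured_delay[svc] - delay_bound[svc]) / delay_bound[svc]
        new[svc] = max(1e-6, w * (1.0 + gain * error))
    total = sum(new.values())
    return {svc: w / total for svc, w in new.items()}
```

In the scheme described by the paper, an agent at a downstream router observing a delay change would propagate adjusted weights of this kind to the upstream routers spanning the VPN.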
Abstract:
Data identification is a key task for any Internet Service Provider (ISP) or network administrator. As port fluctuation and encryption become more common in P2P traffic wishing to avoid identification, new strategies must be developed to detect and classify such flows. This paper introduces a new method of separating P2P and standard web traffic that can be applied as part of a data mining process, based on the activity of the hosts on the network. Unlike other research, our method is aimed at classifying individual flows rather than just identifying P2P hosts or ports. Heuristics are analysed and a classification system proposed. The accuracy of the system is then tested using real network traffic from a core Internet router, showing over 99% accuracy in some cases. We expand on this proposed strategy to investigate its application to real-time, early classification problems. New proposals are made, and the results of real-time experiments are compared to those obtained in the data mining research. To the best of our knowledge this is the first research to use host-based flow identification to determine a flow's application within the early stages of the connection.
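The abstract does not detail the host-activity heuristics used, but one well-known example of this style of classification is the ratio of distinct remote ports to distinct remote IPs contacted by a host: P2P peers tend to be reached each on a different port, so the ratio approaches 1, while web clients hit many IPs on a few ports. The sketch below is illustrative only; both the heuristic and the thresholds are assumptions, not the paper's method:

```python
from collections import defaultdict

def classify_hosts(flows, port_ip_ratio=0.9, min_peers=10):
    """Toy host-activity classifier over (src, dst_ip, dst_port) flows.

    A host contacting many distinct IPs, each on a distinct port, is
    labelled "p2p"; everything else is labelled "web".
    """
    by_host = defaultdict(lambda: (set(), set()))
    for src, dst_ip, dst_port in flows:
        ips, ports = by_host[src]
        ips.add(dst_ip)
        ports.add(dst_port)
    labels = {}
    for host, (ips, ports) in by_host.items():
        if len(ips) >= min_peers and len(ports) / len(ips) >= port_ip_ratio:
            labels[host] = "p2p"
        else:
            labels[host] = "web"
    return labels
```

A per-host label like this could then seed per-flow classification, as in the paper's approach of labelling individual flows rather than hosts alone.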