927 results for "Naming speed"
Abstract:
The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. 
We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
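The server-selection logic the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names echo the bprobe/cprobe tools, but the interface and the bandwidth figures are invented for the example.

```python
# Hypothetical sketch of application-level server selection from
# probe estimates. bprobe supplies the uncongested path capacity,
# cprobe the competing cross-traffic; their difference is the
# available bandwidth used to rank candidate servers.

def available_bandwidth(capacity_bps: float, competing_bps: float) -> float:
    """Available bandwidth = uncongested capacity minus competing traffic."""
    return max(capacity_bps - competing_bps, 0.0)

def utilization(capacity_bps: float, competing_bps: float) -> float:
    """Fraction of the bottleneck consumed by competing traffic."""
    return min(competing_bps / capacity_bps, 1.0)

def pick_server(estimates):
    # estimates: {server: (bprobe_capacity_bps, cprobe_cross_traffic_bps)}
    return max(estimates, key=lambda s: available_bandwidth(*estimates[s]))

servers = {
    "mirror-a": (10e6, 8e6),    # fast bottleneck, heavily loaded
    "mirror-b": (1.5e6, 0.2e6), # slower link, but nearly idle
}
print(pick_server(servers))  # mirror-a: 2 Mb/s free vs. 1.3 Mb/s
```

Note that the raw capacity alone would also pick mirror-a here, but under heavier cross traffic the ranking can invert, which is exactly why the abstract argues for measuring congestion as well as capacity.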
Abstract:
This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control.
These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
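The speed-rescaling property attributed to the VITE model above can be illustrated numerically. The sketch below assumes the commonly cited form of the VITE equations (difference vector V integrating toward the target/position mismatch, and a volitional GO signal G gating the position command); the parameter values are illustrative, not taken from this abstract.

```python
# Minimal Euler-integration sketch of a VITE-style circuit:
#   dV/dt = a * (-V + T - P)     (difference vector tracks outstanding error)
#   dP/dt = G * max(V, 0)        (GO signal gates outflow movement speed)
# T is the target position command, P the present-position command.

def vite(T=1.0, G=2.0, a=30.0, dt=1e-3, steps=10000):
    V, P = 0.0, 0.0
    for _ in range(steps):
        V += dt * a * (-V + T - P)   # integrate the target/position mismatch
        P += dt * G * max(V, 0.0)    # rectified outflow, scaled by GO signal
    return P

# A larger GO signal speeds the movement but leaves the endpoint invariant,
# which is the speed-rescaling invariance discussed in the abstract.
print(round(vite(G=1.0), 3), round(vite(G=4.0), 3))  # both approach 1.0
```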
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using pulses with a pulse width of 3 ps from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work is divided into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics.
The principal aim was to identify the optimum operation conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10x 10 Gb/s or 4x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1 Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25 Gb/s in the same module as today's 100 Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints, and recommends a set of designs specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65 nm and 28 nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above and realize 400 Gb/s to 1 Tb/s transceivers.
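The throughput gain from PAM-4 mentioned above comes from carrying two bits per symbol instead of one. The sketch below is illustrative only (the mapping shown is a standard Gray code, not necessarily the one used in the thesis):

```python
# Illustrative Gray-coded PAM-4 mapper: two bits per symbol onto four
# amplitude levels, doubling throughput per channel relative to binary
# NRZ at the same symbol rate. Adjacent levels differ in only one bit,
# which limits the bit errors caused by a single-level slicer error.

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit sequence to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)  # [-3, -1, 1, 3]
```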
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
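The PS-QPSK format mentioned above transmits a QPSK symbol on one of the two fibre polarisations at a time, giving an eight-point alphabet in four dimensions. A small sketch, assuming unit-amplitude levels as an illustrative normalisation (axis labels and scaling are not taken from the thesis):

```python
from itertools import product

# Sketch of the 8-point PS-QPSK alphabet: a QPSK symbol on one
# polarisation with the other polarisation dark. Coordinates are
# (ExI, ExQ, EyI, EyQ) per symbol.

def ps_qpsk_constellation():
    points = []
    for i, q in product((-1, 1), repeat=2):
        points.append((i, q, 0, 0))  # QPSK on the x polarisation
        points.append((0, 0, i, q))  # QPSK on the y polarisation
    return points

C = ps_qpsk_constellation()
print(len(C))  # 8 points -> 3 bits per symbol

# Every symbol carries the same energy, so the format has a constant
# envelope per symbol; the minimum squared distance equals twice the
# symbol energy, the property behind its power efficiency.
energy = sum(c * c for c in C[0])
dmin2 = min(sum((a - b) ** 2 for a, b in zip(p, r))
            for i, p in enumerate(C) for r in C[i + 1:])
print(energy, dmin2)  # 2 4
```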
Abstract:
The recognition that early breast cancer is a spectrum of diseases, each requiring a specific systemic therapy, guided the 13th St Gallen International Breast Cancer Consensus Conference [1]. The meeting assembled 3600 participants from nearly 90 countries worldwide. Educational content centred on the primary and multidisciplinary treatment approach to early breast cancer. The meeting culminated on the final day with the St Gallen Breast Cancer Treatment Consensus, established by 40-50 of the world's most experienced opinion leaders in the field of breast cancer treatment. The major issue that arose during the consensus conference was the increasing gap between what is theoretically feasible in patient risk stratification and treatment, and daily practice management. We need to find new paths for bringing innovations into clinical research and daily practice. To ensure that continued innovation meets the needs of patients, the therapeutic alliance between patients and academic-led research should be extended to include relevant pharmaceutical companies and drug regulators, with a unique effort to bring innovation into clinical practice. We need to bring together major players from the world of breast cancer research to map out a coordinated strategy on an international scale, to address disease fragmentation, to share financial resources, and to integrate scientific data. The final goal is to improve access to an affordable, best standard of care for all patients in every country.
Abstract:
Simultaneous measurements of high-altitude optical emissions and magnetic fields produced by sprite-associated lightning discharges enable a close examination of the link between low-altitude lightning processes and high-altitude sprite processes. We report results of the coordinated analysis of high-speed sprite video and wideband magnetic field measurements recorded simultaneously at Yucca Ridge Field Station and Duke University. From June to August 2005, sprites were detected following 67 lightning strokes, all of which had positive polarity. Our data showed that 46% of the 83 discrete sprite events in these sequences initiated more than 10 ms after the lightning return stroke, and we focus on these delayed sprites in this work. All delayed sprites were preceded by continuing current moments that averaged at least 11 kA km between the return stroke and sprites. The total lightning charge moment change at sprite initiation varied from 600 to 18,600 C km, and the minimum value to initiate long-delayed sprites ranged from 600 C km for a 15 ms delay to 2000 C km for delays of more than 120 ms. We numerically simulated electric fields at altitudes above these lightning discharges and found that the maximum normalized electric fields are essentially the same as the fields that produce short-delayed sprites. Both estimated and simulation-predicted sprite initiation altitudes indicate that long-delayed sprites generally initiate around 5 km lower than short-delayed sprites. The simulation results also reveal that slow (5-20 ms) intensifications in continuing current can play a major role in initiating delayed sprites. Copyright 2008 by the American Geophysical Union.
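The figures in the abstract can be tied together with a back-of-envelope calculation: charge moment change is the time integral of the current moment, so a sustained continuing current moment accumulates charge moment change in proportion to the sprite delay. The routine below is a sketch of that arithmetic only, not the authors' field model.

```python
# Charge moment change accumulated by a constant continuing current
# moment: 1 kA km sustained for 1 s contributes 1000 C km.

def charge_moment_change(current_moment_kA_km: float, duration_ms: float) -> float:
    """Returns the accumulated charge moment change in C km."""
    return current_moment_kA_km * 1000.0 * (duration_ms / 1000.0)

# The abstract's average continuing current moment of 11 kA km, held for
# a 120 ms delay, contributes about 1320 C km on top of the return-stroke
# impulse, comparable to the ~2000 C km threshold reported for the
# longest-delayed sprites.
print(charge_moment_change(11, 120))  # ~1320 C km
```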
Abstract:
The naming impairments in Alzheimer's disease (AD) have been attributed to a variety of cognitive processing deficits, including impairments in semantic memory, visual perception, and lexical access. To further understand the underlying biological basis of the naming failures in AD, the present investigation examined the relationship of various classes of naming errors to regional brain measures of cerebral glucose metabolism, as measured with 18F-fluoro-2-deoxyglucose (FDG) and positron emission tomography (PET). Errors committed on a visual naming test were categorized according to a cognitive processing schema and then examined in relationship to metabolism within specific brain regions. The results revealed an association of semantic errors with glucose metabolism in the frontal and temporal regions. Language access errors, such as circumlocutions and word-blocking nonresponses, were associated with decreased metabolism in areas within the left hemisphere. Visuoperceptive errors were related to right inferior parietal metabolic function. The findings suggest that specific brain areas mediate the perceptual, semantic, and lexical processing demands of visual naming and that visual naming problems in dementia are related to dysfunction in specific neural circuits.
Abstract:
The issues surrounding collision of projectiles with structures have gained a high profile since the events of 11th September 2001. In such collision problems, the projectile penetrates the structure, so that tracking the interface between one material and another becomes very complex, especially if the projectile is essentially a vessel containing a fluid, e.g. a fuel load. The subsequent combustion, heat transfer, melting and re-solidification processes in the structure render this a very challenging computational modelling problem. The conventional approaches to the analysis of collision processes involve a Lagrangian-Lagrangian contact-driven methodology. This approach suffers from a number of disadvantages in its implementation, most of which are associated with the challenges of the contact analysis component of the calculations. This paper describes a 'two fluid' approach to high-speed impact between solid structures, where the objective is to overcome the problems of penetration and re-meshing. The work has been carried out using the finite volume, unstructured mesh multi-physics code PHYSICA+, where the three-dimensional fluid flow, free surface, heat transfer, combustion, melting and re-solidification algorithms are approximated using cell-centred finite volume, unstructured mesh techniques on a collocated mesh. The basic procedure is illustrated for two cases of Newtonian and non-Newtonian flow to test several of its component capabilities in the analysis of problems of industrial interest.
Abstract:
Particle degradation can be a significant issue in particulate solids handling and processing, particularly in pneumatic conveying systems, in which high-speed impact is usually the main contributory factor leading to changes in particle size distribution (comparing the material to its virgin state). However, other factors may strongly influence particle breakage as well, such as particle concentration, bend geometry, and hardness of the pipe material. Because of such complex influences, it is often very difficult to predict particle degradation accurately and rapidly for industrial processes. In this article, a general method for evaluating particle degradation due to high-speed impacts is described, in which the breakage properties of particles are quantified using what are known as "breakage matrices". Rather than a pilot-size test facility, a bench-scale degradation tester has been used, and some advantages of the bench-scale tester are briefly explored. Impact experiments on adipic acid have been carried out for a range of impact velocities in four particle size categories, and particle breakage matrices of adipic acid have been established for these impact velocities. The experimental results show that the "breakage matrix" approach is an effective and easy method for evaluating particle degradation due to high-speed impacts. The possibility of applying the "breakage matrices" approach to a pneumatic conveying system is also explored through a simulation example.
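The breakage-matrix idea above amounts to a matrix-vector product: each column of the matrix records how one feed size class redistributes over the product size classes after an impact. A minimal sketch, with three size classes and invented matrix values (the abstract's adipic acid data are not reproduced here):

```python
import numpy as np

# Illustrative breakage matrix for one impact velocity: column j gives
# the mass fractions of feed size class j (coarse, medium, fine) that
# report to each product size class after a single impact. Columns sum
# to 1, so mass is conserved.
B = np.array([
    [0.70, 0.00, 0.00],   # coarse particles surviving as coarse
    [0.20, 0.80, 0.00],   # broken into the medium class
    [0.10, 0.20, 1.00],   # fines generated (fines stay fine)
])

feed = np.array([0.5, 0.3, 0.2])   # feed mass fractions by size class
product = B @ feed                 # predicted post-impact distribution
print(product)
assert np.isclose(product.sum(), 1.0)  # mass conserved
```

Chaining several such products (one matrix per bend or per impact velocity) is the natural extension to a whole pneumatic conveying line, which is the simulation idea the abstract mentions.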