994 results for Speed Limit Signs.


Relevance:

20.00%

Publisher:

Abstract:

Hill, Joe M., Lloyd, Noel G., Pearson, Jane M., 'Limit cycles of a predator-prey model with intratrophic predation', Journal of Mathematical Analysis and Applications Volume 349, Issue 2, 15 January 2009, Pages 544-555

Woods, Timothy, The Poetics of the Limit (New York: Palgrave Macmillan, 2003) RAE2008

New, Elizabeth, 'Signs of community or marks of the exclusive? Parish and guild seals in later medieval England', In: The Parish in Late Medieval England, (Lincs: Shaun Tyas) pp.112-128, 2006 RAE2008

Gough, John, (2004) 'Quantum Flows as Markovian Limit of Emission, Absorption and Scattering Interactions', Communications in Mathematical Physics 254 pp.498-512 RAE2008

Gough, John, (2004) 'Holevo-Ordering and the Continuous-Time Limit for Open Floquet Dynamics', Letters in Mathematical Physics 67(3) pp.207-221 RAE2008

This is an author-created, un-copyedited version of an article accepted for publication in Acta Physica Polonica A. The Version of Record is available online at http://przyrbwn.icm.edu.pl/APP/PDF/118/a118z2p31.pdf

In the Spallation Neutron Source (SNS) facility at Oak Ridge National Laboratory (ORNL), the deposition of a high-energy proton beam into the liquid mercury target forms bubbles whose asymmetric collapse causes Cavitation Damage Erosion (CDE) to the container walls, thereby reducing their usable lifetime. One proposed solution for mitigation of this damage is to inject a population of microbubbles into the mercury, yielding a compliant and attenuative medium that will reduce the resulting cavitation damage. This potential solution presents the task of creating a diagnostic tool to monitor the bubble population in the mercury flow in order to correlate void fraction and damage. Details of an acoustic waveguide for the eventual measurement of two-phase mercury-helium flow void fraction are discussed. The assembly’s waveguide is a vertically oriented stainless steel cylinder with 5.08cm ID, 1.27cm wall thickness and 40cm length. For water experiments, a 2.54cm thick stainless steel plate at the bottom supports the fluid, provides an acoustically rigid boundary condition, and is the mounting point for a hydrophone. A port near the bottom is the inlet for the fluid of interest. A spillover reservoir welded to the upper portion of the main tube allows for a flow-through design, yielding a pressure-release top boundary condition for the waveguide. A cover on the reservoir supports an electrodynamic shaker that is driven by linear frequency sweeps to excite the tube. The hydrophone captures the frequency response of the waveguide. The sound speed of the flowing medium is calculated, assuming a linear dependence of axial mode number on modal frequency (plane wave). Assuming that the medium has an effective-mixture sound speed, and that it contains bubbles which are much smaller than the resonance radii at the highest frequency of interest (Wood’s limit), the void fraction of the flow is calculated.
Results for water and bubbly water of varying void fraction are presented, and serve to demonstrate the accuracy and precision of the apparatus.
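Since the void-fraction estimate above rests on Wood's effective-medium relation, a minimal numeric sketch may help. The air-water property values, function names, and bisection inversion below are illustrative assumptions for a water experiment, not taken from the apparatus described.

```python
def wood_speed(beta, rho_l=998.0, c_l=1482.0, rho_g=1.2, c_g=343.0):
    """Effective mixture sound speed from Wood's equation (low-frequency limit).

    beta is the gas void fraction; defaults are rough air-water values.
    """
    rho_m = beta * rho_g + (1.0 - beta) * rho_l          # mixture density
    kappa = beta / (rho_g * c_g**2) + (1.0 - beta) / (rho_l * c_l**2)  # compressibility
    return (rho_m * kappa) ** -0.5

def void_fraction(c_measured, lo=1e-8, hi=0.5):
    """Invert Wood's equation for void fraction by bisection.

    Valid on the low-beta branch, where sound speed decreases with beta.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if wood_speed(mid) > c_measured:
            lo = mid          # speed too high -> need more gas
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Even a 1% void fraction drops the mixture sound speed from roughly 1482 m/s to near 100 m/s, which is what makes the modal-frequency shift of the waveguide such a sensitive void-fraction diagnostic.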

The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. 
We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
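The available-bandwidth arithmetic that the probes enable can be sketched as follows. The function names and the packet-pair-style capacity estimate are illustrative assumptions, not the actual bprobe/cprobe implementations.

```python
def bottleneck_capacity(packet_bytes, gap_s):
    """Packet-pair-style estimate: bottleneck capacity (bits/s) is roughly
    packet size divided by the inter-arrival gap imposed by the bottleneck."""
    return 8 * packet_bytes / gap_s

def available_bandwidth(capacity_bps, cross_traffic_bps):
    """Available bandwidth = uncongested capacity (a bprobe-style estimate)
    minus competing traffic (a cprobe-style estimate)."""
    return max(capacity_bps - cross_traffic_bps, 0.0)

def utilization(capacity_bps, cross_traffic_bps):
    """Fraction of the bottleneck consumed by competing traffic."""
    return min(cross_traffic_bps / capacity_bps, 1.0)

def pick_server(estimates):
    """Dynamic server selection: choose the server with the highest
    estimated available bandwidth.

    estimates maps server name -> (capacity_bps, cross_traffic_bps).
    """
    return max(estimates, key=lambda s: available_bandwidth(*estimates[s]))
```

For instance, a 10 Mb/s path carrying 8 Mb/s of cross traffic loses to a 5 Mb/s path carrying 1 Mb/s, which is exactly the kind of choice raw capacity alone would get wrong.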

This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control.
These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
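The core of the VITE circuit described above — a difference vector that is integrated to the endpoint, with outflow gated by a volitional GO signal — can be sketched in a few lines of Euler integration. Parameter values and names here are illustrative assumptions, not the article's equations or fitted constants.

```python
def vite_trajectory(target, p0=0.0, go=10.0, gamma=30.0, dt=0.001, steps=2000):
    """Toy one-dimensional VITE-style trajectory generator.

    v: difference vector, relaxes toward target - position;
    p: present position, driven by the GO-gated, rectified difference vector.
    Returns the position trace.
    """
    v, p, path = 0.0, p0, []
    for _ in range(steps):
        v += dt * gamma * (-v + target - p)   # integrate the difference vector
        p += dt * go * max(v, 0.0)            # GO signal gates outflow
        path.append(p)
    return path
```

Raising the GO signal rescales movement speed while the trajectory still terminates at the same target, a toy illustration of the speed-rescaling invariance of positional control discussed above.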

This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
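The match-based category commitment that ART 2-A makes efficient can be sketched roughly as follows. This is a toy cosine-similarity clusterer in the spirit of the algorithm, with hypothetical names and a simplified choice rule, not the published ART 2-A equations.

```python
import math

def _norm(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

class Art2ASketch:
    """Toy ART-style clusterer: rho is the vigilance in [0, 1];
    beta=1.0 corresponds to the fast-learn limit, smaller beta to
    slower recoding of committed categories."""

    def __init__(self, rho=0.9, beta=1.0):
        self.rho, self.beta, self.protos = rho, beta, []

    def present(self, x):
        """Present one input; return the index of the winning category."""
        x = _norm(x)
        best, sim = None, -1.0
        for j, z in enumerate(self.protos):      # choice: best committed match
            s = sum(a * b for a, b in zip(x, z))
            if s > sim:
                best, sim = j, s
        if best is None or sim < self.rho:       # vigilance failed: new node
            self.protos.append(x)
            return len(self.protos) - 1
        z = self.protos[best]                    # learning: move prototype
        self.protos[best] = _norm([(1 - self.beta) * a + self.beta * b
                                   for a, b in zip(z, x)])
        return best
```

Raising the vigilance rho forces finer categories; lowering beta below 1 gives the fast-commit/slow-recode behaviour the abstract associates with intermediate learning rates.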

Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100Gb.s-1) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using pulses with a pulse width of 3ps from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work is divided into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweepout dynamics.
The principal aim was to identify the optimum operation conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.

A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the costs of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This data latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue, highway traffic coordination, etc., where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has only considered parts of this problem in isolation, and has not properly considered the problems of moving through a constrained environment, of discovering changes to that environment during the repair, or of network quality after the restoration. In this thesis, we firstly consider a base problem in which we assume the exploration tasks have already been completed, and so our aim is to optimise our use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, and this creates a dynamic problem where damage must be discovered. Therefore, we extend to the dynamic problem, in which the network repair problem considers both exploration and restoration. We then add a hop-count constraint for network quality, in which the desired locations must be able to talk to a sink within a hop-count limit after the network is restored.
For each new variant of the network repair problem, we have proposed different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions based on simulation, assessing the quality of solutions (node cost, movement cost, computation time, and total restoration time) by varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different speeds of movement for the repairing agent have a significant impact on performance and must be taken into account when selecting the algorithm. In particular, the node-based approaches perform best on node cost, and the path-based approaches perform best on mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent. For a medium-speed agent, the total restoration times of the node-based and path-based approaches are roughly equal.
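The hop-count constraint described above reduces to a multi-source breadth-first search from the sinks over the restored topology. The sketch below is a generic illustration of that check, with hypothetical names, not one of the thesis's algorithms.

```python
from collections import deque

def hops_from_sinks(adj, sinks):
    """Multi-source BFS: minimum hop count from every reachable node
    to its nearest sink. adj maps node -> list of neighbours."""
    dist = {s: 0 for s in sinks}
    q = deque(sinks)
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def meets_hop_constraint(adj, sinks, required, limit):
    """True iff every required location can reach some sink
    within `limit` hops over the restored network."""
    dist = hops_from_sinks(adj, sinks)
    return all(n in dist and dist[n] <= limit for n in required)
```

A repair plan that minimises relay count but fails this check would need either extra relays on shorter paths or an additional sink, which is the tension between node cost and network quality discussed above.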

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred meters) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today’s technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today’s 100Gb/s transceiver. Pushing the bit rate on such links beyond today’s commercially available 100Gb/s/fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus upon reducing power consumption and chip area. The techniques used were pre-emphasis in rising and falling edges of the signal and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4 level pulse amplitude modulation). Such modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above to realize 400Gb/s to 1Tb/s transceivers.
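The PAM-4 format mentioned above can be illustrated with a toy Gray-coded modulator and slicer. This is a didactic sketch of the two-bits-per-symbol mapping, not a model of the transceiver circuits in the thesis.

```python
# Gray-coded PAM-4: two bits per symbol on four amplitude levels.
# Adjacent levels differ by one bit, so a single slicer error
# corrupts only one bit of the pair.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
LEVEL_TO_BITS = {v: k for k, v in GRAY_PAM4.items()}

def pam4_modulate(bits):
    """Map an even-length bit sequence onto PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(levels):
    """Slice (possibly noisy) levels to the nearest nominal level
    and recover the bit pairs."""
    out = []
    for lvl in levels:
        snapped = min((-3, -1, 1, 3), key=lambda s: abs(s - lvl))
        out.extend(LEVEL_TO_BITS[snapped])
    return out
```

Each symbol now carries two bits instead of one, doubling the throughput per channel at a given symbol rate, at the cost of a reduced eye opening between adjacent levels, which is why the bandwidth-extension and pre-emphasis techniques above matter.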

There are finitely many GIT quotients of

The recognition that early breast cancer is a spectrum of diseases, each requiring a specific systemic therapy, guided the 13th St Gallen International Breast Cancer Consensus Conference [1]. The meeting assembled 3600 participants from nearly 90 countries worldwide. Educational content was centred on the primary and multidisciplinary treatment approach for early breast cancer. The meeting culminated on the final day with the St Gallen Breast Cancer Treatment Consensus, established by 40-50 of the world's most experienced opinion leaders in the field of breast cancer treatment. The major issue that arose during the consensus conference was the increasing gap between what is theoretically feasible in patient risk stratification and treatment, and daily practice management. We need to find new paths to bring innovations into clinical research and daily practice. To ensure that continued innovation meets the needs of patients, the therapeutic alliance between patients and academic-led research should be extended to include relevant pharmaceutical companies and drug regulators, with a unified effort to bring innovation into clinical practice. We need to bring together major players from the world of breast cancer research to map out a coordinated strategy on an international scale, to address the disease fragmentation, to share financial resources, and to integrate scientific data. The final goal will be to improve access to an affordable, best standard of care for all patients in each country.