933 results for Speed Bumps.
Abstract:
The toughness of polypropylene (PP)/ethylene-propylene-diene monomer rubber (EPDM) blends with various EPDM contents was studied as a function of tensile speed. Toughness was determined from the tensile fracture energy of side-edge-notched samples. A sharp brittle-tough transition was observed in the fracture energy versus interparticle distance (ID) curves when the crosshead speed was below 102.4 mm/min. The brittle-ductile transition of a PP/EPDM blend could be brought about either by reducing ID or by decreasing the tensile speed. The correlation between the critical interparticle distance and the tensile deformation rate was compared with that between the critical interparticle distance and temperature for PP/EPDM blends.
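To make the ID criterion concrete, here is a minimal sketch of Wu's widely used geometric estimate of the interparticle (matrix ligament) distance for a cubic lattice of monodisperse spherical rubber particles; the particle size and rubber fraction below are illustrative, not values taken from the study.

```python
import numpy as np

def interparticle_distance(d, phi):
    """Wu's geometric estimate of the matrix ligament thickness (ID),
    assuming a cubic lattice of monodisperse spherical rubber particles:
    ID = d * ((pi / (6 * phi))**(1/3) - 1).
    d: number-average particle diameter; phi: rubber volume fraction."""
    return d * ((np.pi / (6.0 * phi)) ** (1.0 / 3.0) - 1.0)

# Illustrative values (not from the study): 0.5 um particles, 20 vol% EPDM
print(interparticle_distance(0.5, 0.20))   # ~0.19 um
```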
Abstract:
Recent investigations show that normalized radar cross sections measured by C-band microwave sensors decrease, rather than increase, under high-wind conditions at certain incidence angles, in contrast to their behaviour at low to moderate wind speeds. This creates ambiguities in high wind speed retrievals from synthetic aperture radar (SAR). In the present work, four geophysical model functions (GMFs) are studied: the high-wind C-band model 4 (CMOD4HW), C-band model 5 (CMOD5), the high-wind vertically polarized GMF (HWGMF_VV), and the high-wind horizontally polarized GMF (HWGMF_HH). Our focus is on model behaviour with respect to wind speed ambiguities. We show that all of these GMFs except CMOD4HW exhibit the wind speed ambiguity problem. To account for this problem in high wind speed retrievals from SAR, we focus on hurricanes and propose a method that removes the speed ambiguity using the dominant hurricane wind structure.
Abstract:
Under strong ocean surface wind conditions, the normalized radar cross section measured by synthetic aperture radar (SAR) is damped at certain incidence angles compared with the signal under moderate winds. This causes a wind speed ambiguity in wind retrievals from SAR, because two solutions may exist for each backscattered signal. This study shows that the problem is ubiquitous in images acquired by operational spaceborne SAR sensors, and that it is most severe at near range and for range-travelling winds. To remove the ambiguity, a method was developed based on the characteristics of the hurricane wind structure. A SAR image of Hurricane Rita (2005) is analysed to demonstrate the wind speed ambiguity problem and the improvement the method brings to wind speed retrievals. Our conclusions suggest that a speed ambiguity removal algorithm must be used for wind retrievals from SAR in intense storms and hurricanes.
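As an illustration of the two-solution ambiguity and its removal, the sketch below uses a toy, non-monotonic GMF (not CMOD5 or the HWGMFs) in which a single backscatter value maps to two wind speeds; the disambiguation step, keeping the branch closest to a first-guess wind, is a simplified stand-in for the hurricane-structure method described above.

```python
import numpy as np
from scipy.optimize import brentq

def toy_gmf(v):
    """Toy GMF: backscatter rises with wind speed, peaks, then falls
    at very high winds (illustrative shape only, not CMOD5)."""
    return 0.8 * v * np.exp(-v / 35.0)

def candidate_speeds(sigma0, v_peak=35.0, v_max=80.0):
    """All wind speeds consistent with one observed backscatter value."""
    roots = []
    for lo, hi in [(0.1, v_peak), (v_peak, v_max)]:
        f = lambda v: toy_gmf(v) - sigma0
        if f(lo) * f(hi) < 0:                # sign change -> a root exists
            roots.append(brentq(f, lo, hi))
    return roots

speeds = candidate_speeds(toy_gmf(20.0))    # two candidates: ~20 and ~56 m/s
# Simplified disambiguation: keep the branch closest to a first-guess
# wind field (stand-in for the hurricane-structure constraint):
first_guess = 25.0
best = min(speeds, key=lambda v: abs(v - first_guess))
```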
Abstract:
Gridded sound speed data were calculated with Del Grosso's formulation from temperature and salinity data along the PN section in the East China Sea, covering 92 cruises between February 1978 and October 2000. The vertical gradients of sound speed are related mainly to seasonal variations, while the strong horizontal gradients are related mainly to the Kuroshio and to upwelling. The standard deviations show that large variations of sound speed exist in the upper layer and in the slope zone. Empirical orthogonal function analysis shows that the contributions of surface heating and of the Kuroshio to the sound speed variance are nearly equal.
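For a computable reference point, the sketch below uses Medwin's simpler approximation in place of the Del Grosso formulation actually used in the study; the example inputs are illustrative.

```python
def sound_speed_medwin(T, S, z):
    """Medwin's (1975) approximation to seawater sound speed (m/s),
    a simpler stand-in for the Del Grosso formulation used in the study.
    T: temperature (degC), S: salinity (psu), z: depth (m)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Illustrative upper-layer values for the PN section region
print(sound_speed_medwin(25.0, 34.5, 0.0))   # ~1534 m/s
```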
Abstract:
Supercritical fluid extraction (SFE) was used to extract homoisoflavonoids from Ophiopogon japonicus (Thunb.) Ker-Gawler. The parameters were optimized with an L9(3^4) orthogonal test covering pressure, temperature, dynamic extraction time, and the amount of modifier. The process was then scaled up 100-fold on a preparative SFE system under the optimized conditions of 25 MPa, 55 degrees C, 4.0 h, and 25% methanol as modifier. The crude extracts were separated and purified by high-speed counter-current chromatography (HSCCC) with a two-phase solvent system composed of n-hexane/ethyl acetate/methanol/ACN/water (1.8:1.0:1.0:1.2:1.0, v/v). Three homoisoflavonoids, methylophiopogonanone A, 6-aldehydo-isoophiopogonone A, and 6-formyl-isoophiopogonanone A, were isolated and purified in a single step, and the collected fractions were analyzed by HPLC. In each run, 140 mg of crude extract was separated, yielding 15.3 mg of methylophiopogonanone A (96.9% purity), 4.1 mg of 6-aldehydo-isoophiopogonone A (98.3% purity), and 13.5 mg of 6-formyl-isoophiopogonanone A (97.3% purity). The chemical structures of the three homoisoflavonoids were identified by ESI-MS and NMR analysis.
Abstract:
A semi-preparative high-speed counter-current chromatography (HSCCC) technique was successfully applied to the one-step separation of two bioactive flavonoids, liquiritigenin and isoliquiritigenin, from a crude extract of Glycyrrhiza uralensis Fisch. The HSCCC was performed with a two-phase solvent system composed of n-hexane-ethyl acetate-methanol-acetonitrile-water (2:2:1:0.6:2, v/v). The yields of liquiritigenin (98.9% purity) and isoliquiritigenin (98.3% purity) were 0.52% and 0.32%, respectively. The chemical structures of the purified liquiritigenin and isoliquiritigenin were identified by electrospray ionization MS (ESI-MS) and NMR analysis.
Abstract:
Using a short capillary column packed with porous and non-porous ODS stationary phases, high-speed separation of six neutral aromatic compounds within 36 s was achieved by capillary electrochromatography (CEC). Good reproducibility of the migration times was observed, with RSDs below 1%. Both the linear velocity of the EOF and the current increased linearly with the applied voltage, indicating that thermal effects from Joule heating were small. However, the capacity factors of the solutes decreased as the applied voltage increased. The cause is that a commercial CE instrument needs several seconds to ramp the voltage from 0 to the set value; in high-speed CEC this ramp contributes proportionally more to the migration times of early-eluting compounds than to those of late-eluting ones, and the effect grows with the applied voltage. A linear relationship between the logarithm of the capacity factor and the carbon number of homologous compounds was observed; its positive slope indicates that solute hydrophobicity is one of the main contributors to retention in high-speed CEC on ODS stationary phases.
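The ramp effect can be quantified with a simple correction. Assuming a linear 0-to-V ramp, the field-time integral accumulated during the ramp equals that of half the ramp duration at full voltage, so the ramp adds a fixed offset of t_ramp/2 that weighs proportionally more on early-eluting peaks. A minimal sketch under that assumption (ramp duration and peak times are illustrative):

```python
def ramp_corrected_time(t_obs, t_ramp):
    """Correct an observed migration time for a linear 0-to-V ramp:
    the field-time integral during the ramp equals t_ramp/2 at full
    voltage, so the ramp adds a fixed offset of t_ramp/2."""
    return t_obs - t_ramp / 2.0

# A 4 s ramp (illustrative) shifts a 10 s peak by 20% of its
# migration time, but a 36 s peak by only ~6%:
for t_obs in (10.0, 36.0):
    print(t_obs, "->", ramp_corrected_time(t_obs, 4.0))
```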
Abstract:
As multiprocessor systems scale upward in size, two important aspects will generally get worse rather than better: (1) interprocessor communication latency will increase, and (2) the probability that some component in the system will fail will increase. These problems can prevent us from realizing the potential benefits of large-scale multiprocessing. In this report we consider the problem of designing networks that simultaneously minimize communication latency and maximize fault tolerance. Using a synergy of techniques, including connection topologies, routing protocols, signalling techniques, and packaging technologies, we assemble integrated, system-level solutions to this network design problem.
Abstract:
A new mesoporous sphere-like SBA-15 silica was synthesized and evaluated for its suitability as a stationary phase for CEC. The unique and attractive properties of the silica particles are their submicrometer size of 400 nm and highly ordered cylindrical mesopores with a uniform pore size of 12 nm running along the same direction. The bare submicrometer silica particles were successfully employed for normal-phase electrochromatographic separation of polar compounds with high efficiency (e.g., 210 000 for thiourea), consistent with their submicrometer size. The Van Deemter plot showed a hindrance to mass transfer caused by the pore structure; the lowest plate height of 2.0 mu m was obtained at a linear velocity of 1.1 mm/s. On the other hand, because relatively high linear velocities (e.g., 4.0 mm/s) can be generated, high-speed CEC separation of neutral compounds, anilines, and basic pharmaceuticals on C-18-modified SBA-15 silica was achieved within 36, 60, and 34 s, respectively.
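The plate-height behaviour follows the familiar Van Deemter form H(u) = A + B/u + C·u, minimized at u_opt = sqrt(B/C). The sketch below uses assumed coefficients chosen only so the minimum reproduces the reported H_min ≈ 2.0 μm at u ≈ 1.1 mm/s; they are not fitted values from the paper.

```python
import numpy as np

# Van Deemter: H(u) = A + B/u + C*u, minimized at u_opt = sqrt(B/C).
# A, B, C are assumptions chosen only so the minimum reproduces the
# reported H_min ~ 2.0 um at u ~ 1.1 mm/s; they are not fitted values.
A, B, C = 0.8e-6, 6.6e-10, 5.45e-4   # m, m^2/s, s

def plate_height(u):
    """Plate height H (m) at linear velocity u (m/s)."""
    return A + B / u + C * u

u_opt = np.sqrt(B / C)                # ~1.1e-3 m/s
print(u_opt, plate_height(u_opt))     # ~1.1 mm/s, ~2.0 um
print(plate_height(4.0e-3))           # ~3.1 um at the 4 mm/s high-speed run
```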
Abstract:
Timing data are infrequently reported in the aphasiological literature, and time taken is only a minor factor, where it is considered at all, in existing aphasia assessments. This is not surprising, because reaction times are difficult to obtain manually, but it is a pity, because speed data should be indispensable in assessing the severity of language processing disorders and in evaluating the effects of treatment. This paper argues that reporting accuracy data without discussing speed of performance gives an incomplete and potentially misleading picture of any cognitive function. Moreover, in deciding how to treat, when to continue treatment and when to cease therapy, clinicians should have regard to both parameters: speed and accuracy of performance. Crerar, Ellis and Dean (1996) reported a study in which the written sentence comprehension of 14 long-term agrammatic subjects was assessed and treated using a computer-based microworld. Statistically significant and durable treatment effects were obtained after a short amount of focused therapy. Only accuracy data were reported in that (already long) paper, and interestingly, although it has been a widely read study, neither referees nor subsequent readers seemed to miss "the other side of the coin": how these participants compared with controls in speed of processing, and what effect treatment had on speed. This paper considers both aspects of the data and presents a tentative way of combining treatment effects on accuracy and speed of performance in a single indicator. Looking at rehabilitation this way gives a rather different perspective on which individuals benefited most from the intervention. It also demonstrates that while some subjects are capable of using metalinguistic skills to achieve normal accuracy scores even many years post-stroke, there is little prospect of reducing the time taken to within the normal range. Without considering speed of processing, the extent of this residual functional impairment can be overlooked.
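One common way to fold speed and accuracy into a single indicator, not necessarily the tentative measure adopted in this paper, is the inverse efficiency score: mean correct reaction time divided by proportion correct. A minimal sketch with illustrative numbers:

```python
def inverse_efficiency(mean_correct_rt, proportion_correct):
    """Inverse efficiency score: mean correct reaction time divided by
    proportion correct. Lower is better; a slow-but-accurate performer
    no longer looks equivalent to a fast-and-accurate one."""
    return mean_correct_rt / proportion_correct

# Illustrative numbers: identical accuracy, very different efficiency
print(inverse_efficiency(2.4, 0.90))   # ~2.7 s  (control-like)
print(inverse_efficiency(9.8, 0.90))   # ~10.9 s (residual slowing)
```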
Abstract:
The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
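bprobe's bottleneck-capacity estimate builds on the classic packet-pair idea: packets sent back to back leave the bottleneck link spaced by its transmission time. A deliberately simplified sketch of that idea (the real tool sends many probes of varying sizes and filters the estimates for robustness):

```python
def packet_pair_capacity(packet_size_bytes, gap_seconds):
    """Bottleneck-capacity estimate from one back-to-back packet pair:
    the bottleneck link spaces the packets by its transmission time,
    so capacity ~ packet size / inter-arrival gap."""
    return packet_size_bytes * 8 / gap_seconds   # bits per second

# 1500-byte packets arriving 1.2 ms apart -> ~10 Mb/s bottleneck
print(packet_pair_capacity(1500, 1.2e-3) / 1e6, "Mb/s")
```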
Abstract:
This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light on a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's law, Fitts' law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
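A minimal sketch of the VITE dynamics described above: a difference vector V integrates target position minus present position, and the present-position command P integrates the rectified, GO-gated V. The parameter values here are illustrative, not taken from the article.

```python
import numpy as np

def vite(target, steps=2000, dt=1e-3, alpha=30.0, go=8.0):
    """Minimal VITE sketch: a difference vector V integrates target
    minus present position, and the present-position command P
    integrates the rectified, GO-gated V. Parameters are illustrative."""
    P, V, traj = 0.0, 0.0, []
    for _ in range(steps):
        V += dt * alpha * (-V + target - P)   # difference-vector dynamics
        P += dt * go * max(V, 0.0)            # GO signal gates the outflow
        traj.append(P)
    return np.array(traj)

# Rescaling the GO signal rescales movement speed but not the endpoint:
slow, fast = vite(10.0, go=4.0), vite(10.0, go=16.0)
print(slow[-1], fast[-1])   # both approach the target of 10
```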
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to relieve the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. Among these, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements with regard to ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using 3 ps pulses from mode-locked laser sources was used to accurately measure the carrier dynamics in the devices under test. The research falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom-epitaxy EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations in order to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
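A typical analysis step for such pump-probe traces is extracting a recovery time constant by fitting a single exponential to the probe transient. The sketch below does this on synthetic data; the function and all numbers are illustrative, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, a, tau, offset):
    """Single-exponential recovery of the probe transmission after
    pump excitation; tau is the carrier recovery time."""
    return a * np.exp(-t / tau) + offset

# Synthetic trace (delays in ps); values are illustrative only:
np.random.seed(0)
t = np.linspace(0.0, 100.0, 50)
trace = recovery(t, -0.4, 25.0, 0.0) + 0.01 * np.random.randn(t.size)

popt, _ = curve_fit(recovery, t, trace, p0=(-0.3, 20.0, 0.0))
print("fitted recovery time: %.1f ps" % popt[1])
```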
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres, which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected by short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10x 10 Gb/s or 4x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links that can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: a 1 Tb/s transceiver would require integration of 40 VCSELs (vertical-cavity surface-emitting laser diodes, widely used for short-reach optical interconnects), 40 photodiodes and the electronics operating at 25 Gb/s in the same module as today's 100 Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work examines a number of state-of-the-art technologies, investigates their performance constraints and recommends different designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep-submicron (65 nm and 28 nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal and bandwidth extension by inductive peaking and various local feedback techniques. These techniques were applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such modulation formats increase the throughput per channel, which helps to overcome the challenges above and realize 400 Gb/s to 1 Tb/s transceivers.
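To make the PAM-4 idea concrete, here is a minimal, Gray-coded mapping of bit pairs onto four amplitude levels; this is a generic illustration, not the transmitter design from the thesis.

```python
import numpy as np

# Gray-coded PAM-4: 2 bits/symbol onto 4 amplitude levels. Gray coding
# keeps adjacent levels one bit apart, so the most likely symbol errors
# (nearest-level) cost only a single bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    """Map a flat bit sequence (even length) onto PAM-4 levels."""
    pairs = np.asarray(bits, dtype=int).reshape(-1, 2)
    return np.array([GRAY_PAM4[(int(a), int(b))] for a, b in pairs])

# 8 bits -> 4 symbols: twice the throughput per channel of NRZ/OOK
print(pam4_modulate([0, 0, 0, 1, 1, 1, 1, 0]))   # [-3 -1  1  3]
```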
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links ever closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats in Chapter 3 of this thesis was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins with a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and to assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing and compares the results with numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
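For concreteness, here is a minimal sketch of PS-QPSK mapping: each symbol carries three bits, two selecting a QPSK point and one selecting which polarization transmits it, while the other polarization stays dark. This is a generic illustration, not the implementation from the thesis.

```python
import numpy as np

# Gray-coded QPSK points (normalized below to unit power)
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def ps_qpsk_symbol(b0, b1, b2):
    """PS-QPSK carries 3 bits per 4D symbol: (b0, b1) pick a QPSK
    point and b2 picks which polarization transmits it, while the
    other polarization stays dark."""
    point = QPSK[(b0, b1)] / np.sqrt(2.0)
    return (point, 0j) if b2 == 0 else (0j, point)

# One symbol: Ex dark, Ey carries the QPSK point for bits (1, 0)
print(ps_qpsk_symbol(1, 0, 1))
```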