947 results for high-frequency conversion
Abstract:
We study the structure and shear flow behavior of a side-on liquid crystalline triblock copolymer, PBA-b-PA444-b-PBA (PBA is poly(butyl acrylate) and PA444 is a poly(acrylate) with a nematic side-on liquid crystal mesogen), in the self-assembled lamellar phase and in the disordered phase. Simultaneous oscillatory shear and small-angle X-ray scattering experiments show that shearing PBA-b-PA444-b-PBA at high frequencies and strain amplitudes aligns the lamellae with their normals perpendicular to both the shear direction and the velocity gradient direction, i.e., in the perpendicular orientation. The order-to-disorder transition temperature (T-ODT) is independent of the applied strain, in contrast to results reported in the literature for coil-coil diblock copolymers, which show an increase in T-ODT with shear rate. A possible explanation is that the fluctuations in our system are weaker than those present in coil-coil diblock copolymer systems, so that T-ODT does not depend on the applied strain.
Abstract:
Background: Inadvertent drilling on the ossicular chain is one cause of the sensorineural hearing loss (HL) that may follow tympanomastoid surgery. A high-frequency HL is most frequently observed. It is speculated that the HL results from vibration of the ossicular chain resembling acoustic noise trauma. It is generally considered that a large cutting burr is more likely to cause damage than a small diamond burr. Aim: The aim was to investigate the equivalent noise level, and its frequency characteristics, generated by drilling onto the short process of the incus in fresh human temporal bones. Methods and Materials: Five fresh cadaveric temporal bones were used. Stapes displacement was measured using laser Doppler vibrometry during short drilling episodes. Diamond and cutting burrs of different diameters were used. The effect of drilling on stapes footplate displacement was compared with that generated by an acoustic signal, from which the equivalent noise level (dB sound pressure level equivalent [SPL eq]) was calculated. Results: The equivalent noise levels generated ranged from 93 to 125 dB SPL eq. For a 1-mm cutting burr, the highest equivalent noise level was 108 dB SPL eq, whereas a 2.3-mm cutting burr produced a maximal level of 125 dB SPL eq. Diamond burrs generated less noise than their cutting counterparts, with a 2.3-mm diamond burr producing a highest equivalent noise level of 102 dB SPL eq. The energy of the noise increased at the higher end of the frequency spectrum, with a 2.3-mm cutting burr producing a noise level of 105 dB SPL eq at 1 kHz and 125 dB SPL eq at 8 kHz. In contrast, the same-sized diamond burr produced 96 dB SPL eq at 1 kHz and 99 dB SPL eq at 8 kHz. Conclusion: This study suggests that drilling on the ossicular chain can produce vibratory forces comparable to noise levels known to produce acoustic trauma. For the same type of burr, the larger the diameter, the greater the vibratory force; for the same size of burr, the cutting burr creates more vibratory force than the diamond burr. The cutting burr produces greater high-frequency than low-frequency vibratory energy.
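A minimal sketch of the comparison step described above: if stapes displacement is assumed to scale linearly with sound pressure at a given frequency, a drilling episode can be mapped to an equivalent noise level via a calibration tone of known SPL. The function, variable names and example numbers are illustrative, not taken from the study.

```python
import numpy as np

def equivalent_noise_level(disp_drill, disp_acoustic, acoustic_spl):
    """Estimate an equivalent noise level (dB SPL eq) for a drilling episode.

    disp_drill    : stapes displacement amplitude during drilling (m)
    disp_acoustic : stapes displacement amplitude for the reference tone (m)
    acoustic_spl  : sound pressure level of the reference tone (dB SPL)

    Assumes displacement scales linearly with sound pressure at a given
    frequency, so the level difference is 20*log10 of the displacement ratio.
    """
    return acoustic_spl + 20.0 * np.log10(disp_drill / disp_acoustic)

# Hypothetical numbers: a burr producing 10x the displacement of a
# 105 dB SPL calibration tone would map to ~125 dB SPL eq.
print(equivalent_noise_level(disp_drill=1e-8, disp_acoustic=1e-9, acoustic_spl=105.0))
```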
Abstract:
This study investigates the human response to impulse perturbations applied at the midpoint of a haptically guided straight-line point-to-point movement. Such perturbation responses may be used as an assessment tool during robot-mediated neuro-rehabilitation therapy. Subjects show considerable variability in their perturbation responses. Movements with a lower perturbation displacement exhibit high-frequency oscillations, indicative of increased joint stiffness. Conversely, movements with a high perturbation displacement exhibit lower-frequency oscillations with higher amplitude and a longer settling time. Some subjects show unexpected transients during the perturbation impulse, which may be caused by complex joint interactions in the hand and arm.
Abstract:
Although extensively studied within the lidar community, the multiple scattering phenomenon has long been considered a rare curiosity by radar meteorologists. Until a few years ago its appearance had only been associated with two- or three-body scattering features (e.g. hail flares and mirror images) involving highly reflective surfaces. Recent atmospheric research aimed at a better understanding of the water cycle and the role played by clouds and precipitation in affecting the Earth's climate has driven the deployment of high-frequency radars in space. Examples are the TRMM 13.5 GHz, the CloudSat 94 GHz, the upcoming EarthCARE 94 GHz, and the GPM dual 13-35 GHz radars. These systems are able to detect the vertical distribution of hydrometeors and thus provide crucial information for radiation and climate studies. The shift towards higher frequencies increases the sensitivity to hydrometeors, improves the spatial resolution, and reduces the size and weight of the radar systems. On the other hand, higher-frequency radars are affected by stronger extinction, especially in the presence of large precipitating particles (e.g. raindrops or hail particles), which may eventually drive the signal below the minimum detection threshold. In such circumstances the interpretation of the radar equation via the single scattering approximation may be problematic. Errors will be large when the radiation emitted from the radar, after interacting more than once with the medium, still contributes substantially to the received power. This is the case when the transport mean free path becomes comparable with the instrument footprint (determined by the antenna beamwidth and the platform altitude). The situation resembles what has already been experienced in lidar observations, but with a predominance of wide-angle over small-angle scattering events. At millimeter wavelengths, hydrometeors diffuse radiation rather isotropically, in contrast to the visible or near-infrared region where scattering is predominantly in the forward direction. A complete understanding of radiation transport modeling and data analysis methods under wide-angle multiple scattering conditions is mandatory for a correct interpretation of echoes observed by space-borne millimeter radars. This paper reviews the status of research in this field. Different numerical techniques currently implemented to account for higher-order scattering are reviewed and their weaknesses and strengths highlighted. Examples of simulated radar backscattering profiles are provided, with particular emphasis on situations in which the multiple scattering contributions become comparable to, or overwhelm, the single scattering signal. We show evidence of multiple scattering effects from air-borne and from CloudSat observations, i.e. unique signatures which cannot be explained by single scattering theory. Ideas on how to identify and tackle multiple scattering effects are discussed. Finally, perspectives and suggestions for future work are outlined. This work represents a reference guide for studies focused on modeling radiation transport and on interpreting data from high-frequency space-borne radar systems that probe highly opaque scattering media such as thick ice clouds or precipitating clouds.
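As a rough illustration of the criterion mentioned above (transport mean free path comparable to the instrument footprint), the sketch below estimates a footprint from platform altitude and antenna beamwidth and flags when wide-angle multiple scattering is likely to matter. The geometry values are hypothetical, CloudSat-like numbers, not quantities taken from this paper.

```python
import numpy as np

def footprint_diameter(altitude_m, beamwidth_deg):
    """Approximate footprint diameter from platform altitude and antenna
    3 dB beamwidth (small-angle approximation)."""
    return altitude_m * np.deg2rad(beamwidth_deg)

def multiple_scattering_likely(transport_mfp_m, altitude_m, beamwidth_deg):
    """Rough flag: multiple scattering becomes important when the transport
    mean free path is comparable to (or smaller than) the footprint."""
    return transport_mfp_m <= footprint_diameter(altitude_m, beamwidth_deg)

# Hypothetical CloudSat-like geometry: ~700 km orbit, ~0.108 deg beamwidth
# gives a footprint of roughly 1.3 km; a 1 km transport mean free path in a
# deep convective core would then satisfy the criterion.
print(footprint_diameter(700e3, 0.108))               # ~1.3e3 m
print(multiple_scattering_likely(1e3, 700e3, 0.108))  # True
```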
Abstract:
In situ precipitation measurements can differ strongly in space and time. Taking into account the limited spatial–temporal representativity and the uncertainty of a single station is important for validating mesoscale numerical model results as well as for interpreting remote sensing data. In situ precipitation data from a high-resolution network in north-eastern Germany are analysed to determine their temporal and spatial representativity. For the dry year 2003, precipitation amounts were available at 10-min resolution from 14 rain gauges distributed over an area of 25 km × 25 km around the Meteorological Observatory Lindenberg (Richard-Aßmann Observatory). Our analysis reveals that short-term (up to 6 h) precipitation events dominate (94% of all events) and that the distribution is skewed, with a high frequency of very low precipitation amounts. Long-lasting precipitation events are rare (6% of all precipitation events) but account for nearly 50% of the annual precipitation. The spatial representativity of a single-site measurement increases slightly for longer measurement intervals, and the variability decreases. Hourly precipitation amounts are representative for an area of 11 km × 11 km. Daily precipitation amounts appear to be reliable within an uncertainty factor of 3.3 for an area of 25 km × 25 km, and weekly and monthly precipitation amounts have uncertainties of a factor of 2 and 1.4, respectively, when compared to 25 km × 25 km mean values.
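One way such a single-site uncertainty factor could be quantified is sketched below; the definition used here (a high percentile of the ratio between a single gauge and the area mean, taken in whichever direction exceeds one) is an assumption for illustration, and the example totals are invented, so the study's exact procedure may differ.

```python
import numpy as np

def uncertainty_factor(single_gauge, area_mean, percentile=90):
    """Multiplicative uncertainty factor of a single gauge relative to the
    area-mean precipitation, evaluated over intervals where both recorded
    rain. Defined here (an assumption) as a high percentile of
    max(ratio, 1/ratio)."""
    single_gauge = np.asarray(single_gauge, dtype=float)
    area_mean = np.asarray(area_mean, dtype=float)
    wet = (single_gauge > 0) & (area_mean > 0)
    ratio = single_gauge[wet] / area_mean[wet]
    return np.percentile(np.maximum(ratio, 1.0 / ratio), percentile)

# Hypothetical daily totals (mm) for one gauge and the 25 km x 25 km mean.
gauge = np.array([0.0, 2.1, 5.0, 12.0, 0.4])
area = np.array([0.1, 1.0, 4.0, 10.0, 1.2])
print(uncertainty_factor(gauge, area))
```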
Abstract:
Multi-factor approaches to the analysis of real estate returns have, since the pioneering work of Chan, Hendershott and Sanders (1990), emphasised a macro-variables approach in preference to the latent factor approach that formed the original basis of the arbitrage pricing theory. With the increasing use of high-frequency data and trading strategies, and with a growing emphasis on the risks of extreme events, the macro-variable procedure has some deficiencies. This paper explores a third way, using an alternative to the standard principal components approach – independent components analysis (ICA). ICA seeks higher-moment independence and maximises with respect to a chosen risk parameter. We apply a kurtosis-maximising ICA algorithm to weekly US REIT data. The results show that ICA is successful in capturing the kurtosis characteristics of REIT returns, offering possibilities for the development of risk management strategies that are sensitive to extreme events and tail distributions.
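A minimal sketch of a kurtosis-oriented ICA, using scikit-learn's FastICA with the cubic nonlinearity as a stand-in for the paper's kurtosis-maximising algorithm; the return series here are synthetic fat-tailed data, not REIT returns.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in for weekly returns, shape (n_weeks, n_assets): fat-tailed noise.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=(500, 10))

# FastICA with the cubic nonlinearity corresponds to a kurtosis-based
# contrast, i.e. it seeks maximally kurtotic independent components.
ica = FastICA(n_components=4, fun="cube", random_state=0)
components = ica.fit_transform(returns)   # (n_weeks, n_components)

def excess_kurtosis(x):
    """Excess kurtosis of each column (0 for a Gaussian)."""
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    return (x ** 4).mean(axis=0) - 3.0

print(excess_kurtosis(components))
```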
Abstract:
This paper examines two hydrochemical time-series derived from stream samples taken in the Upper Hafren catchment, Plynlimon, Wales. One time-series comprises data collected at 7-hour intervals over 22 months (Neal et al., submitted, this issue), while the other is based on weekly sampling over 20 years. A subset of determinands (aluminium, calcium, chloride, conductivity, dissolved organic carbon, iron, nitrate, pH, silicon and sulphate) is examined within a framework of non-stationary time-series analysis to identify determinand trends, seasonality and short-term dynamics. The results demonstrate that both long-term and high-frequency monitoring provide valuable and unique insights into the hydrochemistry of a catchment. The long-term data allowed analysis of long-term trends, demonstrating continued increases in DOC concentrations accompanied by declining SO4 concentrations within the stream, and provided new insights into the changing amplitude and phase of the seasonality of determinands such as DOC and Al. Additionally, these data proved invaluable for placing the short-term variability seen in the high-frequency data in context. The 7-hour data highlighted complex diurnal cycles for NO3, Ca and Fe, with cycles displaying changes in phase and amplitude on a seasonal basis. The high-frequency data also demonstrated the need to consider the impact that the time of sample collection can have on the summary statistics of the data, and showed that sampling during the hours of darkness provides additional hydrochemical information for determinands which exhibit pronounced diurnal variability. Moving forward, this research demonstrates the need for both long-term and high-frequency monitoring to facilitate a full and accurate understanding of catchment hydrochemical dynamics.
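As an illustration of the kind of non-stationary decomposition referred to above, the sketch below applies STL (one possible choice; the paper's framework may use a different method, e.g. dynamic harmonic regression) to a synthetic weekly record standing in for a determinand such as DOC.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic weekly determinand record: trend + annual cycle + noise (~20 years).
rng = np.random.default_rng(1)
n = 20 * 52
dates = pd.date_range("1990-01-01", periods=n, freq="W")
week = np.arange(n)
doc = 3 + 0.02 * week / 52 + 1.5 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 0.4, n)
series = pd.Series(doc, index=dates)

# STL separates trend, seasonal cycle and residual, one way to examine
# long-term trends and changing seasonal amplitude in such records.
result = STL(series, period=52, robust=True).fit()
print(result.trend.iloc[-1] - result.trend.iloc[0])  # net trend over the record
```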
Abstract:
In this paper we describe how to cope with the delays inherent in a real-time control system for a steerable stereo head/eye platform. A purposive and reactive system requires fast vision algorithms to provide the controller with the error signals that drive the platform. A time-critical implementation of these algorithms is necessary, not only to enable short-latency reaction to real-world events, but also to provide results at a sufficiently high frequency, and with small enough delays, that the controller remains stable. However, even with precise knowledge of the delay, nonlinearities in the plant make modelling of the plant impossible, thus precluding the use of a Smith Regulator. Moreover, the major delay in the system is in the feedback loop (image capture and vision processing) rather than the feedforward (controller) loop. Delays ranging between 40 ms and 80 ms are common for simple 2D processes, but may extend to several hundred milliseconds for more sophisticated 3D processes. The strategy presented gives precise control over the gaze direction of the cameras despite the lack of a priori knowledge of the delays involved. The resulting controller is shown to have a structure similar to the Smith Regulator, but with essential modifications.
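The sketch below is a toy discrete-time illustration of the Smith-predictor idea the paper builds on: the plant is idealised as an integrator, the visual measurement arrives several samples late, and the controller rolls the delayed measurement forward using the commands it has already issued. All gains, delays and the plant model are invented for illustration and are not the authors' controller.

```python
# Toy predictive compensation for feedback delay in a gaze controller.
dt, d, kp = 0.02, 4, 0.6          # sample time (s), delay in samples, gain
target = 10.0                     # desired gaze angle (deg)
angle = 0.0                       # true plant state (integrator)
meas_buf = [0.0] * d              # pipeline of delayed measurements
cmd_buf = [0.0] * d               # commands issued during the delay window

for k in range(200):
    delayed_meas = meas_buf.pop(0)
    # Predict the current angle: delayed measurement + effect of recent commands.
    predicted = delayed_meas + dt * sum(cmd_buf)
    u = kp * (target - predicted) / dt
    # Plant update (integrator) and buffer bookkeeping.
    angle += dt * u
    meas_buf.append(angle)
    cmd_buf.pop(0)
    cmd_buf.append(u)

print(round(angle, 3))   # converges towards the target despite the delayed feedback
```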
Abstract:
This paper compares the performance of artificial neural networks (ANNs) with that of the modified Black model in both pricing and hedging Short Sterling options. Using high frequency data, standard and hybrid ANNs are trained to generate option prices. The hybrid ANN is significantly superior to both the modified Black model and the standard ANN in pricing call and put options. Hedge ratios for hedging Short Sterling options positions using Short Sterling futures are produced using the standard and hybrid ANN pricing models, the modified Black model, and also standard and hybrid ANNs trained directly on the hedge ratios. The performance of hedge ratios from ANNs directly trained on actual hedge ratios is significantly superior to those based on a pricing model, and to the modified Black model.
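For reference, a minimal sketch of the Black (1976) price and delta for options on futures, the usual basis of the "modified Black" benchmark for Short Sterling options; the exact modification used in the paper is not reproduced here, and the input numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def black76_call(F, K, sigma, T, r):
    """Black (1976) price of a call on a futures contract.
    F: futures price, K: strike, sigma: volatility,
    T: time to expiry (years), r: continuously compounded discount rate."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

def black76_call_delta(F, K, sigma, T, r):
    """Hedge ratio (delta) with respect to the futures price."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    return np.exp(-r * T) * norm.cdf(d1)

# Illustrative inputs only (Short Sterling futures are quoted as 100 minus the rate).
print(black76_call(F=95.0, K=94.5, sigma=0.02, T=0.25, r=0.05))
print(black76_call_delta(F=95.0, K=94.5, sigma=0.02, T=0.25, r=0.05))
```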
Abstract:
Aim: To develop a list of prescribing indicators specific to the hospital setting that would facilitate the prospective collection of high-severity and/or high-frequency prescribing errors, which are also amenable to electronic clinical decision support (CDS). Method: A three-stage consensus technique (electronic Delphi) was carried out with 20 expert pharmacists and physicians across England. Participants were asked to score prescribing errors on a 5-point Likert scale for their likelihood of occurrence and the severity of the most likely outcome. These were combined to produce risk scores, from which median scores were calculated for each indicator across the participants in the study. The degree of consensus between the participants was defined as the proportion that gave a risk score in the same category as the median. Indicators were included if a consensus of 80% or more was achieved. Results: A total of 80 prescribing errors were identified by consensus as being high or extreme risk. The most common drug classes named within the indicators were antibiotics (n=13), antidepressants (n=8), nonsteroidal anti-inflammatory drugs (n=6), and opioid analgesics (n=6). The most frequent error types identified as high or extreme risk were those classified as clinical contraindications (n=29/80). Conclusion: 80 high-risk prescribing errors in the hospital setting have been identified by an expert panel. These indicators can serve as the basis for a standardised, validated tool for the collection of data in both paper-based and electronic prescribing processes, as well as for assessing the impact of electronic decision support implementation or development.
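A small sketch of how the scoring and consensus rule described above could be computed; combining likelihood and severity by multiplication and the risk-band boundaries below are assumptions for illustration, not the study's exact definitions.

```python
import numpy as np

def consensus(likelihood, severity, threshold=0.8):
    """Combine per-participant likelihood and severity scores (1-5 Likert)
    into risk scores, take the median, and check whether at least `threshold`
    of participants fall in the same risk category as the median.
    Multiplicative combination and the band edges are illustrative."""
    risk = np.asarray(likelihood) * np.asarray(severity)       # 1..25
    median = np.median(risk)
    bands = [5, 10, 15, 20]                                    # illustrative bands
    categories = np.digitize(risk, bins=bands)
    median_cat = np.digitize([median], bins=bands)[0]
    agreement = np.mean(categories == median_cat)
    return median, agreement, agreement >= threshold

# 20 hypothetical panellists scoring one indicator.
rng = np.random.default_rng(2)
likelihood = rng.integers(3, 6, size=20)   # scores of 3-5
severity = rng.integers(4, 6, size=20)     # scores of 4-5
print(consensus(likelihood, severity))
```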
Abstract:
We present a new sparse shape modeling framework based on the Laplace-Beltrami (LB) eigenfunctions. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes by forming a Fourier series expansion. To reduce high-frequency noise, only the first few terms are used in the expansion and the higher-frequency terms are simply discarded. However, some lower-frequency terms may not contribute significantly to reconstructing the surfaces. Motivated by this observation, we propose to retain only the significant eigenfunctions by imposing an l1-penalty. The new sparse framework can further avoid the additional surface-based smoothing often used in the field. The proposed approach is applied to investigating the influence of age (38-79 years) and gender on amygdala and hippocampus shapes in the normal population. In addition, we show how the emotional response is related to the anatomy of these subcortical structures.
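A minimal sketch of the l1-penalised selection idea, assuming the eigenfunctions are available as a basis matrix evaluated at the mesh vertices; here a random orthonormal matrix and a synthetic signal stand in for the LB eigenfunctions and the surface coordinate data.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Columns of `basis` stand in for LB eigenfunctions sampled at mesh vertices.
rng = np.random.default_rng(3)
n_vertices, n_eigen = 2000, 100
basis, _ = np.linalg.qr(rng.normal(size=(n_vertices, n_eigen)))
basis *= np.sqrt(n_vertices)          # scale columns like standardised regressors

# Ground-truth signal built from a handful of basis functions plus noise.
true_coef = np.zeros(n_eigen)
true_coef[[2, 7, 40]] = [5.0, -3.0, 2.0]
signal = basis @ true_coef + rng.normal(0, 0.05, n_vertices)

# The l1-penalised fit keeps only the basis functions that matter,
# rather than truncating at a fixed frequency cut-off.
model = Lasso(alpha=0.01, fit_intercept=False).fit(basis, signal)
print(np.flatnonzero(np.abs(model.coef_) > 1e-3))   # indices of retained terms
```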
Abstract:
Patches of ionization are common in the polar ionosphere, where their motion and associated density gradients cause variable disturbances to High Frequency (HF) radio communications, location errors in over-the-horizon radar, and disruption and errors in satellite navigation and communication. Their formation and evolution are poorly understood, particularly under disturbed space weather conditions. We report direct observations of the full evolution of patches during a geomagnetic storm, including formation, polar cap entry, transpolar evolution, polar cap exit, and sunward return flow. Our observations show that modulation of nightside reconnection in the substorm cycle of the magnetosphere helps form the gaps between patches where steady convection would instead give a continuous "tongue" of ionization (TOI).
Abstract:
While stirring and mixing properties in the stratosphere are reasonably well understood in the context of balanced (slow) dynamics, as evidenced by numerous studies of chaotic advection, the strongly enhanced presence of high-frequency gravity waves in the mesosphere gives rise to a significant unbalanced (fast) component of the flow. The present investigation analyses results from two idealized shallow-water numerical simulations representative of stratospheric and mesospheric dynamics on a quasi-horizontal isentropic surface. A generalization of the Hua–Klein Eulerian diagnostic to divergent flow reveals that velocity gradients are strongly influenced by the unbalanced component of the flow. The Lagrangian diagnostic of patchiness nevertheless demonstrates the persistence of coherent features in the zonal component of the flow, in contrast to the destruction of coherent features in the meridional component. Single-particle statistics demonstrate t² scaling for both the stratospheric and mesospheric regimes in the case of zonal dispersion, and distinctive scaling laws for the two regimes in the case of meridional dispersion. This contrasts with the two-particle statistics, which in the mesospheric (unbalanced) regime show a more rapid approach to Richardson's t³ law in the case of zonal dispersion and provide evidence of enhanced meridional dispersion.
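A small sketch of how single- and two-particle dispersion exponents of the kind quoted above (t² ballistic scaling, Richardson-like t³) can be estimated from particle trajectories; the trajectories below are synthetic ballistic ones chosen to recover the t² case, not output from the shallow-water simulations.

```python
import numpy as np

def dispersion_exponent(positions, pair=False):
    """Estimate the power-law exponent of (relative) dispersion versus time.

    positions: array of shape (n_times, n_particles) holding 1-D particle
    coordinates (e.g. zonal displacement on an isentropic surface).
    pair=False -> single-particle (absolute) dispersion <(x(t) - x(0))^2>
    pair=True  -> two-particle (relative) dispersion between particle pairs.
    """
    if pair:
        sep = positions[:, 0::2] - positions[:, 1::2]      # pair separations
        disp = np.mean((sep - sep[0]) ** 2, axis=1)
    else:
        disp = np.mean((positions - positions[0]) ** 2, axis=1)
    t = np.arange(1, positions.shape[0])
    slope, _ = np.polyfit(np.log(t), np.log(disp[1:]), 1)
    return slope

# Synthetic ballistic trajectories x(t) = v*t with random v give t^2 scaling.
rng = np.random.default_rng(4)
t = np.arange(200)[:, None]
v = rng.normal(size=(1, 1000))
print(dispersion_exponent(t * v))   # close to 2
```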
Abstract:
Introgression in Festulolium is a potentially powerful tool for isolating genes for the many traits that differ between Festuca pratensis Huds. and Lolium perenne L. Not only are hybrids between the two species fertile, but the two genomes can be distinguished by genomic in situ hybridisation, and a high frequency of recombination occurs between homoeologous chromosomes and chromosome segments. Through a programme of introgression and a series of backcrosses, L. perenne lines have been produced which contain small F. pratensis substitutions. This material is a rich source of polymorphic markers targeted towards any trait carried on the F. pratensis substitution and not observed in the L. perenne background. We describe here the construction of an F. pratensis BAC library, which establishes the basis of a map-based cloning strategy in L. perenne. The library contains 49,152 clones with an average insert size of 112 kbp, providing coverage of 2.5 haploid genome equivalents. We screened the library for eight amplified fragment length polymorphism (AFLP)-derived markers known to be linked to an F. pratensis gene introgressed into L. perenne and conferring a stay-green phenotype as a consequence of a mutation in primary chlorophyll catabolism. While it was possible to identify bacterial artificial chromosome (BAC) clones for four of the markers, the other four AFLPs were too repetitive to enable reliable identification of locus-specific BACs. Moreover, when the four BACs were partially sequenced, no obvious coding regions could be identified. This contrasted with BACs identified using cDNA sequences, where multiple genes were identified on the same BAC.
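The stated 2.5x haploid coverage is consistent with the clone count and mean insert size if one assumes a haploid genome size of roughly 2.2 Gbp (a value implied by the figures given, not stated in the abstract); a quick arithmetic check:

```python
clones = 49_152
insert_kbp = 112
genome_gbp = 2.2      # assumed haploid genome size implied by the stated coverage

total_gbp = clones * insert_kbp * 1e3 / 1e9   # total cloned DNA in Gbp
print(total_gbp)                              # ~5.5 Gbp
print(total_gbp / genome_gbp)                 # ~2.5 haploid genome equivalents
```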
Abstract:
This chapter highlights similarities and differences of equity and fixed-income markets and provides an overview of the characteristics of European government bond market trading and liquidity. Most existing studies focus on the U.S. market. This chapter presents the institutional details of the MTS market, which is the largest European electronic platform for trading government, quasi-government, asset-backed, and corporate fixed-income securities. It reviews the main features of high-frequency fixed-income data and the methods for measuring market liquidity. Finally, the chapter shows how liquidity differs across European countries, how liquidity varies with the structure of the market, and how liquidity has changed during the recent liquidity and sovereign crises.
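As an illustration of liquidity measures of the kind such a chapter typically reviews, the sketch below computes a quoted spread and an effective spread in basis points from toy quote and trade records; the field names and numbers are invented, not MTS data, and the chapter's exact set of measures may differ.

```python
import pandas as pd

# Toy quote and trade records; field names are illustrative.
quotes = pd.DataFrame({
    "bid": [99.80, 99.81, 99.79],
    "ask": [99.86, 99.85, 99.86],
})
trades = pd.DataFrame({
    "price": [99.85, 99.80, 99.86],
    "mid_at_trade": [99.830, 99.820, 99.825],
    "side": [1, -1, 1],          # +1 buyer-initiated, -1 seller-initiated
})

# Quoted spread: ask minus bid relative to the mid, in basis points.
mid = (quotes.bid + quotes.ask) / 2
quoted_spread_bps = 1e4 * (quotes.ask - quotes.bid) / mid

# Effective spread: twice the signed distance of the trade price from the
# prevailing mid, relative to the mid, in basis points.
effective_spread_bps = 1e4 * 2 * trades.side * (trades.price - trades.mid_at_trade) / trades.mid_at_trade

print(quoted_spread_bps.mean(), effective_spread_bps.mean())
```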