Abstract:
The chemokine receptor CCR5 responds to several chemokines, leading to changes in activity in several signalling pathways. Here, we investigated the ability of different chemokines to provide differential activation of pathways. The effects of five CC chemokines acting at CCR5 were investigated for their ability to inhibit forskolin-stimulated 3'-5'-cyclic adenosine monophosphate (cAMP) accumulation and to stimulate Ca2+ mobilisation in Chinese hamster ovary (CHO) cells expressing CCR5. Macrophage inflammatory protein 1 alpha (D26A) (MIP-1 alpha (D26A), CCL3 (D26A)), regulated on activation, normal T-cell expressed and secreted (RANTES, CCL5), MIP-1 beta (CCL4) and monocyte chemoattractant protein 2 (MCP-2, CCL8) were able to inhibit forskolin-stimulated cAMP accumulation, whilst MCP-4 (CCL13) could not elicit a response. CCL3 (D26A), CCL4, CCL5, CCL8 and CCL13 were able to stimulate Ca2+ mobilisation through CCR5, although CCL3 (D26A) and CCL5 exhibited biphasic concentration-response curves. The Ca2+ responses induced by CCL4, CCL5, CCL8 and CCL13 were abolished by pertussis toxin, whereas the response to CCL3 (D26A) was only partially inhibited by pertussis toxin, indicating G(i/o)-independent signalling induced by this chemokine. Although the rank order of potency of chemokines was similar between the two assays, certain chemokines displayed different pharmacological profiles in cAMP inhibition and Ca2+ mobilisation assays. For instance, whilst CCL13 could not inhibit forskolin-stimulated cAMP accumulation, this chemokine was able to induce Ca2+ mobilisation via CCR5. It is concluded that different chemokines acting at CCR5 can induce different pharmacological responses, which may account for the broad spectrum of chemokines that can act at CCR5. (C) 2007 Elsevier Inc. All rights reserved.
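The potency comparisons above rest on fitting concentration-response curves; as a generic illustration only (not the authors' analysis), a four-parameter logistic (Hill) fit to hypothetical monophasic data is sketched below. The concentrations, responses and starting values are all made up, and the biphasic curves reported for CCL3 (D26A) and CCL5 would need a two-site extension of this model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic (Hill) concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical concentration-response data (nM concentrations, % maximal response)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
resp = np.array([2.0, 5.0, 14.0, 35.0, 62.0, 85.0, 95.0, 98.0])

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 10.0, 1.0])
print(f"fitted EC50 ~ {popt[2]:.1f} nM, Hill slope ~ {popt[3]:.2f}")
```

Rank orders of potency in the two assays are then built by comparing the fitted EC50 (or IC50) values across chemokines.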
Abstract:
Objective: To compare the frequency of nail biting in 4 settings (interventions) designed to elicit the functions of nail biting and to compare the results with a self-report questionnaire about the functions of nail biting. Design: Randomised allocation of participants to order of conditions. Setting: University Psychology Department. Subjects: Forty undergraduates who reported biting their nails. Interventions: Left alone (boredom), solving maths problems (frustration), reprimanded for nail biting (contingent attention), continuous conversation (noncontingent attention). Main Outcome Measures: Number of times the undergraduates bit their nails. Results: Nail biting occurred most often in two conditions, boredom and frustration. Conclusion: Nail biting in young adults occurs as a result of boredom or working on difficult problems, which may reflect a particular emotional state. It occurs least often when people are engaged in social interaction or when they are reprimanded for the behaviour. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset (the attentional blink). However, if T2 immediately follows T1, performance is often reported to be as good as that at long lags, the so-called Lag-1 sparing effect. Two experiments investigated the mechanisms underlying this effect. Experiment 1 showed that, at Lag 1, requiring subjects to correctly report both the identity and the temporal order of the targets produces relatively good performance on T2 but relatively bad performance on T1. Experiment 2 confirmed that subjects often confuse target order at short lags, especially if the two targets are equally easy to discriminate. The results suggest that, if two targets appear in close succession, they compete for attentional resources. If the two competitors are of unequal strength, the stronger one is more likely to win and be reported at the expense of the other. If the two are equally strong, however, they will often be integrated into the same attentional episode and thus both gain access to attentional resources. This comes at a cost, however, as it eliminates information about the targets' temporal order.
Abstract:
Objective: To determine whether attractiveness and success of surgical outcome differ according to the timing of cleft lip repair. Design: Three experiments were conducted: (1) surgeons rated postoperative medical photographs of infants having either neonatal or 3-month lip repair; (2) lay panelists rated the same photographs; (3) lay panelists rated dynamic video displays of the infants made at 12 months. Normal comparison infants were also rated. The order of stimuli was randomized, and panelists were blind to timing of lip repair and the purposes of the study. Setting: Four U.K. regional centers for cleft lip and palate. Participants: Infants with isolated clefts of the lip, with and without palate. Intervention: Early lip repair was conducted at median age 4 days (inter-quartile range [IQR] = 4), and late repair at 104 days (IQR = 57). Main Outcome Measures: Ratings of surgical outcome (Experiment 1 only) and attractiveness (all experiments) on 5-point Likert scales. Results: In Experiment 1 success of surgical outcome was comparable for early and late repair groups (difference = -0.08; 95% confidence interval [CI] = -0.43 to 0.28; p = .66). In all three experiments, attractiveness ratings were comparable for the two groups. Differences were, respectively, 0.10 (95% CI = -2.3 to 0.44, p = .54); -0.11 (95% CI = -0.42 to -0.19, p = .54); and 0.08 (95% CI = -0.11 to 0.28, p = .41). Normal infants were rated more attractive than index infants (difference = 0.38; 95% CI = 0.24 to 0.52; p < .001). Conclusion: Neonatal repair for cleft of the lip confers no advantage over repair at 3 months in terms of perceived infant attractiveness or success of surgical outcome.
Abstract:
The hypercube is one of the most popular topologies for connecting processors in multicomputer systems. In this paper we address the maximum order of a connected component in a faulty hypercube. The results established include several known conclusions as special cases. We conclude that the hypercube structure is resilient, as it retains a large connected component in the presence of a large number of faulty vertices.
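As a quick empirical companion to this kind of result (an illustration, not the paper's analytical bound), one can delete a random set of "faulty" vertices from the n-cube and measure the largest surviving component. The sketch below uses networkx; the dimension and the number of faults are arbitrary choices.

```python
# Empirical illustration only: remove random faulty vertices from Q_n and
# report the order of the largest connected component that survives.
import random
import networkx as nx

n, n_faults = 10, 300                      # Q_10 has 2**10 = 1024 vertices
G = nx.hypercube_graph(n)                  # vertices are 0/1 tuples of length n
faulty = random.sample(list(G.nodes), n_faults)
G.remove_nodes_from(faulty)
largest = max(nx.connected_components(G), key=len)
print(f"{G.number_of_nodes()} fault-free vertices remain; "
      f"largest connected component has {len(largest)} of them")
```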
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero and hence result in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. An error of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
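To illustrate the negative-tracer issue described above, the toy 1-D periodic scheme below (not the UM's actual advection or flux-corrected transport) shows how cubic interpolation at the departure points undershoots near a sharp gradient, so that clipping the resulting negatives to zero adds spurious tracer mass.

```python
import numpy as np

def semi_lagrangian_step(q, u_dt_dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid.

    Departure-point values are obtained by cubic Lagrange interpolation,
    which can overshoot near sharp gradients and produce negative values.
    """
    n = len(q)
    i = np.arange(n)
    xd = (i - u_dt_dx) % n                 # departure points, in grid units
    i0 = np.floor(xd).astype(int)
    s = xd - i0                            # fractional position in [0, 1)
    # Cubic Lagrange weights for the stencil i0-1, i0, i0+1, i0+2
    w = [-s * (s - 1) * (s - 2) / 6,
         (s + 1) * (s - 1) * (s - 2) / 2,
         -(s + 1) * s * (s - 2) / 2,
         (s + 1) * s * (s - 1) / 6]
    return sum(w[k] * q[(i0 + k - 1) % n] for k in range(4))

q = np.zeros(100)
q[45:55] = 1.0                             # a sharp tracer "puff"
qn = semi_lagrangian_step(q, u_dt_dx=0.3)
print("minimum after one step:", qn.min())                   # negative undershoot
clipped = np.clip(qn, 0.0, None)
print("mass added by clipping:", clipped.sum() - qn.sum())   # > 0: overprediction
```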
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, when Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, that indicate that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air–sea interface, and that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise velocity fluctuations suppressed relative to the vertical and spanwise components. Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking of the air–sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations, considerably enhancing the spanwise velocity variance at the interface. After a time of order half the eddy decorrelation time, the nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis then provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget found by previous authors.
Abstract:
We have studied enantiospecific differences in the adsorption of (S)- and (R)-alanine on Cu{531}R using low-energy electron diffraction (LEED), X-ray photoelectron spectroscopy, and near edge X-ray absorption fine structure (NEXAFS) spectroscopy. At saturation coverage, alanine adsorbs as alaninate, forming a p(1 × 4) superstructure. LEED shows a significantly higher degree of long-range order for the S than for the R enantiomer. Carbon K-edge NEXAFS spectra also show differences between (S)- and (R)-alanine in the variation of the π* resonance when the linear polarization vector is rotated within the surface plane. This indicates differences in the local adsorption geometries of the molecules, most likely caused by the interaction between the methyl group and the metal surface and/or intermolecular hydrogen bonds. Comparison with model calculations and additional information from LEED and photoelectron spectroscopy suggest that both enantiomers of alaninate adsorb in two different orientations associated with triangular adsorption sites on {110} and {311} microfacets of the Cu{531} surface. The experimental data are ambiguous as to the exact difference between the local geometries of the two enantiomers. In one of two models that fit the data equally well, significantly more (R)-alaninate molecules are adsorbed on {110} sites than on {311} sites, whereas for (S)-alaninate the numbers are equal. The enantiospecific differences found in these experiments are much more pronounced than those reported from other ultrahigh vacuum techniques applied to similar systems.
Abstract:
The conquest of Normandy by Philip Augustus of France effectively ended the ‘Anglo-Norman’ realm created in 1066, forcing cross-Channel landholders to choose between their English and their Norman estates. The best source for the resulting tenurial upheaval in England is the Rotulus de valore terrarum Normannorum, a list of seized properties and their former holders, and this article seeks to expand our understanding of the impact of the loss of Normandy through a detailed analysis of this document. First, it demonstrates that the compilation of the roll can be divided into two distinct stages, the first containing valuations taken before royal justices in June 1204 and enrolled before the end of July, and the second consisting of returns to orders for the valuation of particular properties issued during the summer and autumn, as part of the process by which these estates were committed to new holders. Second, study of the roll and other documentary sources permits a better understanding of the order for the seizure of the lands of those who had remained in Normandy, the text of which does not survive. This establishes that this royal order was issued in late May 1204 and, further, that it enjoined the temporary seizure rather than the permanent confiscation of these lands. Moreover, the seizure was not retrospective and covered a specific window of time in 1204. On the one hand, this means that the roll is far from a comprehensive record of terre Normannorum. On the other hand, it is possible to correlate the identities of those Anglo-Norman landholders whose English estates were seized with the military progress of the French king through the duchy in May and June, and thus to shed new light on the campaign of 1204. Third, the article considers the initial management of the seized estates and highlights the fact that, when making arrangements for these lands, John was primarily concerned to maintain his freedom of manoeuvre, since he was not prepared to accept that Normandy had been lost for good.
Abstract:
This paper describes time-resolved x-ray diffraction data monitoring the transformation of one inverse bicontinuous cubic mesophase into another, in a hydrated lipid system. The first section of the paper describes a mechanism for the transformation that conserves the topology of the bilayer, based on the work of Charvolin and Sadoc, Fogden and Hyde, and Benedicto and O'Brien in this area. We show a pictorial representation of this mechanism, in terms of both the water channels and the lipid bilayer. The second section describes the experimental results obtained. The system under investigation was 2:1 lauric acid: dilauroylphosphatidylcholine at a hydration of 50% water by weight. A pressure-jump was used to induce a phase transition from the gyroid (Q(II)(G)) to the diamond (Q(II)(D)) bicontinuous cubic mesophase, which was monitored by time-resolved x-ray diffraction. The lattice parameter of both mesophases was found to decrease slightly throughout the transformation, but at the stage where the Q(II)(D) phase first appeared, the ratio of lattice parameters of the two phases was found to be approximately constant for all pressure-jump experiments. The value is consistent with a topology-preserving mechanism. However, the polydomain nature of our sample prevents us from confirming that the specific pathway is that described in the first section of the paper. Our data also reveal signals from two different intermediate structures, one of which we have identified as the inverse hexagonal (H-II) mesophase. We suggest that it plays a role in the transfer of water during the transformation. The rate of the phase transition was found to increase with both temperature and pressure-jump amplitude, and its time scale varied from the order of seconds to minutes, depending on the conditions employed.
Abstract:
The scarcity and stochastic nature of genetic mutations present a significant challenge for scientists seeking to characterise de novo mutation frequency at specific loci. Such mutations can be particularly numerous during regeneration of plants from in vitro culture and can undermine the value of germplasm conservation efforts. We used cleaved amplified polymorphic sequence (CAPS) analysis to characterise new mutations amongst a clonal population of cocoa plants regenerated via a somatic embryogenesis protocol used previously for cocoa cryopreservation. Efficacy of the CAPS system for mutation detection was greatly improved after an ‘a priori’ in silico screen of reference target sequences for actual and potential restriction enzyme recognition sites using new, freely available software called Artbio. Artbio surveys known sequences for existing restriction enzyme recognition sites but also identifies all single nucleotide polymorphism (SNP) deviations from such motifs. Using this software, we performed an in silico screen of seven loci for restriction sites, and the potential mutant SNP variants at those sites, across 21 restriction enzymes. The four most informative locus-enzyme combinations were then used to survey the regenerant populations for de novo mutants. We characterised the pattern of point mutations and, using the outputs of Artbio, calculated the rate of base substitution in 114 somatic embryo-derived cocoa regenerants originating from two explant genotypes. We found 49 polymorphisms, comprising 26.3% of the samples screened, with an inferred rate of 2.8 × 10^-3 substitutions per screened base. This elevated rate is of a similar order of magnitude to previous reports of de novo microsatellite length mutations arising in the crop and suggests caution should be exercised when applying somatic embryogenesis for the conservation of plant germplasm.
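As described, the in silico step searches reference sequences for recognition motifs and for single-base changes that would create or destroy them. The sketch below is a hypothetical illustration of that kind of screen (it is not Artbio's code; the EcoRI motif and the toy sequence are arbitrary choices, and it considers exact forward-strand motifs only).

```python
# Hypothetical sketch of an in silico restriction-site/SNP screen.
MOTIF = "GAATTC"   # EcoRI, chosen arbitrarily; the study used 21 enzymes over 7 loci

def site_positions(seq, motif):
    """Start positions of exact motif matches on the forward strand."""
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

def site_changing_snps(seq, motif):
    """Single-base substitutions that create or destroy a recognition site."""
    baseline = len(site_positions(seq, motif))
    hits = []
    for i, ref in enumerate(seq):
        for alt in "ACGT":
            if alt == ref:
                continue
            variant = seq[:i] + alt + seq[i + 1:]
            if len(site_positions(variant, motif)) != baseline:
                hits.append((i, ref, alt))
    return hits

seq = "TTGAATTCCAGATTCGGAATCCA"            # toy reference sequence
print("existing sites at:", site_positions(seq, MOTIF))
print("site-changing SNPs:", site_changing_snps(seq, MOTIF))
```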
Abstract:
In this paper, Prony's method is applied to time-domain waveform data modelling in the presence of noise. The following three problems encountered in this work are studied: (1) determination of the order of the waveform; (2) determination of the number of multiple roots; (3) determination of the residues. Methods for solving these problems are given and simulated on a computer. Finally, an output pulse of a model PG-10N signal generator, and the distorted waveform obtained by transmitting this pulse through a piece of coaxial cable, are modelled, and satisfactory results are obtained. The effectiveness of Prony's method for waveform data modelling in the presence of noise is thus confirmed.
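The abstract does not spell out the implementation; as a rough sketch of the standard least-squares form of Prony's method (ignoring the paper's specific treatment of order selection and multiple roots, and using a made-up test signal), one might write:

```python
import numpy as np

def prony(x, p):
    """Least-squares Prony fit: x[n] ~ sum_k amp[k] * poles[k]**n, n = 0..N-1.

    (1) estimate linear-prediction coefficients by least squares,
    (2) root the prediction polynomial to obtain the poles,
    (3) solve a Vandermonde system for the complex amplitudes (residues).
    """
    N = len(x)
    # Step 1: x[n] = -(a_1 x[n-1] + ... + a_p x[n-p]) for n = p..N-1
    A = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # Step 2: poles are the roots of z**p + a_1 z**(p-1) + ... + a_p
    poles = np.roots(np.concatenate(([1.0], a)))
    # Step 3: Vandermonde system V[n, k] = poles[k]**n
    V = np.vander(poles, N, increasing=True).T
    amp, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return poles, amp

# Made-up test signal: two damped sinusoids plus light noise
n = np.arange(200)
x = (np.exp(-0.01 * n) * np.cos(0.3 * n)
     + 0.5 * np.exp(-0.02 * n) * np.cos(0.8 * n + 0.4)
     + 0.01 * np.random.randn(n.size))
poles, amp = prony(x, p=4)                 # model order chosen a priori here
x_hat = (np.vander(poles, n.size, increasing=True).T @ amp).real
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```

In practice the model order p and the handling of repeated poles (the paper's problems (1) and (2)) require more care than this sketch gives them.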
Abstract:
Emergency vehicles use high-amplitude sirens to warn pedestrians and other road users of their presence. Unfortunately, the siren noise enters the vehicle and corrupts the intelligibility of two-way radio voice communications from the emergency vehicle to a control room. Often the siren has to be turned off to enable the control room to hear what is being said, which endangers people's lives. A digital signal processing (DSP) based system for the cancellation of siren noise embedded within speech is presented. The system has been tested with the least mean square (LMS), normalised least mean square (NLMS) and affine projection algorithm (APA) using recordings from three common types of sirens (two-tone, wail and yelp) from actual test vehicles. It was found that the APA with a projection order of 2 gives improved cancellation compared with the LMS and NLMS, with only a moderate increase in algorithm complexity and code size. This siren noise cancellation system using the APA therefore offers an improvement over the cancellation achieved by previous systems. The removal of the siren noise improves the response time for the emergency vehicle and thus the system can contribute to saving lives. The system also allows voice communication to take place even when the siren is on, so the vehicle poses less risk of danger when moving at high speeds in heavy traffic.
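As a hedged illustration of the adaptive-cancellation idea (an NLMS canceller only, not the paper's APA implementation, with synthetic stand-ins for the siren reference and the corrupted speech), a minimal sketch might be:

```python
import numpy as np

def nlms_cancel(primary, reference, order=32, mu=0.5, eps=1e-6):
    """Normalised LMS adaptive noise canceller.

    primary   : microphone signal (speech corrupted by siren leakage)
    reference : siren-only reference signal
    Returns the error signal, i.e. the estimate of the clean speech.
    """
    w = np.zeros(order)
    e = np.zeros(len(primary))
    for n in range(order, len(primary)):
        u = reference[n - order:n][::-1]     # most recent reference samples
        y = w @ u                            # estimate of the siren leakage
        e[n] = primary[n] - y                # speech estimate
        w += mu * e[n] * u / (eps + u @ u)   # normalised LMS weight update
    return e

# Toy example: a two-tone "siren" corrupting a noise-like "speech" signal
fs = 8000
t = np.arange(2 * fs) / fs
siren = np.sin(2 * np.pi * 660 * t) + np.sin(2 * np.pi * 960 * t)
speech = 0.3 * np.random.randn(t.size)       # stand-in for real speech
mic = speech + 0.8 * np.roll(siren, 5)       # siren reaches the mic delayed
clean_est = nlms_cancel(mic, siren)
print("residual power after adaptation:",
      np.mean((clean_est[fs:] - speech[fs:]) ** 2))
```

The APA used in the paper generalises this update by projecting over the last few input vectors (projection order 2 in their tests), which typically converges faster on correlated inputs such as sirens at a modest extra cost.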
Abstract:
Large magnitude explosive eruptions are the result of the rapid and large-scale transport of silicic magma stored in the Earth's crust, but the mechanics of erupting teratonnes of silicic magma remain poorly understood. Here, we demonstrate that the combined effect of local crustal extension and magma chamber overpressure can sustain linear dyke-fed explosive eruptions with mass fluxes in excess of 10^10 kg/s from shallow-seated (4–6 km depth) chambers under moderate extensional stresses. Early eruption column collapse is facilitated, with eruption durations of the order of a few days and intensities at least one order of magnitude greater than those of the largest eruptions of the 20th century. The conditions explored in this study are one way in which high mass eruption rates can be achieved to feed large explosive eruptions. Our results corroborate geological and volcanological evidence from volcano-tectonic complexes such as the Sierra Madre Occidental (Mexico) and the Taupo Volcanic Zone (New Zealand).
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates by employing techniques that can allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
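For readers who want to reproduce the flavour of this analysis, the sketch below runs a standard ADF unit-root test and an Engle-Granger cointegration test on simulated "short" and "long" rates that share a common I(1) trend, with a crude level shift standing in for a structural break. Note that these standard procedures do not allow for a break under the null or alternative, so this is only an illustration of the baseline machinery the paper's break-robust tests extend; the series, break size and seed are all made up.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
n = 500
# Simulated rates sharing a common I(1) trend; the short rate gets a level
# shift halfway through as a crude stand-in for a structural break.
trend = np.cumsum(rng.normal(size=n))
short = (trend + rng.normal(scale=0.5, size=n)
         + np.where(np.arange(n) > n // 2, 2.0, 0.0))
long_ = trend + rng.normal(scale=0.5, size=n)

adf_stat, adf_p, *_ = adfuller(short)          # standard ADF unit-root test, no break
print(f"ADF on short rate: stat={adf_stat:.2f}, p-value={adf_p:.2f}")

eg_stat, eg_p, _ = coint(short, long_)         # Engle-Granger cointegration test
print(f"Engle-Granger: stat={eg_stat:.2f}, p-value={eg_p:.2f}")
```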