38 results for matching
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In all biological processes, proteins and other small molecules interact to carry out their functions, forming transient macromolecular complexes. The interaction of two or more molecules can be described as a docking event. Docking is an important phase in structure-based drug design strategies, as it can be used to simulate protein-ligand interactions. Various programs allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could, in an immersive environment, visualize the molecules participating in the docking process, manipulate their structures and dock them manually before submitting the new conformations to an automated docking process; such an environment can help stimulate the design/docking process and could greatly reduce the time and resources that docking requires. To achieve this, we propose a new virtual modelling/docking program that merges the advantages of virtual modelling programs with the efficiency of the algorithms in existing docking programs.
Abstract:
The frequency responses of two 50 Hz and one 400 Hz induction machines have been measured experimentally over a frequency range of 1 kHz to 400 kHz. This study has shown that the stator impedances of the machines behave like a parallel resonant circuit, and hence have a resonant point at which the input impedance of the machine is at a maximum. This maximum impedance point was found experimentally to be as low as 33 kHz, which is well within the switching frequency range of modern inverter drives. This paper investigates the possibility of exploiting the maximum impedance point of the machine, by taking it into consideration when designing an inverter, in order to minimize ripple currents due to the switching frequency. Minimization of the ripple currents would reduce torque pulsation and losses, increasing overall performance. A modified machine model was developed to take the resonant point into account, and this model was then simulated with an inverter to demonstrate the possible advantages of matching the inverter switching frequency to the resonant point. Finally, in order to verify the simulated results experimentally, a real inverter with a variable switching frequency was used to drive an induction machine. Experimental results are presented.
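As a rough numerical illustration of the resonance described above, the sketch below models the stator's high-frequency behaviour as a parallel RLC circuit and locates the frequency of maximum input impedance; the component values are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

R = 400.0      # ohms, assumed parallel damping resistance
L = 20e-3      # henries, assumed stator inductance
C = 1.2e-9     # farads, assumed parasitic winding capacitance

f = np.linspace(1e3, 400e3, 100_000)   # measured range: 1 kHz to 400 kHz
w = 2 * np.pi * f
# Parallel combination: 1/Z = 1/R + 1/(jwL) + jwC
Z = 1.0 / (1.0 / R + 1.0 / (1j * w * L) + 1j * w * C)

f_res = f[np.argmax(np.abs(Z))]
print(f"Impedance peaks at {f_res/1e3:.1f} kHz")   # ~1/(2*pi*sqrt(L*C)), about 33 kHz here
```

With these assumed values the peak falls near the 33 kHz figure reported in the abstract; choosing the inverter switching frequency at or near this peak minimises the ripple current drawn through the stator.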
Abstract:
An approach to the automatic generation of efficient Field Programmable Gate Array (FPGA) circuits for regular-expression-based (RegEx) pattern matching problems is presented. Using the proposed design strategy, highly area- and time-efficient circuits can be generated automatically for arbitrary sets of regular expressions. This makes the technique suitable for applications that must handle very large sets of patterns at high speed, such as network security and intrusion detection. We have combined several existing techniques to optimise our solution for such domains, and we propose how the whole process of dynamically generating FPGA circuits for RegEx pattern matching can be automated efficiently.
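A common way to understand such circuits is through the bit-parallel view of pattern matching: each pattern position becomes a flip-flop, and every input character shifts and masks the state vector. The sketch below is a software analogue of that idea (the Shift-And algorithm, shown here for a plain string rather than a full regular expression); it illustrates the principle, not the paper's generator.

```python
def shift_and_search(pattern: str, text: str):
    m = len(pattern)
    # One bitmask per character: bit i is set if pattern[i] == c
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    state = 0                      # bit i set <=> pattern[:i+1] ends at the current position
    accept = 1 << (m - 1)
    hits = []
    for pos, c in enumerate(text):
        # The "shift, then AND with a character mask" step is what a one-hot
        # NFA mapped onto FPGA flip-flops computes each clock cycle.
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - m + 1)   # start offset of the match
    return hits

print(shift_and_search("abcab", "xxabcabcabyy"))  # -> [2, 5]
```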
Abstract:
In this paper, we introduce two kinds of graphs: the generalized matching networks (GMNs) and the recursive generalized matching networks (RGMNs). The former generalize the hypercube-like networks (HLNs), while the latter include the generalized cubes and the star graphs. We prove that a GMN on a family of k-connected building graphs is (k+1)-connected. We then prove that a GMN on a family of Hamiltonian-connected building graphs having at least three vertices each is Hamiltonian-connected. Our conclusions generalize some previously known results.
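The connectivity claim can be checked empirically on small examples. The sketch below (my own construction, not code from the paper) joins two copies of a k-connected building graph by a random perfect matching and verifies that the composite graph is (k+1)-connected, using networkx.

```python
import random
import networkx as nx

def matching_network(G1, G2, seed=0):
    """Disjoint union of G1 and G2 plus a random perfect matching between them."""
    H = nx.disjoint_union(G1, G2)           # relabels: G1 -> 0..n-1, G2 -> n..2n-1
    n = G1.number_of_nodes()
    right = list(range(n, 2 * n))
    random.Random(seed).shuffle(right)
    H.add_edges_from(zip(range(n), right))  # one cross edge per vertex
    return H

G = nx.cycle_graph(6)                       # a 2-connected building graph
H = matching_network(G, nx.cycle_graph(6))
print(nx.node_connectivity(G), "->", nx.node_connectivity(H))   # 2 -> 3
```

The hypercube is the familiar special case: Q_n is two copies of Q_{n-1} joined by a perfect matching, and its connectivity rises from n-1 to n accordingly.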
Abstract:
Knowledge management has become a promising method for supporting clinicians' decisions and improving the quality of medical services in a constantly changing clinical environment. However, current medical knowledge management systems cannot understand users' requirements accurately or perform personalized matching. This paper therefore proposes an ontological approach, based on semiotic principles, to personalized medical knowledge matching. In particular, healthcare domain knowledge is conceptualized and an ontology-based user profile is built. Furthermore, the personalized matching mechanism and algorithm are illustrated.
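The abstract does not disclose the matching algorithm itself, so the following is purely a hypothetical illustration of ontology-based profile matching: each knowledge item is scored by the weighted overlap between the ontology concepts in the user's profile and those annotating the item. All names and weights here are invented.

```python
def match_score(profile: dict, item_concepts: set) -> float:
    """profile maps ontology concept -> interest weight; items are concept sets."""
    if not item_concepts:
        return 0.0
    hit = sum(w for c, w in profile.items() if c in item_concepts)
    return hit / sum(profile.values())

profile = {"diabetes": 0.6, "nephropathy": 0.3, "paediatrics": 0.1}
items = {
    "guideline_42": {"diabetes", "nephropathy", "dialysis"},
    "guideline_77": {"oncology", "paediatrics"},
}
ranked = sorted(items, key=lambda k: match_score(profile, items[k]), reverse=True)
print(ranked)   # guideline_42 scores 0.9, guideline_77 scores 0.1
```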
Abstract:
● Background: Plants form the base of the terrestrial food chain and provide medicines, fuel, fibre and industrial materials to humans. Vascular land plants rely on their roots to acquire the water and mineral elements necessary for their survival in nature or their yield and nutritional quality in agriculture. Major biogeochemical fluxes of all elements occur through plant roots, and the roots of agricultural crops have a significant role to play in soil sustainability, carbon sequestration, reducing emissions of greenhouse gases, and in preventing the eutrophication of water bodies associated with the application of mineral fertilisers. ● Scope: This article provides the context for a Special Issue of Annals of Botany on ‘Matching Roots to Their Environment’. It first examines how land plants and their roots evolved, describes how the ecology of roots and their rhizospheres contributes to the acquisition of soil resources, and discusses the influence of plant roots on biogeochemical cycles. It then describes the role of roots in overcoming the constraints to crop production imposed by hostile or infertile soils, illustrates root phenotypes that improve the acquisition of mineral elements and water, and discusses high-throughput methods to screen for these traits in the laboratory, glasshouse and field. Finally, it considers whether knowledge of adaptations improving the acquisition of resources in natural environments can be used to develop root systems for sustainable agriculture in the future.
Abstract:
Understanding the relationships between trait diversity, species diversity and ecosystem functioning is essential for sustainable management. For functions comprising two trophic levels, trait matching between interacting partners should also drive functioning. However, the predictive ability of trait diversity and matching is unclear for most functions, particularly for crop pollination, where interacting partners did not necessarily co-evolve. World-wide, we collected data on traits of flower visitors and crops, visitation rates to crop flowers per insect species and fruit set in 469 fields of 33 crop systems. Through hierarchical mixed-effects models, we tested whether flower visitor trait diversity and/or trait matching between flower visitors and crops improve the prediction of crop fruit set (functioning) beyond flower visitor species diversity and abundance. Flower visitor trait diversity was positively related to fruit set, but surprisingly did not explain more variation than flower visitor species diversity. The best prediction of fruit set was obtained by matching traits of flower visitors (body size and mouthpart length) and crops (nectar accessibility of flowers) in addition to flower visitor abundance, species richness and species evenness. Fruit set increased with species richness, and more so in assemblages with high evenness, indicating that additional species of flower visitors contribute more to crop pollination when species abundances are similar. Synthesis and applications: Despite contrasting floral traits for crops world-wide, only the abundance of a few pollinator species is commonly managed for greater yield. Our results suggest that the identification and enhancement of pollinator species with traits matching those of the focal crop, as well as the enhancement of pollinator richness and evenness, will increase crop yield beyond current practices. Furthermore, we show that field practitioners can predict and manage agroecosystems for pollination services based on knowledge of just a few traits that are known for a wide range of flower visitor species.
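The analysis described above can be sketched as a hierarchical mixed-effects model. The snippet below uses statsmodels on synthetic data; the column names, simulated effects and random-effects structure are assumptions standing in for the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 469-field, 33-crop-system data set.
rng = np.random.default_rng(1)
n = 469
df = pd.DataFrame({
    "crop_system": rng.integers(0, 33, n),
    "abundance": rng.poisson(20, n),
    "richness": rng.integers(1, 15, n),
    "evenness": rng.uniform(0.2, 1.0, n),
    "trait_matching": rng.uniform(0.0, 1.0, n),   # visitor-crop match score
})
df["fruit_set"] = (
    0.2 + 0.01 * df.abundance + 0.02 * df.richness
    + 0.1 * df.evenness + 0.3 * df.trait_matching
    + rng.normal(0, 0.1, n)
)

# Crop system as the grouping factor for random intercepts.
model = smf.mixedlm(
    "fruit_set ~ abundance + richness + evenness + trait_matching",
    data=df, groups=df["crop_system"],
)
print(model.fit().summary())
```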
Abstract:
The ITCT-Lagrangian-2K4 (Intercontinental Transport and Chemical Transformation) experiment was conceived with the aim of quantifying the effects of photochemistry and mixing on the transformation of air masses in the free troposphere away from emissions. To this end, attempts were made to intercept and sample air masses several times during their journey across the North Atlantic using four aircraft based in New Hampshire (USA), Faial (Azores) and Creil (France). This article begins by describing forecasts from two Lagrangian models that were used to direct the aircraft into target air masses. A novel technique then identifies Lagrangian matches between flight segments. Two independent searches are conducted: for Lagrangian model matches and for pairs of whole air samples with matching hydrocarbon fingerprints. The information is filtered further by searching for matching hydrocarbon samples that are linked by matching trajectories. The quality of these "coincident matches" is assessed using temperature, humidity and tracer observations. The technique pulls out five clear Lagrangian cases covering a variety of situations, and these are examined in detail. The matching trajectories and hydrocarbon fingerprints are shown, and the downwind minus upwind differences in tracers are discussed.
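The fingerprint-matching step can be illustrated as follows (the use of log mixing ratios and the correlation threshold are my assumptions, not the paper's criteria): two whole air samples are flagged as a candidate Lagrangian match when their hydrocarbon fingerprints are strongly correlated.

```python
import numpy as np

def fingerprint_match(upwind: np.ndarray, downwind: np.ndarray, r_min=0.95) -> bool:
    """Each argument is a vector of hydrocarbon mixing ratios (same species order)."""
    lu, ld = np.log(upwind), np.log(downwind)
    r = np.corrcoef(lu, ld)[0, 1]
    return r >= r_min

up   = np.array([120.0, 45.0, 18.0, 6.0, 2.5])    # ppt, five NMHC species
down = np.array([100.0, 35.0, 13.0, 4.0, 1.5])    # aged but similar mixture
print(fingerprint_match(up, down))                # True for this pair
```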
Abstract:
An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project, level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5 degrees by 0.5 degrees latitude-longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion-compensated interpolation based on the estimation of motion vectors using a hierarchical block matching scheme. Examples are shown of the various stages in the process. Also shown are examples of the usefulness of this type of data in GCM validation.
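The core of the block matching step can be sketched compactly. The code below implements a single level of sum-of-absolute-differences block matching (the hierarchical scheme repeats this coarse-to-fine on an image pyramid); the block size and search radius here are arbitrary choices, not the paper's settings.

```python
import numpy as np

def block_motion(prev: np.ndarray, curr: np.ndarray, block=8, radius=4):
    """Per-block motion (dy, dx): where each block of `prev` moved to in `curr`,
    chosen to minimise the sum of absolute differences (SAD)."""
    H, W = prev.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = prev[y0:y0 + block, x0:x0 + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (2, 1), axis=(0, 1))     # whole field shifted 2 px down, 1 px right
print(block_motion(a, b)[2, 2])         # interior block recovers [2 1]
```

The recovered vectors are then used to warp adjacent processed images toward the target time, filling regions of missing data.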
Abstract:
Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere, with significant differences in both track density and mean intensity. The systems that correspond between the various reanalyses are considerably reduced, and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15. Also shown are results from using the same techniques to diagnose tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
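A matching criterion of the kind referred to above can be illustrated as follows (the thresholds are placeholders, not the paper's values): two tracks from different reanalyses are taken to correspond when they overlap in time for long enough and their mean great-circle separation over the overlap is small enough; loosening the thresholds is the analogue of "relaxing the matching criteria".

```python
import numpy as np

EARTH_R = 6371.0   # km

def great_circle(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return EARTH_R * np.arccos(np.clip(cosang, -1.0, 1.0))

def tracks_match(tA: dict, tB: dict, max_sep_km=500.0, min_overlap=4) -> bool:
    """tA, tB map time step -> (lat, lon) for a tracked feature."""
    common = sorted(set(tA) & set(tB))
    if len(common) < min_overlap:
        return False
    seps = [great_circle(*tA[t], *tB[t]) for t in common]
    return float(np.mean(seps)) <= max_sep_km
```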
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
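The two performance measures can be written down compactly. The sketch below, under my own simplifications, computes the usual areal overlap score (intersection over union of the wet areas) and the r.m.s. difference between observed and modelled waterline elevations at corresponding points.

```python
import numpy as np

def areal_fit(observed_wet: np.ndarray, modelled_wet: np.ndarray) -> float:
    """Boolean rasters of flood extent; 1.0 means perfect overlap."""
    inter = np.logical_and(observed_wet, modelled_wet).sum()
    union = np.logical_or(observed_wet, modelled_wet).sum()
    return inter / union

def waterline_rmse(obs_heights: np.ndarray, mod_heights: np.ndarray) -> float:
    """Elevations (m) at corresponding points along the two waterlines."""
    return float(np.sqrt(np.mean((obs_heights - mod_heights) ** 2)))
```

In a GLUE-style analysis, each model run is scored with one of these measures, and runs scoring above (or, for the r.m.s. measure, below) a threshold are retained to build the inundation uncertainty map; a more discriminating measure retains fewer runs and yields a tighter map.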
Abstract:
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' cues (stereo and motion parallax) or only 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and the position of the target relative to other objects were varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
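The prediction step rests on the standard reliability-weighted cue-combination rule, in which each cue's weight is proportional to the inverse of its variance and the measured thresholds stand in for the standard deviations. The numbers in the sketch below are illustrative, not data from the study.

```python
def combined_estimate(d_physical, thr_physical, d_texture, thr_texture):
    """Distance change signalled by each cue, weighted by 1/threshold^2."""
    w_p = 1.0 / thr_physical ** 2
    w_t = 1.0 / thr_texture ** 2
    return (w_p * d_physical + w_t * d_texture) / (w_p + w_t)

# Scene doubled in size between intervals: texture-based cues signal "no
# change" (0%) while physical cues signal the true change (100%).
print(combined_estimate(100.0, 10.0, 0.0, 20.0))   # -> 80.0 (% of true change)
```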
Abstract:
Given the non-monotonic form of the radiocarbon calibration curve, the precision of single C-14 dates on the calendar timescale will always be limited. One way around this limitation is through comparison of time-series, which should exhibit the same irregular patterning as the calibration curve. This approach can be employed most directly in the case of wood samples with many years' growth present (but not able to be dated by dendrochronology), where the tree-ring series of unknown date can be compared against the similarly constructed C-14 calibration curve built from known-age wood. This process of curve-fitting has come to be called "wiggle-matching." In this paper, we look at the requirements for getting good precision by this method: sequence length, sampling frequency, and measurement precision. We also look at three case studies: one a piece of wood which has been independently dated by dendrochronology, and two others of unknown age relating to archaeological activity at Silchester, UK (Roman) and at Miletos, Anatolia (associated with the volcanic eruption at Thera).
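The curve-fitting itself can be sketched in a few lines (a simplification of the wiggle-matching idea, not the code used in the paper): slide a floating sequence of C-14 measurements, taken at known ring spacings, along the calibration curve and keep the calendar offset that minimises chi-squared.

```python
import numpy as np

def wiggle_match(cal_years, cal_c14, ring_offsets, sample_c14, sample_err):
    """cal_years/cal_c14: calibration curve (cal_years increasing);
    ring_offsets: ring positions in years relative to the first sampled ring;
    sample_c14/sample_err: measured C-14 ages and their 1-sigma errors."""
    best_year, best_chi2 = None, np.inf
    for start in cal_years:
        expected = np.interp(start + ring_offsets, cal_years, cal_c14)
        chi2 = float(np.sum(((sample_c14 - expected) / sample_err) ** 2))
        if chi2 < best_chi2:
            best_year, best_chi2 = start, chi2
    return best_year, best_chi2
```

Longer sequences, denser sampling and smaller measurement errors all sharpen the chi-squared minimum, which is precisely the trade-off among sequence length, sampling frequency and measurement precision that the paper examines.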