997 results for "matching function"
Abstract:
In all biological processes, protein molecules and other small molecules interact to function, forming transient macromolecular complexes. This interaction of two or more molecules can be described as a docking event. Docking is an important phase in structure-based drug design strategies, as it can be used to simulate protein-ligand interactions. Various programs allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could visualize the molecules participating in the docking process, manipulate their structures and manually dock them in an immersive environment before submitting the new conformations to an automated docking process; this could help stimulate the design/docking process and greatly reduce docking time and resources. To achieve this, we propose a new virtual modelling/docking program that merges the advantages of virtual modelling programs with the efficiency of the algorithms in existing docking programs.
Abstract:
The encoding of goal-oriented motion events varies across languages. Speakers of languages without grammatical aspect (e.g., Swedish) tend to mention motion endpoints when describing events, e.g., “two nuns walk to a house”, and attach importance to event endpoints when matching scenes from memory. Speakers of aspect languages (e.g., English), on the other hand, are more prone to direct attention to the ongoingness of motion events, which is reflected both in their event descriptions, e.g., “two nuns are walking”, and in their non-verbal similarity judgements. This study examines to what extent native speakers of Swedish (n = 82) with English as a foreign language (FL) restructure their categorisation of goal-oriented motion as a function of their English proficiency and experience with the English language (e.g., exposure, learning). Seventeen monolingual native English speakers from the United Kingdom (UK) were recruited for comparison purposes. Data on motion event cognition were collected through a memory-based triads-matching task, in which a target scene with an intermediate degree of endpoint orientation was matched with two alternative scenes with low and high degrees of endpoint orientation, respectively. Results showed that the preference among the Swedish speakers of L2 English to base their similarity judgements on ongoingness rather than event endpoints was correlated with their use of English in everyday life, such that those who often watched television in English approximated the ongoingness preference of the native English speakers. These findings suggest that event cognition patterns may be restructured through exposure to FL audio-visual media. The results thus add to the emerging picture that learning a new language entails learning new ways of observing and reasoning about reality.
Abstract:
This paper describes two solutions for systematic measurement of surface elevation that can be used for both profile and surface reconstructions in quantitative fractography case studies. The first is developed under the Khoros graphical interface environment. It consists of an adaptation of the almost classical area-matching algorithm, which is based on cross-correlation operations, to the well-known method of parallax measurements from stereo pairs. A normalization function was created to avoid false cross-correlation peaks, leading to the true best-matching window at each region analyzed on both stereo projections. Some limitations regarding the use of scanning electron microscopy and the types of surface patterns are also discussed. The second algorithm is based on a spatial correlation function. This solution is implemented in the NIH Image macro programming language, combining a good representation of low-contrast regions with many improvements in overall user interface and performance. Its advantages and limitations are also presented.
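The area-matching step described in this abstract can be illustrated with a short sketch. This is not the Khoros or NIH Image implementation from the paper; it is a minimal normalized cross-correlation matcher (function names such as `best_match` are illustrative), showing how normalization suppresses false correlation peaks caused by local brightness differences between the two stereo projections.

```python
import numpy as np

def normalized_cross_correlation(window, search_region):
    """Slide `window` over `search_region` and return the NCC map.

    Subtracting the mean and dividing by the norms bounds each score
    in [-1, 1], so a bright region cannot masquerade as a good match.
    """
    wh, ww = window.shape
    w = window - window.mean()
    w_norm = np.sqrt((w ** 2).sum())
    rows = search_region.shape[0] - wh + 1
    cols = search_region.shape[1] - ww + 1
    ncc = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = search_region[i:i + wh, j:j + ww]
            p = patch - patch.mean()
            denom = w_norm * np.sqrt((p ** 2).sum())
            ncc[i, j] = (w * p).sum() / denom if denom > 0 else 0.0
    return ncc

def best_match(window, search_region):
    """Return the (row, col) of the best-matching window position."""
    ncc = normalized_cross_correlation(window, search_region)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```

In a stereo-pair setting, the column offset between the matched positions in the two projections gives the parallax, from which elevation follows.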
Abstract:
Clinicians frequently have to decide when dialysis should be initiated and which modality should be used to support kidney function in critically ill patients with acute kidney injury. In most instances, these decisions are made based on the consideration of a variety of factors including patient condition, available resources and prevailing local practice experience. There is a wide variation worldwide in how these factors influence the timing of initiation and the utilization of various modalities. In this article, we review the therapeutic goals of renal support and the relative advantages and shortcomings of different dialysis techniques. We describe strategies for matching the timing of initiation to the choice of modality to individualize renal support in intensive care unit patients. Copyright (C) 2012 S. Karger AG, Basel
Abstract:
This dissertation models the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. To this end, we propose a mechanism under a new market structure which we prefer to call semi-centralization. In chapter 1, we give a brief summary of matching theory, presenting the first examples in matching history together with the most influential papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification has a crucial effect on the results: the new mechanism is, as we call it, a middle mechanism. On one subdomain, this mechanism coincides with the original basic mechanism; on the other partition, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known roommate problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily tells us whether a profile lacks stability, by using purified orderings. Finally, we show a method to find all stable matchings when multiple stable matchings exist: simply run the mechanism for each of the top agents in the social preference.
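The Gale-Shapley algorithm that the refined mechanism coincides with on one subdomain can be sketched as follows. This is the generic textbook deferred-acceptance procedure, not the dissertation's semi-centralized mechanism; the dictionary-based interface is an illustrative assumption, and complete preference lists on both sides are assumed.

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Deferred acceptance: proposers propose in preference order,
    receivers tentatively hold their best offer so far.
    Returns a stable matching as a dict {receiver: proposer}.
    Assumes complete preference lists on both sides."""
    # rank[r][p] = position of proposer p in r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                            # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                  # r was unmatched: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])         # r trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                # rejected; p proposes again later
    return match
```

The proposer-optimal stable matching this produces is one endpoint of the set of stable matchings that the dissertation's mechanism implements under complete information.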
Abstract:
Anaesthesia causes a respiratory impairment, whether the patient is breathing spontaneously or is ventilated mechanically. This impairment impedes the matching of alveolar ventilation and perfusion and thus the oxygenation of arterial blood. A triggering factor is loss of muscle tone, which causes a fall in the resting lung volume, the functional residual capacity. This fall promotes airway closure and gas adsorption, leading eventually to alveolar collapse, that is, atelectasis. The higher the oxygen concentration, the faster the gas is adsorbed and the alveoli collapse. Preoxygenation is a major cause of atelectasis, and continued use of high oxygen concentrations maintains or increases the lung collapse, which typically affects 10% or more of the lung tissue and can exceed 25-40%. Perfusion of the atelectatic tissue causes shunt, and cyclic airway closure creates regions with low ventilation/perfusion ratios, which add to the impaired oxygenation. Ventilation with positive end-expiratory pressure reduces the atelectasis, but oxygenation need not improve, because blood flow shifts down the lung to any remaining atelectatic tissue. Inflation of the lung to an airway pressure of 40 cmH2O recruits almost all collapsed lung, and the lung remains open if ventilation continues with a moderate oxygen concentration (< 40%) but recollapses within a few minutes if ventilation is with 100% oxygen. Severe obesity increases lung collapse, and obstructive lung disease and one-lung anaesthesia increase the mismatch of ventilation and perfusion. CO2 pneumoperitoneum increases atelectasis formation but not shunt, likely explained by enhanced hypoxic pulmonary vasoconstriction caused by CO2. Atelectasis may persist into the postoperative period and contribute to pneumonia.
Abstract:
A common time scale for the EPICA ice cores from Dome C (EDC) and Dronning Maud Land (EDML) has been established. Since the EDML core was not drilled on a dome, the development of the EDML1 time scale for the EPICA ice core drilled in Dronning Maud Land was based on the creation of a detailed stratigraphic link between EDML and EDC, which was dated by a simpler 1D ice-flow model. The synchronisation between the two EPICA ice cores was achieved through the identification of several common volcanic signatures. This paper describes the rigorous method, using the signature of volcanic sulfate, which was employed for the last 52 kyr of the record. We estimated the discrepancies between the modelled EDC and EDML glaciological age scales during the studied period by evaluating the ratio R of the apparent durations of temporal intervals between pairs of isochrones. On average, R ranges between 0.8 and 1.2, corresponding to an uncertainty of up to 20% in the estimate of the time duration in at least one of the two ice cores. Significant deviations of R, up to 1.4-1.5, are observed between 18 and 28 kyr before present (BP, where present is defined as 1950). At this stage our approach does not allow us to determine unequivocally which of the models is affected by errors, but assuming that the thinning function at both sites and the accumulation history at Dome C (which was drilled on a dome) are correct, this anomaly can be ascribed to a complex spatial accumulation variability (which may have differed in the past compared to the present day) upstream of the EDML core.
Abstract:
This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might best be matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also need to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system-relevant knowledge.
Abstract:
A real-time, large-scale part-to-part video matching algorithm, based on the cross-correlation of intensity-of-motion curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions (strada, discrete cosine transform, Marr-Hildreth and radial) should be integrated so that the matching algorithm is invariant against blur, compression and rotation distortions: blur (R, σ) ∈ [1, 20] × [1, 8], compression from 512×512 down to 32×32 pixels², and rotation from 10° to 180°. The DCT hash is invariant against blur and compression down to 64×64 pixels². Nevertheless, although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function: the image selected by the DCT hash should lie at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
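A DCT hash of the kind discussed above can be sketched as follows. This is a generic pHash-style implementation, not the paper's exact hash (the `dct_hash` name and the square-image assumption are illustrative): keeping only the low-frequency corner of the 2-D DCT and thresholding it against its median is what makes such hashes robust to blur and compression, since those distortions mainly perturb high frequencies.

```python
import numpy as np

def dct_hash(image, hash_size=8):
    """Perceptual DCT hash of a square grayscale image: keep the
    low-frequency hash_size x hash_size corner of the 2-D DCT and
    threshold against its median, yielding a 64-bit boolean hash."""
    img = np.asarray(image, dtype=float)
    n = img.shape[0]                      # assumes a square image
    # Orthonormal 1-D DCT-II matrix; the 2-D DCT is D @ img @ D.T
    k = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)
    coeffs = D @ img @ D.T
    low = coeffs[:hash_size, :hash_size]  # low-frequency corner
    return (low > np.median(low)).flatten()

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))
```

Two frames are then declared a match when the Hamming distance between their hashes falls below a chosen threshold, which is where a complementary distance function such as Marr-Hildreth can arbitrate borderline cases.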
Abstract:
NO synthases are widely distributed in the lung and are extensively involved in the control of airway and vascular homeostasis. It is recognized, however, that the O2-rich environment of the lung may predispose NO toward toxicity. These Janus faces of NO are manifest in recent clinical trials with inhaled NO gas, which has shown therapeutic benefit in some patient populations but increased morbidity in others. In the airways and circulation of humans, most NO bioactivity is packaged in the form of S-nitrosothiols (SNOs), which are relatively resistant to toxic reactions with O2/O2−. This finding has led to the proposition that channeling of NO into SNOs may provide a natural defense against lung toxicity. The means to selectively manipulate the SNO pool, however, has not been previously possible. Here we report on a gas, O-nitrosoethanol (ENO), which does not react with O2 or release NO and which markedly increases the concentration of indigenous species of SNO within airway lining fluid. Inhalation of ENO provided immediate relief from hypoxic pulmonary vasoconstriction without affecting systemic hemodynamics. Further, in a porcine model of lung injury, there was no rebound in cardiopulmonary hemodynamics or fall in oxygenation on stopping the drug (as seen with NO gas), and additionally ENO protected against a decline in cardiac output. Our data suggest that SNOs within the lung serve in matching ventilation to perfusion, and can be manipulated for therapeutic gain.
Thus, ENO may be of particular benefit to patients with pulmonary hypertension, hypoxemia, and/or right heart failure, and may offer a new therapeutic approach in disorders such as asthma and cystic fibrosis, where the airways may be depleted of SNOs.
Abstract:
The rate of generation of fluctuations with respect to the scalar values conditioned on the mixture fraction, which significantly affects turbulent nonpremixed combustion processes, is examined. Simulation of the rate in a major mixing model is investigated and the derived equations can assist in selecting the model parameters so that the level of conditional fluctuations is better reproduced by the models. A more general formulation of the multiple mapping conditioning (MMC) model that distinguishes the reference and conditioning variables is suggested. This formulation can be viewed as a methodology of enforcing certain desired conditional properties onto conventional mixing models. Examples of constructing consistent MMC models with dissipation and velocity conditioning as well as of combining MMC with large eddy simulations (LES) are also provided. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
We present new measurements of the luminosity function (LF) of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) and the 2dF SDSS LRG and Quasar (2SLAQ) survey. We have carefully quantified, and corrected for, uncertainties in the K and evolutionary corrections, differences in the colour selection methods, and the effects of photometric errors, thus ensuring we are studying the same galaxy population in both surveys. Using a limited subset of 6326 SDSS LRGs (with 0.17 < z < 0.24) and 1725 2SLAQ LRGs (with 0.5 < z < 0.6), for which the matching colour selection is most reliable, we find no evidence for any additional evolution in the LRG LF, over this redshift range, beyond that expected from a simple passive evolution model. This lack of additional evolution is quantified using the comoving luminosity density of SDSS and 2SLAQ LRGs brighter than M_{0.2r} − 5 log h_{0.7} = −22.5, which are (2.51 ± 0.03) × 10⁻⁷ L⊙ Mpc⁻³ and (2.44 ± 0.15) × 10⁻⁷ L⊙ Mpc⁻³, respectively (< 10 per cent uncertainty). We compare our LFs to the COMBO-17 data and find excellent agreement over the same redshift range. Together, these surveys show no evidence for additional evolution (beyond passive) in the LF of LRGs brighter than M_{0.2r} − 5 log h_{0.7} = −21 (or brighter than ~L*). We test our SDSS and 2SLAQ LFs against a simple 'dry merger' model for the evolution of massive red galaxies and find that at least half of the LRGs at z ≃ 0.2 must already have been well assembled (with more than half their stellar mass) by z ≃ 0.6. This limit is barely consistent with recent results from semi-analytical models of galaxy evolution.
Abstract:
How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of the response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
Abstract:
The aim of this work was to investigate human contrast perception at contrast levels ranging from detection threshold to suprathreshold levels by using psychophysical techniques. The work consists of two major parts: the first deals with contrast matching, and the second with contrast discrimination. The contrast matching technique was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels. However, at suprathreshold contrast levels perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of the factors affecting detection thresholds. Contrast discrimination was studied by measuring contrast increment thresholds, which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. These factors affected contrast detection thresholds and increment thresholds measured at low contrast levels. At high contrast levels, contrast increment thresholds became very similar, so that the effect of these factors decreased. Human contrast perception was modelled by regarding the visual system as a simple image processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways, and the addition of internal neural noise. Detection is mediated by a local matched filter, a weighted replica of the stimulus whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals to be compared in a contrast matching task are first transferred through the early image processing stages mentioned above.
Then they are filtered by a restoring transfer function which compensates for the low-level filtering and limited spatial integration at high contrast levels. Perceived contrasts of the stimuli are equal when the restored responses to the stimuli are equal. According to the model, the signals to be discriminated in a contrast discrimination task first go through the early image processing stages, after which signal dependent noise is added to the matched filter responses. The decision made by the human brain is based on the comparison between the responses of the matched filters to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noises. The model for human contrast perception could accurately describe the results of contrast matching and discrimination in various conditions.
Abstract:
The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve performance over individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images and edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information-gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images across multiple ranked lists obtained by different features. We then study a better-defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks in order to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered.
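The graph-fusion reranking idea can be illustrated with a minimal sketch. This is not the dissertation's mixture Markov model or its learned feature weights; it is a generic random walk with restart over a weighted combination of feature-specific similarity graphs, with illustrative names and fixed fusion weights supplied by the caller.

```python
import numpy as np

def rerank_by_random_walk(similarity_graphs, weights, query_idx,
                          restart=0.15, iters=100):
    """Fuse feature-specific similarity graphs into one transition
    matrix and rerank database images by the stationary distribution
    of a random walk that restarts at the query image.

    similarity_graphs: list of (n, n) nonnegative similarity matrices,
    one per feature; weights: fusion weight per graph."""
    n = similarity_graphs[0].shape[0]
    fused = sum(w * g for w, g in zip(weights, similarity_graphs))
    # Row-normalise to obtain a stochastic transition matrix
    # (assumes every row has positive total similarity).
    P = fused / fused.sum(axis=1, keepdims=True)
    e = np.zeros(n)
    e[query_idx] = 1.0                    # restart vector at the query
    pi = np.full(n, 1.0 / n)              # start from a uniform walk
    for _ in range(iters):
        pi = (1 - restart) * pi @ P + restart * e
    return np.argsort(-pi)                # indices ranked by relevance
```

Images that are strongly connected to the query, directly or through mutually similar neighbours, accumulate probability mass and rise in the ranking, which is the intuition behind walk-based reranking.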
To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast R-CNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting the convolutional features appropriate to the scale of each input object proposal. CRC effectively reuses convolutional features to eliminate a large fraction of negative proposals in a cascaded manner, while maintaining high recall for true objects. Together, the two strategies improve detection accuracy and reduce computational cost.