948 results for: parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy
Abstract:
We report the self-assembly of a new family of hydrophobic, bis(pyridyl) PtII complexes featuring an extended oligophenyleneethynylene-derived π-surface appended with six long (dodecyloxy (2)) or short (methoxy (3)) side groups. Complex 2, containing dodecyloxy chains, forms fibrous assemblies with a slipped arrangement of the monomer units (dPt···Pt = 14 Å) in both nonpolar solvents and the solid state. Dispersion-corrected PM6 calculations suggest that this organization is driven by cooperative π–π, C–H···Cl and π–Pt interactions, which is supported by EXAFS and 2D NMR spectroscopic analysis. In contrast, nearly parallel π-stacks (dPt···Pt = 4.4 Å) stabilized by multiple π–π and C–H···Cl contacts are obtained in the crystalline state for 3, which lacks long side chains, as shown by X-ray analysis and PM6 calculations. Our results reveal not only the key role of alkyl chain length in controlling self-assembly modes but also show the relevance of Pt-bound chlorine ligands as new supramolecular synthons.
Abstract:
A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With the recent development of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.
Abstract:
In urban areas, interchange spacing and the adequacy of design for weaving, merge, and diverge areas can significantly influence available capacity. Traffic microsimulation tools allow detailed analyses of these critical areas in complex locations that often yield results that differ from the generalized approach of the Highway Capacity Manual. In order to obtain valid results, various inputs should be calibrated to local conditions. This project investigated basic calibration factors for the simulation of traffic conditions within an urban freeway merge/diverge environment. By collecting and analyzing urban freeway traffic data from multiple sources, specific Iowa-based calibration factors for use in VISSIM were developed. In particular, a repeatable methodology for collecting standstill distance and headway/time gap data on urban freeways was applied to locations throughout the state of Iowa. This collection process relies on the manual processing of video for standstill distances and individual vehicle data from radar detectors to measure the headways/time gaps. By comparing the data collected from different locations, it was found that standstill distances vary by location and lead-follow vehicle types. Headways and time gaps were found to be consistent within the same driver population and across different driver populations when the conditions were similar. Both standstill distance and headway/time gap were found to follow fairly dispersed and skewed distributions. Therefore, it is recommended that microsimulation models be modified to include the option for standstill distance and headway/time gap to follow distributions as well as be set separately for different vehicle classes. In addition, for the driving behavior parameters that cannot be easily collected, a sensitivity analysis was conducted to examine the impact of these parameters on the capacity of the facility. The sensitivity analysis results can be used as a reference to manually adjust parameters to match the simulation results to the observed traffic conditions. A well-calibrated microsimulation model can enable a higher level of fidelity in modeling traffic behavior and serve to improve decision making in balancing need with investment.
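The recommendation that standstill distance and headway/time gap follow dispersed, skewed distributions rather than single fixed values can be illustrated with a short sketch. The snippet below fits a lognormal distribution to hypothetical time-gap observations and reports the quantities one might feed into a simulation model; the data values and the choice of a lognormal form are illustrative assumptions, not results from this project.

```python
# Sketch: fit a skewed distribution to observed time gaps (illustrative data).
import numpy as np
from scipy import stats

# Hypothetical time-gap observations in seconds (placeholder values).
time_gaps = np.array([0.9, 1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.3, 2.7, 3.5, 4.2])

# Fit a lognormal distribution; floc=0 anchors the lower bound at zero.
shape, loc, scale = stats.lognorm.fit(time_gaps, floc=0)

print(f"median gap  : {scale:.2f} s")
print(f"sigma (log) : {shape:.2f}")
# Percentiles that could parameterise a car-following model's gap distribution.
for p in (10, 50, 90):
    print(f"{p}th percentile: {stats.lognorm.ppf(p / 100, shape, loc, scale):.2f} s")
```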
Abstract:
This paper describes a parallel semi-Lagrangian finite difference approach to the pricing of early exercise Asian options on assets with stochastic volatility. A multigrid procedure is described for the fast iterative solution of the discrete linear complementarity problems that result. The accuracy and performance of this approach are improved considerably by a strike-price-related analytic transformation of asset prices. Asian options are contingent claims with payoffs that depend on the average price of an asset over some time interval. The payoff may depend on this average and a fixed strike price (Fixed Strike Asians) or it may depend on the average and the asset price (Floating Strike Asians). The option may also permit early exercise (American contract) or confine the holder to a fixed exercise date (European contract). The Fixed Strike Asian with early exercise is considered here, where continuous arithmetic averaging has been used. Pricing such an option where the asset price has a stochastic volatility leads to the requirement to solve a tri-variate partial differential inequality in the three state variables of asset price, average price, and volatility (or equivalently, variance). The similarity transformations [6] used with Floating Strike Asian options to reduce the dimensionality of the problem are not applicable to Fixed Strikes, and so the numerical solution of a tri-variate problem is necessary. The computational challenge is to provide accurate solutions sufficiently quickly to support real-time trading activities at a reasonable cost in terms of hardware requirements.
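For orientation, the payoff and state variables involved can be sketched with a simple Monte Carlo valuation of the European (no early exercise) fixed-strike arithmetic Asian call under Heston-type stochastic volatility. All parameter values are illustrative assumptions, and the early-exercise problem treated in the paper requires the PDE/linear-complementarity machinery described above rather than this plain simulation.

```python
# Sketch: Monte Carlo value of a European fixed-strike arithmetic Asian call
# under Heston stochastic volatility (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, T = 100.0, 100.0, 0.05, 1.0                    # spot, strike, rate, maturity
v0, kappa, theta, xi, rho = 0.04, 2.0, 0.04, 0.3, -0.7   # Heston parameters
steps, paths = 252, 50_000
dt = T / steps

S = np.full(paths, S0)
v = np.full(paths, v0)
running_sum = np.zeros(paths)

for _ in range(steps):
    z1 = rng.standard_normal(paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(paths)
    v_pos = np.maximum(v, 0.0)                            # full-truncation scheme
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    running_sum += S

average = running_sum / steps                             # discrete proxy for the continuous average
payoff = np.maximum(average - K, 0.0)                     # fixed-strike Asian call payoff
price = np.exp(-r * T) * payoff.mean()
print(f"MC estimate: {price:.3f}")
```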
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might become congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, these few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, in modern many-core systems cores have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltages.
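A minimal sketch of the dynamic load-balancing idea: a shared work queue lets faster workers automatically pick up more fault batches, so cores running at different speeds stay busy. The queue-based scheme, batch sizes, and the simulate_fault_batch stub are illustrative assumptions, not the SCC implementation described in the thesis.

```python
# Sketch: dynamic load balancing of a fault simulation via a shared work queue.
# Workers pull the next fault batch as soon as they finish, so faster cores do more work.
import multiprocessing as mp

def simulate_fault_batch(batch):
    # Placeholder for injecting each fault and running the test set against it.
    return [fault * 2 for fault in batch]   # dummy "result" per fault

def worker(task_queue, result_queue):
    while True:
        batch = task_queue.get()
        if batch is None:                    # poison pill: no more work
            break
        result_queue.put(simulate_fault_batch(batch))

if __name__ == "__main__":
    faults = list(range(10_000))
    batch_size, n_workers = 100, 4

    task_queue, result_queue = mp.Queue(), mp.Queue()
    for i in range(0, len(faults), batch_size):
        task_queue.put(faults[i:i + batch_size])
    for _ in range(n_workers):
        task_queue.put(None)

    procs = [mp.Process(target=worker, args=(task_queue, result_queue))
             for _ in range(n_workers)]
    for p in procs:
        p.start()

    n_batches = -(-len(faults) // batch_size)          # ceiling division
    results = [result_queue.get() for _ in range(n_batches)]
    for p in procs:
        p.join()
    print(f"processed {sum(len(r) for r in results)} faults")
```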
Abstract:
In many areas of simulation, a crucial component of efficient numerical computation is the use of solution-driven adaptive features: locally adapted meshing or re-meshing, and dynamically changing computational tasks. The full advantages of high performance computing (HPC) technology can thus only be exploited when efficient parallel adaptive solvers can be realised. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project has been initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, although codes using Finite Volume formulations will also be able to make use of the project results.
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
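A minimal sketch of the classification setup described above: test cases are represented by event-ID features (here simple bag-of-event-ID counts, one illustrative choice among the several feature types mentioned) and a logistic regression model is trained on a labelled subset. The feature construction and the toy data are hypothetical placeholders, not the thesis's actual pipeline.

```python
# Sketch: logistic-regression feasibility classifier over event-ID features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical test cases: each is a sequence of event IDs; label 1 = feasible.
test_cases = [
    "e12 e07 e33 e02", "e12 e44 e33", "e99 e07 e02", "e12 e07 e02",
    "e99 e44 e33 e02", "e12 e33 e02", "e99 e07 e44", "e12 e07 e44 e33",
]
labels = [1, 0, 0, 1, 0, 1, 0, 1]

# Bag-of-event-ID counts as features (one of several possible encodings).
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(test_cases)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("confusion matrix:\n", confusion_matrix(y_test, clf.predict(X_test)))
```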
Abstract:
Proper organ patterning depends on a tight coordination between cell proliferation and differentiation. The patterning of the Drosophila retina occurs both very fast and with high precision. This process is driven by the dynamic changes in signaling activity of the conserved Hedgehog (Hh) pathway, which coordinates cell fate determination, cell cycle and tissue morphogenesis. Here we show that during Drosophila retinogenesis, the retinal determination gene dachshund (dac) is not only a target of the Hh signaling pathway, but is also a modulator of its activity. Using developmental genetics techniques, we demonstrate that dac enhances Hh signaling by promoting the accumulation of the Gli transcription factor Cubitus interruptus (Ci) parallel to or downstream of fused. In the absence of dac, all Hh-mediated events associated with the morphogenetic furrow are delayed. One of the consequences is that, posterior to the furrow, dac- cells cannot activate a Roadkill-Cullin3 negative feedback loop that attenuates Hh signaling and which is necessary for retinal cells to continue normal differentiation. Therefore, dac is part of an essential positive feedback loop in the Hh pathway, guaranteeing the speed and the accuracy of Drosophila retinogenesis.
Abstract:
Background Physical activity in children with intellectual disabilities is a neglected area of study, which is most apparent in relation to physical activity measurement research. Although objective measures, specifically accelerometers, are widely used in research involving children with intellectual disabilities, existing research is based on measurement methods and data interpretation techniques generalised from typically developing children. However, due to physiological and biomechanical differences between these populations, questions have been raised in the existing literature on the validity of generalising data interpretation techniques from typically developing children to children with intellectual disabilities. Therefore, there is a need to conduct population-specific measurement research for children with intellectual disabilities and develop valid methods to interpret accelerometer data, which will increase our understanding of physical activity in this population. Methods Study 1: A systematic review was initially conducted to increase the knowledge base on how accelerometers were used within existing physical activity research involving children with intellectual disabilities and to identify important areas for future research. A systematic search strategy was used to identify relevant articles which used accelerometry-based monitors to quantify activity levels in ambulatory children with intellectual disabilities. Based on best practice guidelines, a novel form was developed to extract data based on 17 research components of accelerometer use. Accelerometer use in relation to best practice guidelines was calculated using percentage scores on a study-by-study and component-by-component basis. Study 2: To investigate the effect of data interpretation methods on the estimation of physical activity intensity in children with intellectual disabilities, a secondary data analysis was conducted. Nine existing sets of child-specific ActiGraph intensity cut points were applied to accelerometer data collected from 10 children with intellectual disabilities during an activity session. Four one-way repeated measures ANOVAs were used to examine differences in estimated time spent in sedentary, moderate, vigorous, and moderate to vigorous intensity activity. Post-hoc pairwise comparisons with Bonferroni adjustments were additionally used to identify where significant differences occurred. Study 3: The feasibility of a laboratory-based calibration protocol developed for typically developing children was investigated in children with intellectual disabilities. Specifically, the feasibility of activities, measurements, and recruitment was investigated. Five children with intellectual disabilities and five typically developing children participated in 14 treadmill-based and free-living activities. In addition, resting energy expenditure was measured and a treadmill-based graded exercise test was used to assess cardiorespiratory fitness. Breath-by-breath respiratory gas exchange and accelerometry were continually measured during all activities. Feasibility was assessed using observations, activity completion rates, and respiratory data. Study 4: Thirty-six children with intellectual disabilities participated in a semi-structured school-based physical activity session to calibrate accelerometry for the estimation of physical activity intensity. Participants wore a hip-mounted ActiGraph wGT3X+ accelerometer, with direct observation (SOFIT) used as the criterion measure.
Receiver operating characteristic curve analyses were conducted to determine the optimal accelerometer cut points for sedentary, moderate, and vigorous intensity physical activity. Study 5: To cross-validate the calibrated cut points and compare classification accuracy with existing cut points developed in typically developing children, a sub-sample of 14 children with intellectual disabilities who participated in the school-based sessions, as described in Study 4, was included in this study. To examine the validity, classification agreement was investigated between the criterion measure of SOFIT and each set of cut points using sensitivity, specificity, total agreement, and Cohen's kappa scores. Results Study 1: Ten full text articles were included in this review. The percentage of review criteria met ranged from 12%−47%. Various methods of accelerometer use were reported, with most use decisions not based on population-specific research. A lack of measurement research, specifically the calibration/validation of accelerometers for children with intellectual disabilities, is limiting the ability of researchers to make appropriate and valid accelerometer use decisions. Study 2: The choice of cut points had significant and clinically meaningful effects on the estimation of physical activity intensity and sedentary behaviour. For the 71-minute session, estimations for time spent in each intensity between cut points ranged from: sedentary = 9.50 (± 4.97) to 31.90 (± 6.77) minutes; moderate = 8.10 (± 4.07) to 40.40 (± 5.74) minutes; vigorous = 0.00 (± .00) to 17.40 (± 6.54) minutes; and moderate to vigorous = 8.80 (± 4.64) to 46.50 (± 6.02) minutes. Study 3: All typically developing participants and one participant with intellectual disabilities completed the protocol. No participant met the maximal criteria for the graded exercise test or attained a steady state during the resting measurements. Limitations were identified with the usability of respiratory gas exchange equipment and the validity of measurements. The school-based recruitment strategy was not effective, with a participation rate of 6%. Therefore, a laboratory-based calibration protocol was not feasible for children with intellectual disabilities. Study 4: The optimal vertical axis cut points (cpm) were ≤ 507 (sedentary), 1008−2300 (moderate), and ≥ 2301 (vigorous). Sensitivity scores ranged from 81−88%, specificity 81−85%, and AUC .87−.94. The optimal vector magnitude cut points (cpm) were ≤ 1863 (sedentary), ≥ 2610 (moderate) and ≥ 4215 (vigorous). Sensitivity scores ranged from 80−86%, specificity 77−82%, and AUC .86−.92. Therefore, the vertical axis cut points provide a higher level of accuracy in comparison to the vector magnitude cut points. Study 5: Substantial to excellent classification agreement was found for the calibrated cut points. The calibrated sedentary cut point (κ = .66) provided comparable classification agreement with existing cut points (κ = .55−.67). However, the existing moderate and vigorous cut points demonstrated low sensitivity (0.33−33.33% and 1.33−53.00%, respectively) and disproportionately high specificity (75.44−98.12% and 94.61−100.00%, respectively), indicating that cut points developed in typically developing children are too high to accurately classify physical activity intensity in children with intellectual disabilities.
Conclusions The studies reported in this thesis are the first to calibrate and validate accelerometry for the estimation of physical activity intensity in children with intellectual disabilities. In comparison with typically developing children, children with intellectual disabilities require lower cut points for the classification of moderate and vigorous intensity activity. Therefore, generalising existing cut points to children with intellectual disabilities will underestimate physical activity and introduce systematic measurement error, which could be a contributing factor to the low levels of physical activity reported for children with intellectual disabilities in previous research.
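The cut-point derivation in Study 4 can be sketched as a receiver operating characteristic analysis: for each candidate count threshold, sensitivity and specificity against the criterion measure are computed, and the threshold maximising an optimality criterion is kept. The epoch data below are illustrative placeholders, and Youden's index is one common criterion; the thesis may have used a different one.

```python
# Sketch: choosing an accelerometer cut point (counts per minute) via ROC analysis.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative epoch-level data: counts per minute and criterion labels
# (1 = at least moderate intensity by direct observation, 0 = below moderate).
rng = np.random.default_rng(1)
cpm_low = rng.normal(600, 300, 200).clip(0)      # below-moderate epochs
cpm_high = rng.normal(2600, 700, 200).clip(0)    # moderate-or-above epochs
counts = np.concatenate([cpm_low, cpm_high])
labels = np.concatenate([np.zeros(200), np.ones(200)])

fpr, tpr, thresholds = roc_curve(labels, counts)
youden = tpr - fpr                               # Youden's J = sensitivity + specificity - 1
best = np.argmax(youden)

print(f"AUC               : {roc_auc_score(labels, counts):.2f}")
print(f"optimal cut point : {thresholds[best]:.0f} cpm")
print(f"sensitivity       : {tpr[best]:.2f}")
print(f"specificity       : {1 - fpr[best]:.2f}")
```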
Abstract:
The main goal of the LISA Pathfinder (LPF) mission is to estimate the acceleration noise models of the overall LISA Technology Package (LTP) experiment on board. This will be of crucial importance for future space-based Gravitational-Wave (GW) detectors, such as eLISA. Here, we present the Bayesian analysis framework used to process the planned system identification experiments designed for that purpose. In particular, we focus on the analysis strategies to predict the accuracy of the parameters that describe the system in all degrees of freedom. The data sets were generated during the latest operational simulations organised by the data analysis team, and this work is part of the LTPDA Matlab toolbox.
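The kind of accuracy prediction described above can be illustrated with a toy Bayesian fit: a one-parameter linear system is driven by a known injected stimulus, and the width of the parameter's posterior gives the expected estimation accuracy for a given noise level. The model, noise level, and grid-based posterior are illustrative assumptions, far simpler than the multi-degree-of-freedom LTP system identification experiments (which are processed with the LTPDA Matlab toolbox rather than the Python used here).

```python
# Sketch: predicting parameter accuracy from a toy system-identification experiment.
# A single stiffness-like parameter k couples a known injected signal x(t) to the
# measured output y(t) = k * x(t) + noise; the posterior over k gives its accuracy.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1000, 2000)
x = np.sin(2 * np.pi * 0.01 * t)               # injected stimulus (known)
k_true, sigma = 1.3e-6, 5e-7                   # illustrative parameter and noise level
y = k_true * x + rng.normal(0, sigma, t.size)  # simulated measurement

# Grid posterior under a flat prior and a Gaussian noise likelihood.
k_grid = np.linspace(0.5e-6, 2.0e-6, 2001)
log_like = np.array([-0.5 * np.sum((y - k * x) ** 2) / sigma**2 for k in k_grid])
post = np.exp(log_like - log_like.max())

dk = k_grid[1] - k_grid[0]
post /= post.sum() * dk                        # normalise the posterior density
k_mean = np.sum(k_grid * post) * dk
k_std = np.sqrt(np.sum((k_grid - k_mean) ** 2 * post) * dk)
print(f"posterior: k = {k_mean:.3e} +/- {k_std:.1e}")
```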
Abstract:
Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Current robotic hands designed for these environments are very difficult and costly to manufacture. Therefore, a robotic hand with a simple architecture and cheap fabrication techniques is needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand. The SUR Hand is a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands, yet it maintains much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.
Abstract:
Complete and transparent reporting of key elements of diagnostic accuracy studies for infectious diseases in cultured and wild aquatic animals benefits end-users of these tests, enabling the rational design of surveillance programs, the assessment of test results from clinical cases and comparisons of diagnostic test performance. Based on deficiencies in the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines identified in a prior finfish study (Gardner et al. 2014), we adapted the Standards for Reporting of Animal Diagnostic Accuracy Studies—paratuberculosis (STRADAS-paraTB) checklist of 25 reporting items to increase their relevance to finfish, amphibians, molluscs, and crustaceans and provided examples and explanations for each item. The checklist, known as STRADAS-aquatic, was developed and refined by an expert group of 14 transdisciplinary scientists with experience in test evaluation studies using field and experimental samples, in operation of reference laboratories for aquatic animal pathogens, and in development of international aquatic animal health policy. The main changes to the STRADAS-paraTB checklist were to nomenclature related to the species, the addition of guidelines for experimental challenge studies, and the designation of some items as relevant only to experimental studies and ante-mortem tests. We believe that adoption of these guidelines will improve reporting of primary studies of test accuracy for aquatic animal diseases and facilitate assessment of their fitness-for-purpose. Given the importance of diagnostic tests to underpin the Sanitary and Phytosanitary agreement of the World Trade Organization, the principles outlined in this paper should be applied to other World Organisation for Animal Health (OIE)-relevant species.
Abstract:
Aquaculture has expanded rapidly in recent decades to become a major commercial and food-producing sector worldwide. In parallel, viral diseases have spread rapidly among farms, causing enormous economic losses. The accurate detection of pathogens at early stages of infection is a key point for disease control in aquaculture. Spring Viraemia of Carp Virus (SVCV) is a very severe pathogen of carp fishes in different parts of the world and is categorized as a reportable listed disease in the annually published list of the World Organisation for Animal Health (OIE). The objective of this study was to develop and evaluate an RT-PCR test for detecting SVC virus and to assess the sensitivity and specificity of this test. A semi-nested RT-PCR was designed using a combination of three primers: two external primers (SVCF, SVCR) and one internal primer (SVCS), based on a conserved region of the G gene. The specificity of the designed primers (external ones only) was confirmed by testing against Viral Hemorrhagic Septicemia Virus (VHSV) and Infectious Hematopoietic Necrosis Virus (IHNV). To optimize the PCR test, primer concentration, primer annealing temperature, cycle number, and MgCl2 concentration were surveyed. In addition, to validate the test, prevent false negatives, and ensure its accuracy, a competitive internal control (mimic) was designed and its suitable concentration was defined. The sensitivity of the designed test was first evaluated by comparing different commercially available RNA isolation protocols: two isothiocyanate–phenol–chloroform based protocols (RNX-Plus, Iran; IQ2000 kit, Taiwan) and two column-based protocols (Cinna Pure RNA, Iran; High Pure Viral RNA kit, Roche, Germany). The results indicated that the column-based protocols (Roche and Cinna Pure) yielded RNA concentrations of 36.77 ng/μl and 16.47 ng/μl, respectively, which were significantly higher than the other protocols (P<0.05). Then, to evaluate sensitivity on the extracted RNA, serial dilutions of SVCV strain 56.70 grown in EPC cells (1.9×10⁵ TCID50/ml) were examined. Extracted RNA from the serial dilutions was tested simultaneously with Stone's primers and the commercial IQ-2000 kit. The results indicated that the designed semi-nested RT-PCR was able to recognize SVC virus down to the 10⁻⁴ dilution and Stone's primers down to the 10⁻³ dilution, whereas the IQ-2000 commercial kit did not detect the virus at any dilution. At high virus titers, the designed test produced two DNA bands (462 bp and 266 bp); as the virus titer decreased, the 462 bp band disappeared. At low virus titers, or in the absence of virus, only the 729 bp (mimic) DNA band was amplified. After designing and optimizing the PCR test, a total of 400 suspected cultured Cyprinus carpio showing high mortality were collected from four aquaculture zones of Khuzestan province and tested for SVCV during 2012-2013 using the developed PCR method and IQ-2000. The results indicated that SVC virus was not observed in the samples using either method.
Abstract:
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. In this thesis we show that historical user interaction data can aid in improving the accuracy or efficiency of each of the steps of the web search evaluation pipeline. As a result of these improvements, the overall efficiency of the entire evaluation pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represents the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine’s query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes that pass the offline evaluation step will be rejected after the online evaluation step. As a result, this would allow us to achieve a higher efficiency of the entire evaluation pipeline. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments, and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study using datasets of interleaving experiments performed both in document and image search domains demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time for the interleaving experiments. We adapt two sequential tests for interleaving experimentation.
We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine. These experiments demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
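The sequential-testing idea can be sketched as a simple sequential probability ratio test over interleaving outcomes: each interleaved query session yields a "win" for the treatment or the control, and the experiment stops as soon as the accumulated evidence crosses a decision threshold instead of waiting for a fixed sample size. The binomial SPRT below, with its particular hypotheses and error rates, is an illustrative stand-in for the specific sequential tests adapted in the thesis.

```python
# Sketch: binomial SPRT stopping rule for an interleaving experiment.
import math
import random

p0, p1 = 0.5, 0.53          # H0: no preference; H1: treatment wins 53% of sessions
alpha, beta = 0.05, 0.05    # target type-I and type-II error rates
upper = math.log((1 - beta) / alpha)    # accept H1 when the LLR exceeds this
lower = math.log(beta / (1 - alpha))    # accept H0 when the LLR falls below this

random.seed(3)
true_win_rate = 0.55        # simulated "real" preference for the treatment
llr, n = 0.0, 0

while lower < llr < upper:
    win = random.random() < true_win_rate          # outcome of one interleaved session
    if win:
        llr += math.log(p1 / p0)
    else:
        llr += math.log((1 - p1) / (1 - p0))
    n += 1

decision = "treatment preferred" if llr >= upper else "no detectable preference"
print(f"stopped after {n} sessions: {decision}")
```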
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site for the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, and many VS methods do not take this relevant fact into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on massively parallel Graphics Processing Units, allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design, and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. To address this problem, we propose a novel approach in which neural networks are trained on databases of known active compounds (drugs) and inactive compounds, and are later used to improve VS predictions.
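As a sketch of that final idea, the snippet below trains a small neural network on descriptor vectors of known active and inactive compounds and then uses its predicted probabilities to re-rank screening candidates. The descriptors, network size, and re-scoring scheme are illustrative assumptions rather than the approach actually coupled to BINDSURF.

```python
# Sketch: re-scoring virtual screening candidates with a small neural network
# trained on known actives vs. inactives (illustrative random descriptors).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_compounds, n_descriptors = 2000, 32

# Hypothetical molecular descriptors; actives are shifted to be weakly separable.
X = rng.normal(0, 1, (n_compounds, n_descriptors))
y = rng.integers(0, 2, n_compounds)            # 1 = known active, 0 = inactive
X[y == 1] += 0.4

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Predicted activity probabilities can be blended with, or replace, the docking score.
probs = net.predict_proba(X_test)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_test, probs):.2f}")
ranking = np.argsort(-probs)                   # compounds re-ranked by predicted activity
print("top 5 re-ranked test compounds:", ranking[:5])
```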