961 results for Fast Algorithm
Abstract:
This study demonstrates the possibilities offered by modern ultra-high-performance supercritical fluid chromatography combined with tandem mass spectrometry in doping control analysis. A high-throughput screening method was developed for 100 substances belonging to the challenging classes of anabolic agents, hormones and metabolic modulators, synthetic cannabinoids, and glucocorticoids, which must be detected at low concentrations in urine. To selectively extract these doping agents from urine, a supported liquid extraction procedure was implemented in a 48-well plate format. At the tested concentration levels, ranging from 0.5 to 5 ng/mL, recoveries were better than 70% for 48-68% of the compounds and higher than 50% for 83-87% of the tested substances. Owing to the numerous interferences related to steroid isomers and to ions produced by water loss in the electrospray source, the choice of SFC separation conditions was very challenging. After careful optimization, a Diol stationary phase was employed. The total analysis time for the screening assay was only 8 min, and interferences as well as susceptibility to matrix effects (ME) were minimized. With the developed method, about 70% of the compounds had a relative ME within ±20% at concentrations of 1 and 5 ng/mL. Finally, the limits of detection achieved with this strategy, which includes a 5-fold preconcentration, were below 0.1 ng/mL for the majority of the tested compounds. The LODs were therefore systematically better than the minimum required performance levels established by the World Anti-Doping Agency, except for a very few metabolites.
Abstract:
Trastuzumab (Herceptin®, Roche) has been approved in the UK for the treatment of metastatic breast cancer since 2001. From 2005, concomitantly with the publication of three studies showing that it produces a 50% reduction in breast cancer recurrence rates, trastuzumab began to be prescribed in the early adjuvant treatment of this disease. Until June 2006, however, trastuzumab had neither 1) regulatory approval nor 2) a NICE [National Institute for Health and Clinical Excellence] recommendation for use in the early stages of breast cancer. During that period, the use of trastuzumab in those patients was not reimbursed, and because the cost of trastuzumab is roughly equal to the average yearly income in the UK, most patients could not self-fund their treatment. Before the publication of the final NICE guidance, the new data on trastuzumab in early breast cancer raised enormous patient and professional interest and expectations. A great volume of public and professional pressure was generated to transcend a system in which Primary Care Trusts can reimburse a treatment only after formal guidance has been issued. This paper draws on a case study describing and analyzing the process by which regulatory approval and the NICE recommendation were achieved in record time and how trastuzumab became a standard treatment in early adjuvant breast cancer. According to the data gathered in this work, this was one of the fastest adoptions of a health care technology since the creation of NICE in 1999. The study addresses the following research question: how and why does the adoption pattern of trastuzumab differ from the rational decision-making model of the reimbursement process in the UK? [Author, p. 4]
Abstract:
Gastrointestinal bleeding represents a common medical emergency, with considerable morbidity and mortality rates, and a prompt diagnosis is essential for a better prognosis. In such a context, endoscopy is the main diagnostic tool; however, in cases where the gastrointestinal hemorrhage is massive, the exact bleeding site might go undetected. In addition, a trained professional is not always present to perform the procedure. In the emergency setting, optical colonoscopy is limited by the absence of bowel preparation, so most of the small bowel cannot be assessed. Scintigraphy cannot accurately demonstrate the anatomic location of the bleeding and is not available in emergency settings. The use of capsule endoscopy is inappropriate in the acute setting, particularly in the emergency department at night, and it is a highly expensive method. Digital angiography, despite its high sensitivity, is invasive and carries catheterization-related risks, in addition to its low availability in emergency settings. On the other hand, computed tomography angiography is fast, widely available and minimally invasive, emerging as a promising method in the diagnostic algorithm for these patients, capable of determining the location and cause of bleeding with high accuracy. Based on a critical literature review and on their own experience, the authors propose a computed tomography angiography protocol for assessing patients with gastrointestinal bleeding.
Abstract:
Identification of chemical compounds with specific biological activities is an important step in both chemical biology and drug discovery. When the structure of the intended target is available, one approach is to use molecular docking programs to assess the chemical complementarity of small molecules with the target; such calculations provide a qualitative measure of affinity that can be used in virtual screening (VS) to rank-order a list of compounds according to their potential to be active. rDock is a molecular docking program developed at Vernalis for high-throughput VS (HTVS) applications. Evolved from RiboDock, the program can be used against proteins and nucleic acids, is designed to be computationally very efficient, and allows the user to incorporate additional constraints and information as a bias to guide docking. This article provides an overview of the program's structure and features and compares rDock to two reference programs, AutoDock Vina (open source) and Schrödinger's Glide (commercial). In terms of computational speed for VS, rDock is faster than Vina and comparable to Glide. For binding-mode prediction, rDock and Vina are superior to Glide. The VS performance of rDock is significantly better than that of Vina but inferior to that of Glide for most systems, unless pharmacophore constraints are used, in which case rDock and Glide perform equally well. The program is released under the Lesser General Public License and is freely available for download, together with the manuals, example files and the complete test sets, at http://rdock.sourceforge.net/
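To illustrate the rank-ordering step of virtual screening described above, the following is a minimal sketch in Python. The compound names, score values and the assumption that more negative docking scores indicate better predicted binding are illustrative; rDock itself writes results to SD files from which such a (name, score) table would be extracted.

```python
# Minimal sketch of rank-ordering compounds by docking score in a VS workflow.
# The scores below are hypothetical, not output from any specific program.

from typing import List, Tuple

def rank_compounds(scored: List[Tuple[str, float]], top_n: int = 5) -> List[Tuple[str, float]]:
    """Order compounds by docking score (more negative = better predicted binding)."""
    return sorted(scored, key=lambda pair: pair[1])[:top_n]

if __name__ == "__main__":
    # Hypothetical docking scores for a small compound library.
    library = [("cmpd_001", -28.4), ("cmpd_002", -15.1),
               ("cmpd_003", -31.7), ("cmpd_004", -22.9)]
    for name, score in rank_compounds(library, top_n=3):
        print(f"{name}\t{score:.1f}")
```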
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method uses two extended Kalman filters (EKF). The first estimates the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and maintains the poses of the registered scans. The raw data from the sensors are processed and fused on-line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
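As a reminder of the predict/update cycle underlying such filters, here is a minimal EKF sketch for a planar pose [x, y, yaw]. The motion model, noise covariances and the direct pose observation (standing in for a scan-matching result) are illustrative assumptions, not the filter equations of the cited work.

```python
# Minimal extended Kalman filter sketch for a planar pose [x, y, yaw].
# Models and noise levels are illustrative assumptions.

import numpy as np

def predict(mu, P, u, Q):
    """Dead-reckoning prediction: u = [dx, dy, dyaw] in the body frame."""
    x, y, yaw = mu
    c, s = np.cos(yaw), np.sin(yaw)
    mu_pred = np.array([x + c * u[0] - s * u[1],
                        y + s * u[0] + c * u[1],
                        yaw + u[2]])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],
                  [0.0, 1.0,  c * u[0] - s * u[1]],
                  [0.0, 0.0,  1.0]])
    return mu_pred, F @ P @ F.T + Q

def update(mu, P, z, R):
    """Correction with a pose observation z (e.g. from scan matching)."""
    H = np.eye(3)                      # direct observation of the pose
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu + K @ (z - H @ mu)
    P_new = (np.eye(3) - K @ H) @ P
    return mu_new, P_new

mu, P = np.zeros(3), np.eye(3) * 0.01
mu, P = predict(mu, P, u=np.array([1.0, 0.0, 0.05]), Q=np.eye(3) * 0.02)
mu, P = update(mu, P, z=np.array([0.95, 0.02, 0.04]), R=np.eye(3) * 0.05)
print(mu)
```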
Abstract:
Here we discuss two consecutive MERLIN observations of the X-ray binary LS I +61° 303. The first observation shows a double-sided jet extending up to about 200 AU on both sides of a central source. The jet shows a bent, S-shaped structure similar to the one displayed by the well-known precessing jet of SS 433. The precession suggested in the first MERLIN image becomes evident in the second one, which shows a one-sided bent jet significantly rotated with respect to the jet of the day before. We conclude that the derived precession of the relativistic (beta = 0.6) jet explains puzzling previous VLBI results. Moreover, the fact that the precession is fast could explain the hitherto unexplained short-term (days) variability of the associated gamma-ray source 2CG 135+01 / 3EG J0241+6103.
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A preliminary analysis of the seed pixels permits adjusting the homogeneity criterion to each region's characteristics during the growing process. Since the high variability of regions in outdoor scenes renders the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
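For readers unfamiliar with seed-driven region growing, the following is a minimal Python sketch of the basic mechanism. The fixed-threshold homogeneity test is a deliberate simplification; the paper instead adapts the criterion to each region from a prior analysis of the seed pixels.

```python
# Minimal sketch of seeded region growing on a grayscale image.
# The fixed threshold is an illustrative simplification.

from collections import deque
import numpy as np

def grow_region(img, seed, thresh=10.0):
    """Grow a region from `seed` while pixels stay close to the region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                # Homogeneity test against the running region mean.
                if abs(float(img[rr, cc]) - total / count) < thresh:
                    mask[rr, cc] = True
                    total += float(img[rr, cc])
                    count += 1
                    queue.append((rr, cc))
    return mask

img = np.random.default_rng(0).integers(0, 255, (64, 64)).astype(np.uint8)
print(grow_region(img, seed=(32, 32)).sum(), "pixels in the grown region")
```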
Abstract:
In this paper the authors propose a new closed-contour descriptor that can be seen as a feature extractor for closed contours based on the Discrete Hartley Transform (DHT). Its main characteristic is that it uses only half of the coefficients required by Elliptical Fourier Descriptors (EFD) to obtain a contour approximation with a similar error measure. The proposed closed-contour descriptor provides an excellent capability for information compression, useful for a great number of AI applications. Moreover, it can provide scale, position and rotation invariance, and, last but not least, it has the advantage that both the parameterization and the shape reconstructed from the compressed set can be computed very efficiently by the fast DHT algorithm. This feature extractor can be useful when the application calls for reversible features and when the user needs an easy measure of the quality for a given level of compression, scalable from low to very high quality.
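As an illustration of the general idea, the sketch below computes a Hartley-based descriptor of a closed contour: the DHT of the x and y coordinate sequences is truncated to a few low-frequency coefficients and then inverted. It relies on the standard facts that the DHT of a real sequence is real, can be obtained from the FFT, and is its own inverse up to a 1/N factor; it is not the exact parameterization proposed in the paper.

```python
# Hartley-based contour compression sketch (illustrative, not the paper's scheme).

import numpy as np

def dht(x):
    """Discrete Hartley Transform of a real sequence via the FFT."""
    X = np.fft.fft(x)
    return X.real - X.imag          # H[k] = sum_n x[n] * cas(2*pi*k*n/N)

def idht(H):
    """Inverse DHT (the DHT is an involution up to a 1/N factor)."""
    return dht(H) / len(H)

def compress_contour(coords, keep=16):
    """Keep the `keep` lowest-frequency DHT coefficients of each coordinate."""
    out = []
    for seq in coords.T:            # coords has shape (N, 2): columns are x, y
        H = dht(seq)
        H[keep:len(H) - keep] = 0.0 # discard high-frequency content
        out.append(idht(H))
    return np.stack(out, axis=1)

t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
contour = np.stack([np.cos(t) + 0.2 * np.cos(5 * t),
                    np.sin(t) + 0.2 * np.sin(5 * t)], axis=1)
approx = compress_contour(contour, keep=16)
print("max reconstruction error:", np.abs(approx - contour).max())
```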
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
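The following is a minimal sketch of the general family of methods the thesis builds on: learning a linear scoring function from pairwise preferences with regularized least squares, where each preference "i is ranked above j" becomes a training example (x_i - x_j, +1). The data, regularization value and the linear (non-kernelized) formulation are illustrative assumptions, not the thesis's specific algorithms.

```python
# Regularized least-squares preference learning on pairwise difference features
# (illustrative linear sketch).

import numpy as np

def fit_preferences(X, pairs, lam=1.0):
    """X: (n, d) feature matrix; pairs: list of (winner, loser) index tuples."""
    D = np.array([X[i] - X[j] for i, j in pairs])     # difference features
    y = np.ones(len(pairs))                           # target: winner beats loser
    d = X.shape[1]
    # Closed-form ridge-regression solution.
    return np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
scores = X @ true_w
pairs = [(i, j) for i in range(20) for j in range(20) if scores[i] > scores[j]]
w = fit_preferences(X, pairs, lam=0.1)
# The learned ranking should largely agree with the true one.
print(np.argsort(-(X @ w))[:5], np.argsort(-scores)[:5])
```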
Abstract:
We adapt the Shout and Act algorithm to Digital Objects Preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something, they "shout" so that their agent mates can hear it. The louder the shout, the more urgent or important the finding is; louder shouts can also indicate closeness. We perform several experiments to show that the system scales very well, and that heterogeneous teams of agents outperform homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results affect the design of Digital Objects Preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of the digital objects that need to be preserved. A cost pyramid is proposed for engineers to use when modeling the most effective agent combinations.
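A highly simplified sketch of the shout mechanism follows: an agent scans part of a file system and, on finding an at-risk document, broadcasts a shout whose loudness reflects the importance of the finding. The extension weights, the single shared "broadcast" list and the single-agent setup are illustrative assumptions and not the authors' simulation environment.

```python
# Simplified "shout" sketch for at-risk Office documents (illustrative only).

import os
from dataclasses import dataclass

# Hypothetical importance weights for at-risk Office formats.
IMPORTANCE = {".xls": 1.0, ".xlsx": 1.0, ".doc": 0.8, ".rtf": 0.6, ".ppt": 0.5}

@dataclass
class Shout:
    agent: str
    path: str
    loudness: float

def explore(agent: str, root: str, shouts: list) -> None:
    """Walk `root`; shout whenever an at-risk digital object is found."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            weight = IMPORTANCE.get(os.path.splitext(name)[1].lower())
            if weight is not None:
                shouts.append(Shout(agent, os.path.join(dirpath, name), weight))

shouts: list = []
explore("agent-1", ".", shouts)
# Louder shouts (more important findings) are attended to first.
for s in sorted(shouts, key=lambda s: -s.loudness)[:10]:
    print(f"{s.agent} shouts {s.loudness:.1f} about {s.path}")
```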
Abstract:
As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to hand over seamlessly from one network to another. At the same time, the continuous increase in terminal power consumption has resulted in an ever-decreasing battery lifetime. To that end, network selection is expected to play a key role in minimizing the energy consumption and thus extending the terminal lifetime. Hitherto, terminals have selected the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy-efficient vertical handover algorithm that selects the network minimizing the uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations and is shown to achieve high energy-efficiency gains compared to the conventional approach.
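The sketch below contrasts the conventional "strongest received signal" rule with a selection that minimizes the terminal's required uplink transmit power. The candidate networks and their figures are hypothetical, and the real algorithm in the paper accounts for more than this single criterion.

```python
# Contrasting strongest-signal network selection with an energy-aware choice
# that minimizes uplink transmit power (hypothetical values).

from dataclasses import dataclass

@dataclass
class Network:
    name: str
    downlink_rss_dbm: float     # received signal strength at the terminal
    uplink_tx_power_dbm: float  # transmit power the terminal would need

candidates = [
    Network("cellular-macro", downlink_rss_dbm=-70.0, uplink_tx_power_dbm=21.0),
    Network("wlan-hotspot",   downlink_rss_dbm=-78.0, uplink_tx_power_dbm=12.0),
    Network("femto-cell",     downlink_rss_dbm=-82.0, uplink_tx_power_dbm=8.0),
]

conventional = max(candidates, key=lambda n: n.downlink_rss_dbm)
energy_aware = min(candidates, key=lambda n: n.uplink_tx_power_dbm)
print("strongest-signal choice :", conventional.name)
print("energy-efficient choice :", energy_aware.name)
```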
Abstract:
Fast atom bombardment mass spectrometry (FABMS) has been used to study a large number of cationic phosphine-containing transition-metal-gold clusters, ranging in mass from 1000 to 4000. Many of these clusters had been previously characterized and were examined in order to test the usefulness of the FABMS technique. The results showed that FABMS is excellent at giving the correct molecular formula and, when combined with NMR, IR and microanalysis, provides a reliable characterization of cationic clusters¹. Recently, FABMS has become one of the techniques routinely employed in cluster characterization²,³ and is also an effective tool for the structural analysis of large biomolecules⁴. Some results in the present work reinforce the importance of these data in the characterization of clusters in the absence of crystals of sufficient quality for X-ray analysis.
Abstract:
Chromium(III) at the ng L-1 level was extracted using partially silylated MCM-41 modified with a tetraazamacrocyclic compound (TAMC) and determined by inductively coupled plasma optical emission spectrometry (ICP OES). The extraction time and efficiency, pH and flow rate, type and minimum amount of stripping acid, and breakthrough volume were investigated. The method's enrichment factor and detection limit are 300 and 45.5 pg mL-1, respectively. The maximum capacity of 10 mg of the modified silylated MCM-41 was found to be 400.5 ± 4.7 µg of Cr(III). The method was applied to the determination of Cr(III) and Cr(VI) in wastewater from the chromium electroplating industry and in environmental and biological samples (black tea, hot pepper and black pepper).