Abstract:
Recognizing, reporting and analyzing incidents in internal medicine units is a daily challenge taught to all hospital staff. It allows useful improvements to be suggested for patients, as well as for the medical department and the institution. Presented here is the assessment made in the CHUV internal medicine department one year after the start of the institutional procedure, which promotes an open process of communication and risk management. The department of internal medicine underlines the importance of feedback to reporters, assures staff of regular follow-up on the measures being taken, and offers external reporters, such as general practitioners, the possibility of using this reporting system too.
Abstract:
Connectivity analysis on whole-brain diffusion MRI data suffers from distortions caused by standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal dropout that constitute an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a "theoretically correct" and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (namely fieldmap-based, reversed encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
Abstract:
Antibodies play an important role in therapy and in investigative biomedical research. The TNF-family member Receptor Activator of NF-κB (RANK) is known for its role in bone homeostasis and is increasingly recognized as a central player in immune regulation and epithelial cell activation. However, the study of RANK biology has been hampered by missing or insufficient characterization of high-affinity tools that recognize RANK. Here, we present a careful description and comparison of two antibodies: RANK-02, obtained by phage display (Newa, 2014 [1]), and R12-31, generated by immunization (Kamijo, 2006 [2]). We found that both antibodies recognized mouse RANK with high affinity, while RANK-02 and R12-31 recognized human RANK with high and lower affinities, respectively. Using a cell apoptosis assay based on stimulation of a RANK:Fas fusion protein, and a cellular NF-κB signaling assay, we showed that R12-31 was an agonist for both species. R12-31 interfered little or not at all with the binding of RANKL to RANK, in contrast to RANK-02, which efficiently prevented this interaction. Depending on the assay and species, RANK-02 was either a weak agonist or a partial antagonist of RANK. Both antibodies recognized human Langerhans cells, previously shown to express RANK, while dermal dendritic cells were poorly labeled. In vivo, R12-31 agonist activity was demonstrated by its ability to induce the formation of intestinal villous microfold cells in mice. This characterization of two monoclonal antibodies should now allow better evaluation of their application as therapeutic reagents and investigative tools.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, learning-to-rank algorithms are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
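The thesis's own algorithms are not reproduced here, but the underlying pairwise regularized least-squares idea can be sketched in a few lines: learn a weight vector so that each preferred item outscores the item it is preferred over by a margin, with an L2 penalty. Everything below (the toy data, the margin target of 1, the learning rate and regularization constant) is illustrative, not taken from the thesis.

```python
# Sketch of pairwise regularized least-squares preference learning.
# Minimizes sum over preferred pairs (i, j) of (1 - (w.x_i - w.x_j))^2
# plus lam * ||w||^2, by plain gradient descent (illustrative values).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_pairwise_rls(X, prefs, lam=0.1, lr=0.05, epochs=500):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [2.0 * lam * wk for wk in w]        # gradient of the L2 penalty
        for i, j in prefs:                          # item i is preferred over item j
            diff = [a - b for a, b in zip(X[i], X[j])]
            residual = dot(w, diff) - 1.0           # target score margin of 1
            for k, d in enumerate(diff):
                grad[k] += 2.0 * residual * d
        w = [wk - lr * gk for wk, gk in zip(w, grad)]
    return w

# Toy data: three items with two features each, plus pairwise preferences.
X = [[3.0, 1.0], [2.0, 1.5], [1.0, 2.0]]
prefs = [(0, 1), (1, 2), (0, 2)]  # 0 preferred over 1, 1 over 2, 0 over 2
w = train_pairwise_rls(X, prefs)
scores = [dot(w, x) for x in X]
print(scores[0] > scores[1] > scores[2])  # learned scores respect the preferences
```

A kernelized variant would replace the linear score with a kernel expansion over training examples; the pairwise squared-loss structure stays the same.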
Abstract:
Modeling the ecological niches of species is a promising approach for predicting the geographic potential of invasive species in new environments. Argentine ants (Linepithema humile) rank among the most successful invasive species: native to South America, they have invaded broad areas worldwide. Despite their widespread success, little is known about what makes an area susceptible - or not - to invasion. Here, we use a genetic algorithm approach to ecological niche modeling, based on high-resolution remote-sensing data, to examine the roles of niche similarity and difference in predicting invasions by this species. Our comparisons support a picture of general conservatism of the species' ecological characteristics, in spite of distinct geographic and community contexts.
Abstract:
A rapid and sensitive method is described for the determination of clofentezine residues in apple, papaya, mango and orange. The procedure is based on extraction of the sample with a hexane:ethyl acetate mixture (1:1, v/v) and liquid chromatographic analysis with UV detection. Mean recoveries from four replicates of fortified fruit samples ranged from 81% to 96%, with coefficients of variation from 8.9% to 12.5%. The detection and quantification limits of the method were 0.05 and 0.1 mg kg-1, respectively.
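The validation statistics quoted above (mean recovery and coefficient of variation over replicates) can be reproduced from raw replicate measurements. The sketch below uses hypothetical replicate values and a hypothetical fortification level, not the paper's data:

```python
# Mean percent recovery and coefficient of variation (relative standard
# deviation, %) from replicate measurements of a fortified sample.
from statistics import mean, stdev

def recovery_stats(measured, spiked):
    recoveries = [100.0 * m / spiked for m in measured]  # % recovery per replicate
    cv = 100.0 * stdev(recoveries) / mean(recoveries)    # CV as a percentage
    return mean(recoveries), cv

# Four hypothetical replicates of a sample fortified at 0.50 mg kg-1:
measured = [0.43, 0.46, 0.41, 0.48]
mean_rec, cv = recovery_stats(measured, 0.50)
print(round(mean_rec, 1), round(cv, 1))  # mean recovery 89.0%, CV 7.0%
```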
Abstract:
This thesis concentrates on developing a practical local-approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion.
Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically, and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal-necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
Abstract:
The object of this work is a comparison of the domain structure and the off-diagonal magnetoimpedance effect in amorphous ribbons with different magnetostriction coefficients. The Co66Fe4Ni1Si15B14 and Fe80B20 samples were obtained by melt-spinning. During the quenching procedure, a 0.07 T transverse magnetic field was applied to some of the samples. Domain patterns obtained by the Bitter technique confirm that the differences between the samples are related to their different anisotropy and magnetostriction coefficients and to the quenching procedure. Small changes in the anisotropy distribution and the magnetostriction coefficient can be detected in the off-diagonal impedance spectra as a consequence of the different permeability values of the samples.
Abstract:
A method based on matrix solid-phase dispersion and gas chromatography-mass spectrometry for determining procymidone, malathion, bifenthrin and pirimicarb in honey is described. The best results were obtained using 1.0 g of honey, 1.0 g of silica gel as the dispersant sorbent and acetonitrile as the eluting solvent. The method was validated with fortified honey samples at three concentration levels (0.2, 0.5 and 1.0 mg kg-1). Average recoveries (n = 7) ranged from 54 to 84%, with relative standard deviations between 3.7 and 8.5%. Detection and quantification limits attained by the method ranged from 0.02 to 0.08 mg kg-1 and from 0.07 to 0.25 mg kg-1, respectively.
Abstract:
Twelve single-pustule isolates of Uromyces appendiculatus, the etiological agent of common bean rust, were collected in the state of Minas Gerais, Brazil, and classified according to the new international differential series and the binary nomenclature system proposed during the 3rd Bean Rust Workshop. These isolates have been used to select rust-resistant genotypes in a bean breeding program conducted by our group. The twelve isolates were classified into seven different physiological races: 21-3, 29-3, 53-3, 53-19, 61-3, 63-3 and 63-19. Races 61-3 and 63-3 were the most frequent in the area, represented by five and two isolates, respectively. The other races were each represented by a single isolate. This is the first time the new international classification procedure has been used for U. appendiculatus physiological races in Brazil. General adoption of this system will facilitate information exchange, allowing cooperative use of the results obtained by different research groups throughout the world. The differential cultivars Mexico 309, Mexico 235 and PI 181996 showed resistance to all of the isolates characterized. It is suggested that these cultivars be preferentially used as sources of resistance to rust in breeding programs targeting the development of lines adapted to the state of Minas Gerais.
Abstract:
Two sensitive spectrophotometric methods are described for the determination of lansoprazole (LPZ) in bulk drug and in capsule formulation. The methods are based on the oxidation of lansoprazole by in situ generated bromine, followed by determination of the unreacted bromine by two different reaction schemes. In one procedure (method A), the residual bromine is treated with an excess of iron(II), and the resulting iron(III) is complexed with thiocyanate and measured at 470 nm. The second approach (method B) involves treating the unreacted bromine with a measured excess of iron(II); the remaining iron(II) is complexed with orthophenanthroline at a raised pH and measured at 510 nm. In both methods, the amount of bromine consumed corresponds to the amount of LPZ. The experimental conditions were optimized. In method A, the absorbance is found to decrease linearly with the concentration of LPZ (r = -0.9986), whereas in method B a linear increase in absorbance occurs (r = 0.9986). The systems obey Beer's law over 0.5-4.0 and 0.5-6.0 µg mL-1 for method A and method B, respectively. The calculated molar absorptivity values are 3.97 × 10^4 and 3.07 × 10^4 L mol-1 cm-1 for method A and method B, respectively, and the corresponding Sandell sensitivity values are 0.0039 and 0.0013 µg cm-2. The limit of detection (LOD) and limit of quantification (LOQ) are also reported for both methods. Intra-day and inter-day precision, and the accuracy of the methods, were established as per the current ICH guidelines. The methods were successfully applied to the determination of LPZ in capsules; the results tallied well with the label claim and were statistically compared with those of a reference method by applying Student's t-test and the F-test. No interference was observed from the concomitant substances normally added to capsules. The accuracy and validity of the methods were further ascertained by recovery experiments via the standard-addition method.
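As a rough illustration of how a molar absorptivity like the one quoted above translates into a measurable concentration, the sketch below applies Beer's law directly (A = εbc). This simplifies away the indirect, residual-oxidant nature of the assay; the absorbance value and the 1 cm path length are assumptions, while the molar mass of lansoprazole (369.36 g/mol) is the standard reference value:

```python
# Beer's law back-calculation: c = A / (epsilon * b), converted to ug/mL.
# Simplified, direct-reading illustration only (the actual method is indirect).

MOLAR_MASS_LPZ = 369.36  # g/mol, standard value for lansoprazole
EPSILON = 3.97e4         # L mol-1 cm-1 (method A figure from the abstract)
PATH_LENGTH = 1.0        # cm, assumed standard cuvette

def concentration_ug_per_ml(absorbance):
    c_mol_per_l = absorbance / (EPSILON * PATH_LENGTH)  # mol L-1
    # mol/L * g/mol = g/L; 1 g/L = 1000 ug/mL
    return c_mol_per_l * MOLAR_MASS_LPZ * 1000.0

# An absorbance of ~0.215 corresponds to ~2.0 ug/mL, inside the
# 0.5-4.0 ug/mL linear range reported for method A.
print(round(concentration_ug_per_ml(0.215), 2))
```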
Abstract:
Two sensitive spectrophotometric methods are described for the determination of simvastatin (SMT) in bulk drug and in tablets. The methods are based on the oxidation of SMT by a measured excess of cerium(IV) in acid medium, followed by determination of the unreacted oxidant by two different reaction schemes. In one procedure (method A), the residual cerium(IV) is reacted with a fixed concentration of ferroin and the increase in absorbance is measured at 510 nm. The second approach (method B) involves the reduction of the unreacted cerium(IV) with a fixed quantity of iron(II); the resulting iron(III) is complexed with thiocyanate and the absorbance measured at 470 nm. In both methods, the amount of cerium(IV) consumed corresponds to the SMT concentration. The experimental conditions for both methods were optimized. In method A, the absorbance is found to increase linearly with SMT concentration (r = 0.9995), whereas in method B it decreases (r = -0.9943). The systems obey Beer's law over 0.6-7.5 and 0.5-5.0 µg mL-1 for method A and method B, respectively. The calculated molar absorptivity values are 2.7 × 10^4 and 1.06 × 10^5 L mol-1 cm-1, and the corresponding Sandell sensitivity values are 0.0153 and 0.0039 µg cm-2, respectively. The limit of detection (LOD) and limit of quantification (LOQ) are reported for both methods. Intra-day and inter-day precision, and the accuracy of the methods, were established as per the current ICH guidelines. The methods were successfully applied to the determination of SMT in tablets, and the results were statistically compared with those of the reference method by applying Student's t-test and the F-test. No interference was observed from the common excipients added to tablets. The accuracy and validity of the methods were further ascertained by recovery experiments via the standard-addition procedure.
Abstract:
A spectrophotometric flow injection method for the determination of paracetamol in pharmaceutical formulations is proposed. The procedure is based on the oxidation of paracetamol by sodium hypochlorite and the determination of the excess of this oxidant using o-tolidine dichloride as the chromogenic reagent at 430 nm. The analytical curve was linear over the paracetamol concentration range 8.50 × 10^-6 to 2.51 × 10^-4 mol L-1, with a detection limit of 5.0 × 10^-6 mol L-1. The relative standard deviation was smaller than 1.2% for a 1.20 × 10^-4 mol L-1 paracetamol solution (n = 10). The results obtained for paracetamol in pharmaceutical formulations using the proposed flow injection method and those obtained using the United States Pharmacopeia (USP) method agree at the 95% confidence level.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for the total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that are not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
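Socos itself works with invariant diagrams, PVS and Yices, but the flavor of the invariant-based workflow described above can be conveyed with plain runtime assertions: state the invariant first, then check that each added piece of code re-establishes it. The example below is a sketch in that spirit, not Socos notation, and uses ordinary Python assertions in place of discharged proof obligations:

```python
# Invariant-based development of a simple summation loop, sketched with
# runtime assertions standing in for proved verification conditions.

def sum_of(xs):
    total, i = 0, 0
    # Invariant: total == sum(xs[:i]) and 0 <= i <= len(xs)
    assert total == sum(xs[:i]) and 0 <= i <= len(xs)  # holds initially
    while i < len(xs):
        total += xs[i]
        i += 1
        # Each code addition must re-establish the invariant.
        assert total == sum(xs[:i]) and 0 <= i <= len(xs)
    # Postcondition follows from the invariant plus the negated loop
    # guard (i == len(xs)).
    assert total == sum(xs)
    return total

print(sum_of([3, 1, 4, 1, 5]))  # 14
```

In the tool-supported setting, these obligations would be generated from the diagram and discharged by the theorem prover rather than checked at runtime.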
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, in which ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternatives. Finally, we present a case study on applying machine learning to information extraction from the biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
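The pairwise view of AUC referred to above has a direct computational reading: the AUC of a scoring function equals the fraction of positive-negative pairs it orders correctly (ties counted as half). The sketch below illustrates that identity on toy data; the scores and labels are made up for the example:

```python
# AUC as the pairwise misranking complement: the probability that a randomly
# drawn positive example outscores a randomly drawn negative one.

def pairwise_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Each correctly ordered pair counts 1, each tie counts 0.5.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   0,   1,   0,   0]
print(pairwise_auc(scores, labels))  # 5 of 6 pos-neg pairs ordered correctly
```

Leave-pair-out cross-validation estimates this same quantity by holding out one positive-negative pair at a time, retraining, and averaging the pairwise comparison over all held-out pairs.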