871 results for Anisotropic Analytical Algorithm
Abstract:
Microparticles are phospholipid vesicles shed mostly into biological fluids, such as blood or urine, by various types of cells, such as red blood cells (RBCs), platelets, lymphocytes, and endothelial cells. These microparticles contain a subset of the proteome of their parent cell, and their ready availability in biological fluids has raised strong interest in their study, as they might be markers of cell damage. However, their small size as well as their particular physico-chemical properties make them hard to detect, size, count, and study by proteome analysis. In this review, we report the pre-analytical and methodological caveats that we have faced in our own research on red blood cell microparticles in the context of transfusion science, as well as examples from the literature on the proteomics of various kinds of microparticles.
Abstract:
This paper compares two well-known scan matching algorithms, MbICP and pIC, and, as a result of the study, proposes MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions are: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) a method to group all the data grabbed along the path described by the robot into a single scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
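As a rough illustration of the dead-reckoning component described above (not the authors' implementation), the sketch below shows an EKF prediction step that propagates an AUV pose and its covariance from DVL body-frame velocities and an MRU yaw rate; all names and noise parameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code): EKF prediction step for
# dead-reckoning an AUV pose [x, y, yaw] from DVL body-frame velocity
# and an MRU yaw rate. Q is an assumed process-noise covariance.

def ekf_predict(x, P, v_body, yaw_rate, dt, Q):
    """Propagate pose mean x = [x, y, yaw] and covariance P over dt."""
    px, py, yaw = x
    vx, vy = v_body
    # Constant-velocity motion model, velocities given in the vehicle frame
    x_new = np.array([
        px + (vx * np.cos(yaw) - vy * np.sin(yaw)) * dt,
        py + (vx * np.sin(yaw) + vy * np.cos(yaw)) * dt,
        yaw + yaw_rate * dt,
    ])
    # Jacobian of the motion model with respect to the state
    F = np.array([
        [1.0, 0.0, (-vx * np.sin(yaw) - vy * np.cos(yaw)) * dt],
        [0.0, 1.0, ( vx * np.cos(yaw) - vy * np.sin(yaw)) * dt],
        [0.0, 0.0, 1.0],
    ])
    # Uncertainty grows at every prediction step
    P_new = F @ P @ F.T + Q
    return x_new, P_new
```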
Abstract:
Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo α-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we reduce, in linear time, nominal unification problems to a sequence of freshness constraints and equalities between atoms modulo a permutation, using ideas from Paterson and Wegman's first-order unification algorithm. Second, we prove that solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas of Brown and Tarjan for unbalanced merging, these reduced problems could be solved more efficiently.
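For readers unfamiliar with the setting, the following standard rule from the nominal unification literature (not quoted from this paper) shows how an equation between abstractions reduces to an equation modulo an atom swapping plus a freshness constraint, which is the kind of reduced problem analysed here:

```latex
% Standard abstraction rule: unifying a.t with b.s reduces to
% unifying t with the swapped term, plus a freshness constraint.
\[
a.t \approx^{?} b.s
\;\Longrightarrow\;
t \approx^{?} (a\,b)\cdot s \;\wedge\; a \,\#\, s
\]
```

For instance, $a.X \approx^{?} b.b$ reduces to $X \approx^{?} (a\,b)\cdot b = a$ together with $a \,\#\, b$ (which holds since $a \neq b$), giving the unifier $[X \mapsto a]$.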
Abstract:
The educational sphere has an internal function on which social scientists largely agree. Nonetheless, the contribution that educational systems make to society (i.e., their social function) does not enjoy the same degree of consensus. Given this theoretical precedent, the current article proposes an analytical schema for grasping the social function of education from a sociological perspective. Starting from the assumption that there is an intrinsic relationship between the internal and social functions of social systems, we suggest that particular stratification determinants modify the internal pedagogical function of education and thereby affect its social function, creating simultaneous conditions of equity and differentiation. Throughout the paper this social function is treated as a paradoxical mechanism. We highlight how this paradoxical dynamic unfolds at different structural levels of the educational sphere. Additionally, we discuss possible consequences of this paradoxical social function for the inclusion possibilities that educational systems offer to individuals.
Abstract:
Background: We previously derived a clinical prognostic algorithm to identify patients with pulmonary embolism (PE) who are at low risk of short-term mortality and who could be safely discharged early or treated entirely in an outpatient setting. Objectives: To externally validate this clinical prognostic algorithm in an independent patient sample. Methods: We validated the algorithm in 983 consecutive patients prospectively diagnosed with PE at the emergency department of a university hospital. Patients with none of the algorithm's 10 prognostic variables (age ≥70 years, cancer, heart failure, chronic lung disease, chronic renal disease, cerebrovascular disease, pulse ≥110/min, systolic blood pressure <100 mm Hg, oxygen saturation <90%, and altered mental status) at baseline were defined as low-risk. We compared 30-day overall mortality among low-risk patients between the validation sample and the original derivation sample. We also assessed the rate of PE-related and bleeding-related mortality among low-risk patients. Results: Overall, the algorithm classified 16.3% of patients with PE as low-risk. Mortality at 30 days was 1.9% among low-risk patients and did not differ between the validation and the original derivation sample. Among low-risk patients, only 0.6% died from definite or possible PE, and none died from bleeding. Conclusions: This study validates an easy-to-use clinical prognostic algorithm for PE that accurately identifies patients at low risk of short-term mortality. Low-risk patients based on our algorithm are potential candidates for less costly outpatient treatment.
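Since the abstract lists the ten criteria explicitly, the rule can be sketched directly; the field names below are illustrative, while the thresholds are those quoted above:

```python
# Sketch of the low-risk rule from the abstract: a patient is low-risk
# when none of the 10 prognostic variables is present. Dictionary keys
# are illustrative, not from the paper.

def is_low_risk(p):
    criteria = [
        p["age"] >= 70,
        p["cancer"],
        p["heart_failure"],
        p["chronic_lung_disease"],
        p["chronic_renal_disease"],
        p["cerebrovascular_disease"],
        p["pulse"] >= 110,            # beats per minute
        p["systolic_bp"] < 100,       # mm Hg
        p["oxygen_saturation"] < 90,  # percent
        p["altered_mental_status"],
    ]
    return not any(criteria)
```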
Abstract:
Counterfeit pharmaceutical products have become a widespread problem in the last decade. Various analytical techniques have been applied to discriminate between genuine and counterfeit products; among these, near-infrared (NIR) and Raman spectroscopy have provided promising results. The present study offers a methodology that provides more valuable information for organisations engaged in the fight against counterfeiting of medicines. A database was established by analyzing counterfeits of a particular pharmaceutical product using NIR and Raman spectroscopy. Unsupervised chemometric techniques (i.e., principal component analysis (PCA) and hierarchical cluster analysis (HCA)) were implemented to identify the classes within the datasets. Gas chromatography coupled to mass spectrometry (GC-MS) and Fourier transform infrared spectroscopy (FT-IR) were used to determine the number of different chemical profiles within the counterfeits, and a comparison with the classes established by NIR and Raman spectroscopy made it possible to evaluate the discriminating power of these techniques. Supervised classifiers (i.e., k-nearest neighbors, partial least squares discriminant analysis, probabilistic neural networks, and counterpropagation artificial neural networks) were applied to the acquired NIR and Raman spectra, and the results were compared with those of the unsupervised classifiers. The retained strategy for routine applications, founded on the classes identified by NIR and Raman spectroscopy, uses a classification algorithm based on distance measures and receiver operating characteristic (ROC) curves. The model compares the spectrum of a new counterfeit with those of previously analyzed products and determines whether the new specimen belongs to one of the existing classes, consequently making it possible to establish a link with other counterfeits in the database.
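A minimal sketch of two ingredients of this workflow, assuming scikit-learn and placeholder data (this is not the authors' retained ROC-based model): PCA for the unsupervised view of class structure, and a distance-based supervised classifier for linking a new specimen to an existing class.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: rows are NIR/Raman spectra, columns are wavelengths.
# In the study, `labels` would be the chemical-profile classes
# established by GC-MS / FT-IR.
spectra = np.random.rand(60, 500)
labels = np.random.randint(0, 4, size=60)

# Unsupervised exploration: project spectra onto principal components
scores = PCA(n_components=5).fit_transform(spectra)

# Distance-based supervised classification (k-NN as a simple stand-in)
knn = KNeighborsClassifier(n_neighbors=3).fit(spectra, labels)
new_spectrum = np.random.rand(1, 500)
predicted_class = knn.predict(new_spectrum)  # link a new seizure to a class
```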
Abstract:
Since the first anti-doping tests in the 1960s, the analytical aspects of testing have remained challenging. The evolution of the analytical process in doping control is discussed in this paper, with particular emphasis on separation techniques such as gas chromatography and liquid chromatography. These approaches are improving in parallel with the requirements of increasing sensitivity and selectivity for detecting prohibited substances in biological samples from athletes. Moreover, fast analyses are mandatory to deal with the growing number of doping control samples and the short response times required during particular sport events. Recent developments in mass spectrometry and the expansion of accurate mass determination have improved anti-doping strategies, making it possible to use elemental composition and isotope patterns for structural identification. These techniques must be able to distinguish unequivocally between negative and suspicious samples, with no false-negative or false-positive results. Therefore, a high degree of reliability must be reached for the identification of major metabolites corresponding to suspected analytes. In line with current trends in the pharmaceutical industry, the analysis of proteins and peptides remains an important issue in doping control, and sophisticated analytical tools are still needed to distinguish them from endogenous analogs. Finally, indirect approaches are discussed in the context of anti-doping, where recent advances aim to examine the biological response to a doping agent in a holistic way.
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e., an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure, in the absence of a priori knowledge about the image configuration, is a uniform field.
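For context, here is a minimal sketch of the standard MLE (EM) update that this family of algorithms builds on, starting from the uniform initial field the abstract prescribes; the FMAPE refinements (entropy prior, acceleration exponent) are omitted, and the system matrix A and count vector y are assumed inputs.

```python
import numpy as np

# Sketch of the classic MLE-EM update for emission tomography.
# A: system matrix (n_bins x n_pixels); y: measured counts (n_bins,).
# The FMAPE entropy-prior terms are NOT included in this sketch.

def mlem(A, y, n_iter=50):
    y = np.asarray(y, dtype=float)
    # Per-pixel sensitivity; floored to avoid division by zero
    sens = np.maximum(A.T @ np.ones_like(y), 1e-12)
    # Uniform initial image, as the abstract prescribes
    lam = np.full(A.shape[1], y.sum() / sens.sum())
    for _ in range(n_iter):
        proj = A @ lam                       # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        lam = lam * (A.T @ ratio) / sens     # multiplicative EM update
    return lam
```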
Abstract:
In applied regional analysis, statistical information is usually published at different territorial levels with the aim of providing information of interest to different potential users. When using this information, there are two choices: first, to use normative regions (towns, provinces, etc.) or, second, to design analytical regions directly related to the analysed phenomena. In this paper, provincial time series of unemployment rates in Spain are used to compare the results obtained by applying two analytical regionalisation models (a two-stage procedure based on cluster analysis and a procedure based on mathematical programming) with the normative regions available at two different scales, NUTS II and NUTS I. The results show that more homogeneous regions were designed when applying both analytical regionalisation tools. Two other interesting results are that the analytical regions were also more stable over time, and that scale has notable effects on the regionalisation process.
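A minimal sketch of the first stage of such a two-stage procedure, assuming SciPy and placeholder data; the actual models also enforce spatial contiguity and include a mathematical-programming stage, both omitted here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: 50 Spanish provinces x 80 quarterly unemployment rates
rates = np.random.rand(50, 80)

# Hierarchical clustering of the provincial series (Ward criterion)
Z = linkage(rates, method="ward")

# Cut the tree into 17 groups, matching the NUTS II count for Spain
regions = fcluster(Z, t=17, criterion="maxclust")
```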
Abstract:
The real part of the optical potential for heavy ion elastic scattering is obtained by double folding of the nuclear densities with a density-dependent nucleon-nucleon effective interaction which was successful in describing the binding, size, and nucleon separation energies in spherical nuclei. A simple analytical form is found that differs from the resulting potential by considerably less than 1% throughout the important region. This analytical potential is used so that only a few points of the folding need to be computed. With an imaginary part of the Woods-Saxon type, this potential predicts elastic scattering angular distributions in very good agreement with experimental data, and little renormalization (unity in most cases) is needed.
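For reference, the standard double-folding form and a Woods-Saxon imaginary part referred to above can be written as follows (notation assumed, not copied from the paper; in the density-dependent case the effective interaction $v$ also carries a density argument):

```latex
% Double-folding real part: fold the two nuclear densities with the
% effective nucleon-nucleon interaction v; Woods-Saxon imaginary part.
\[
V_F(\mathbf{R}) \;=\; \int \rho_1(\mathbf{r}_1)\,\rho_2(\mathbf{r}_2)\,
v\bigl(|\mathbf{R} + \mathbf{r}_2 - \mathbf{r}_1|\bigr)\,
d^{3}r_1\, d^{3}r_2,
\qquad
W(r) \;=\; \frac{-W_0}{1 + \exp\!\bigl[(r - R_W)/a_W\bigr]}
\]
```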