803 results for Sample algorithms


Relevance: 20.00%

Publisher:

Abstract:

Stool is chemically complex and the extraction of DNA from stool samples is extremely difficult. Haemoglobin breakdown products (such as bilirubin), bile acids and mineral ions present in stool samples can inhibit DNA amplification and cause molecular assays to produce false-negative results. Therefore, stool storage conditions are highly important for the diagnosis of intestinal parasites and other microorganisms through molecular approaches. In the current study, stool samples that were positive for Giardia intestinalis were collected from five different patients. Each sample was stored under one of six different storage conditions [room temperature (RT), +4°C, -20°C, 70% alcohol, 10% formaldehyde or 2.5% potassium dichromate] for DNA extraction procedures at one, two, three and four weeks. A modified QIAamp Stool Mini Kit procedure was used to isolate the DNA from the stored samples. After DNA isolation, polymerase chain reaction (PCR) amplification was performed using primers that target the β-giardin gene. A G. intestinalis-specific 384 bp band was obtained from all of the cyst-containing stool samples that were stored at RT, +4°C and -20°C and in 70% alcohol and 2.5% potassium dichromate; however, this band was not produced by samples that had been stored in 10% formaldehyde. Moreover, for the stool samples containing trophozoites, the same G. intestinalis-specific band was only obtained from the samples that were stored in 2.5% potassium dichromate for up to one month. As a result, it appears evident that the most suitable storage condition for stool samples to permit the isolation of G. intestinalis DNA is 2.5% potassium dichromate; under these conditions, stool samples may be stored for one month.

Relevance: 20.00%

Publisher:

Abstract:

This project undertakes research both into finding predictors via clustering techniques and into reviewing free Data Mining software. The research is based on a case study; in addition to the free KDD software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, as the data from which they must be inferred are student qualifications from different e-learning environments. Through our case study, not only are clustering algorithms tested but additional goals are also proposed.

Relevance: 20.00%

Publisher:

Abstract:

BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. In order to address the small-sample-size problem, the dimension of the feature space was further reduced by Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: (i) a linear transformation of the PLS- or PCA-reduced data, (ii) a feature reduction technique, and (iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate less variable. A further advance is the generalization ability shown by the experiments performed on two image modalities (SPECT and PET).
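As a purely illustrative sketch of the kind of pipeline described here (supervised dimensionality reduction with PLS followed by an SVM classifier under k-fold cross-validation), the code below uses synthetic data in place of the NMSE features; the component count, RBF kernel and other parameters are assumptions, and the LMNN step is not reproduced.

```python
# Hedged sketch: PLS-based feature reduction followed by an SVM classifier,
# evaluated with k-fold cross-validation. Synthetic data stands in for the
# voxel-wise NMSE features; all parameter values are assumptions.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

class PLSReducer(BaseEstimator, TransformerMixin):
    """Use PLS scores as a supervised dimensionality-reduction step."""
    def __init__(self, n_components=10):
        self.n_components = n_components
    def fit(self, X, y):
        self.pls_ = PLSRegression(n_components=self.n_components).fit(X, y)
        return self
    def transform(self, X):
        return self.pls_.transform(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))      # stand-in for NMSE feature vectors
y = rng.integers(0, 2, size=100)     # stand-in for AD / control labels

pipe = Pipeline([
    ("pls", PLSReducer(n_components=10)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])
scores = cross_validate(pipe, X, y, cv=5, scoring=["accuracy", "recall"])
print(scores["test_accuracy"].mean(), scores["test_recall"].mean())
```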

Relevance: 20.00%

Publisher:

Abstract:

HEMOLIA (a project under the European Community's 7th Framework Programme) is a new-generation Anti-Money Laundering (AML) intelligent multi-agent alert and investigation system which, in addition to traditional financial data, makes extensive use of modern society's huge telecom data source, thereby opening up a new dimension of capabilities to all money-laundering fighters (FIUs, LEAs) and financial institutes (banks, insurance companies, etc.). This Master's thesis project was carried out at AIA, one of the partners of the HEMOLIA project in Barcelona. The objective of this thesis is to find the clusters in a network drawn using the financial data. An extensive literature survey has been carried out and several standard network algorithms have been studied and implemented. The clustering problem is NP-hard, and algorithms such as K-Means and hierarchical clustering have been applied to many problems in sociology, evolution, anthropology, etc. However, these algorithms have certain drawbacks which make them very difficult to implement. The thesis suggests (a) a possible improvement to the K-Means algorithm, (b) a novel approach to the clustering problem using Genetic Algorithms and (c) a new algorithm for finding the cluster of a node using a Genetic Algorithm.
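To illustrate the general idea of genetic-algorithm-based clustering mentioned in this abstract (this is not the thesis's algorithm), the sketch below evolves cluster assignments for the nodes of a small synthetic weighted network, with fitness equal to the total edge weight kept inside clusters; the population size, mutation rate and number of clusters are assumptions.

```python
# Hedged sketch of genetic-algorithm clustering on a weighted network:
# an individual assigns each node to one of k clusters; fitness is the
# total intra-cluster edge weight. Illustrative only, synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, k = 20, 3
W = rng.random((n_nodes, n_nodes))
W = np.triu(W, 1) + np.triu(W, 1).T          # symmetric weighted adjacency matrix

def fitness(assign):
    same = assign[:, None] == assign[None, :]
    return W[same].sum() / 2.0               # intra-cluster weight

def evolve(pop_size=50, generations=200, mutation_rate=0.05):
    pop = rng.integers(0, k, size=(pop_size, n_nodes))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_nodes)
            child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
            mask = rng.random(n_nodes) < mutation_rate
            child[mask] = rng.integers(0, k, size=mask.sum())      # random mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

print(evolve())
```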

Relevance: 20.00%

Publisher:

Abstract:

In the last few years, many researchers have studied the presence of common dimensions of temperament in subjects with symptoms of anxiety. The aim of this study is to examine the association between temperamental dimensions (high negative affect and activity level) and anxiety problems in clinical preschool children. A total of 38 children, aged 3 to 6 years, from the Infant and Adolescent Mental Health Center of Girona and the Center of Diagnosis and Early Attention of Sabadell and Olot were evaluated by parents and psychologists. Their parents completed several screening scales and, subsequently, clinical child psychopathology professionals carried out diagnostic interviews with children from the sample who presented signs of anxiety. Findings showed that children with high levels of negative affect and a low activity level have pronounced symptoms of anxiety. However, children with anxiety disorders do not present temperament styles different from those of their peers without these pathologies.

Relevance: 20.00%

Publisher:

Abstract:

This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures can be considered for a point-to-multipoint connection: bandwidth cost, connection establishment cost and switching cost. The application of the method, based on pre-evaluated routing tables, makes it possible to reduce the bandwidth cost and the connection establishment cost individually.
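As a minimal illustration of the pre-evaluated routing-table idea (not the paper's actual method), the toy sketch below looks up precomputed unicast paths for each destination, merges them into a point-to-multipoint tree, and uses the number of distinct links as a proxy for bandwidth cost and the number of table look-ups as a proxy for connection establishment cost; the topology and paths are hypothetical.

```python
# Hedged sketch: merge precomputed unicast paths into a point-to-multipoint
# tree; distinct links approximate bandwidth cost, table look-ups approximate
# connection establishment cost. Topology and paths are hypothetical.
precomputed_paths = {            # routing table: destination -> path from source "A"
    "D": ["A", "B", "D"],
    "E": ["A", "B", "E"],
    "F": ["A", "C", "F"],
}

def build_multicast_tree(destinations):
    links, lookups = set(), 0
    for dest in destinations:
        lookups += 1                          # one pre-evaluated table look-up
        path = precomputed_paths[dest]
        links.update(zip(path, path[1:]))     # shared links are counted only once
    return links, lookups

links, lookups = build_multicast_tree(["D", "E", "F"])
print("bandwidth cost (links):", len(links))      # 5; the A-B link is shared by D and E
print("establishment cost (look-ups):", lookups)  # 3
```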

Relevance: 20.00%

Publisher:

Abstract:

Natural resistance-associated macrophage protein 1/solute carrier family 11 member 1 (Nramp1/Slc11a1) is a gene that controls the susceptibility of inbred mice to intracellular pathogens. Polymorphisms in the human Slc11a1/Nramp1 gene have been associated with host susceptibility to leprosy. This study evaluated nine polymorphisms of the Slc11a1/Nramp1 gene [(GT)n, 274C/T, 469+14G/C, 577-18G/A, 823C/T, 1029C/T, 1465-85G/A, 1703G/A, and 1729+55del4] in 86 leprosy patients (67 and 19 patients had the multibacillary and the paucibacillary clinical forms of the disease, respectively) and 239 healthy controls matched by age, gender, and ethnicity. The frequency of allele 2 of the (GT)n polymorphism was higher in leprosy patients [p = 0.04, odds ratio (OR) = 1.49], whereas the frequency of allele 3 was higher in the control group (p = 0.03; OR = 0.66). Patients carrying the 274T allele (p = 0.04; OR = 1.49) and TT homozygosis (p = 0.02; OR = 2.46), as well as the 469+14C allele (p = 0.03; OR = 1.53), of the 274C/T and 469+14G/C polymorphisms, respectively, were more frequent in the leprosy group. The leprosy and control groups had similar frequencies of the 577-18G/A, 823C/T, 1029C/T, 1465-85G/A, 1703G/A, and 1729+55del4 polymorphisms. The 274C/T polymorphism in exon 3 and the 469+14G/C polymorphism in intron 4 were associated with susceptibility to leprosy, while alleles 2 and 3 of the (GT)n polymorphism in the promoter region were associated with susceptibility to and protection from leprosy, respectively.
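The associations reported here are summarized by odds ratios and p-values derived from 2x2 allele-by-group contingency tables. As a generic illustration with hypothetical counts (not this study's data), the sketch below computes the odds ratio, a Wald 95% confidence interval and Fisher's exact p-value for such a table.

```python
# Hedged sketch: odds ratio, Wald 95% CI and Fisher exact p-value from a
# 2x2 table of hypothetical counts (carriers / non-carriers in cases / controls).
import numpy as np
from scipy.stats import fisher_exact, norm

table = np.array([[40, 46],     # cases:    carriers, non-carriers (hypothetical)
                  [80, 159]])   # controls: carriers, non-carriers (hypothetical)
a, b = table[0]
c, d = table[1]

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
z = norm.ppf(0.975)
ci = (np.exp(np.log(or_hat) - z * se_log_or),
      np.exp(np.log(or_hat) + z * se_log_or))
_, p_value = fisher_exact(table)

print(f"OR = {or_hat:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p_value:.3f}")
```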

Relevance: 20.00%

Publisher:

Abstract:

Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory: clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
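To make the workflow concrete in a purely illustrative way, the sketch below log-ratio transforms synthetic three-part compositional trajectories, smooths each transformed component with a smoothing spline, and clusters the smoothed curves hierarchically with a plain L2 distance; the clr transform, smoothing factor and Ward linkage are assumptions, not the paper's exact construction.

```python
# Hedged sketch: clr-transform compositional trajectories, smooth each
# component with a spline, then cluster the smoothed curves hierarchically.
# Synthetic data; all tuning parameters are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.interpolate import UnivariateSpline
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)

def clr(x):
    """Centred log-ratio transform of compositions (rows sum to 1)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def smooth(traj, s=0.5):
    """Smooth every clr component of one trajectory with a spline."""
    return np.column_stack([UnivariateSpline(t, traj[:, j], s=s)(t)
                            for j in range(traj.shape[1])])

def make_traj(shift):
    """One noisy 3-part compositional trajectory (synthetic)."""
    raw = np.column_stack([1 + shift + 0.5 * np.sin(2 * np.pi * t),
                           1 + 0.5 * np.cos(2 * np.pi * t),
                           np.ones_like(t)]) + rng.normal(0, 0.05, (50, 3))
    raw = np.abs(raw)
    return raw / raw.sum(axis=1, keepdims=True)

trajectories = [make_traj(0.0) for _ in range(5)] + [make_traj(1.0) for _ in range(5)]
curves = np.array([smooth(clr(traj)).ravel() for traj in trajectories])

Z = linkage(pdist(curves), method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))   # recovers the two groups
```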

Relevance: 20.00%

Publisher:

Abstract:

In the first part of this research, three stages were defined for a programme to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) to develop a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, when ink samples are analysed at different times, in different locations and by different examiners; (b) to compare ink samples automatically and objectively; and (c) to define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied for various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
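As an illustration of one simple family of comparison measures (not necessarily those proposed in the report), the sketch below scores the similarity of two normalized HPTLC intensity profiles with a Pearson correlation; the profiles are synthetic stand-ins.

```python
# Hedged sketch: compare two normalized HPTLC intensity profiles using a
# Pearson correlation score. The profiles here are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr

rf = np.linspace(0, 1, 200)                   # retention-factor axis
profile_a = (np.exp(-((rf - 0.3) ** 2) / 0.002)
             + 0.5 * np.exp(-((rf - 0.7) ** 2) / 0.002))
profile_b = profile_a + np.random.default_rng(3).normal(0, 0.02, rf.size)

def normalize(p):
    return (p - p.min()) / (p.max() - p.min())

score, _ = pearsonr(normalize(profile_a), normalize(profile_b))
print(f"similarity score: {score:.3f}")       # close to 1 for same-source inks
```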

Relevance: 20.00%

Publisher:

Abstract:

Larger and larger deformable mirrors, with ever more actuators, are currently being used in adaptive optics applications. The control of mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudoinverse of the system's control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelising the control algorithms of these mirrors, through the application of a control technique based on setting the smallest components of the control matrix to zero (sparsification), followed by optimisation of the ordering of the actuator commands according to the shape of the matrix, and finally its subsequent division into small tridiagonal blocks. These blocks are much smaller and easier to use in the computations, which allows much higher computation speeds because the null components of the control matrix are eliminated. Moreover, this approach allows the computation to be parallelised, giving the system an additional speed component. Even without parallelisation, an increase of nearly 40% in the convergence speed of mirrors with only 37 actuators has been obtained using the proposed technique. To validate this, a complete new experimental setup has been implemented, including a programmable phase modulator for generating turbulence by means of phase screens, and a complete model of the control loop has been developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show total equivalence in the deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here entails a much lower computational load. The procedure is expected to be very successful when applied to very large mirrors.
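A minimal, illustrative sketch of the sparsification step described above (the actuator reordering and tridiagonal block splitting are not reproduced) is shown below: the smallest entries of a synthetic control matrix are zeroed and the result is stored in a sparse format, so that each control step multiplies far fewer coefficients; the matrix and threshold are assumptions.

```python
# Hedged sketch: zero out the smallest entries of a control matrix
# (sparsification) and store the result in sparse form, so the
# matrix-vector product per control step touches fewer coefficients.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(4)
n_actuators, n_sensors = 37, 80
C = rng.normal(size=(n_actuators, n_sensors))
# synthetic decay so that far-apart actuator/sensor pairs couple weakly
C *= np.exp(-0.1 * np.abs(np.arange(n_actuators)[:, None]
                          - np.arange(n_sensors)[None, :]))

threshold = 0.05 * np.abs(C).max()                       # assumed cut-off
C_sparse = csr_matrix(np.where(np.abs(C) >= threshold, C, 0.0))

sensor_signal = rng.normal(size=n_sensors)
commands = C_sparse @ sensor_signal                      # sparse matrix-vector product
print(f"kept {C_sparse.nnz} of {C.size} coefficients")
```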

Relevance: 20.00%

Publisher:

Abstract:

Chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) experiments are widely used to determine, within entire genomes, the occupancy sites of any protein of interest, including, for example, transcription factors, RNA polymerases, or histones with or without various modifications. In addition to allowing the determination of occupancy sites within one cell type and under one condition, this method allows, in principle, the establishment and comparison of occupancy maps in various cell types, tissues, and conditions. Such comparisons require, however, that samples be normalized. Widely used normalization methods that include a quantile normalization step perform well when factor occupancy varies at a subset of sites, but may miss uniform genome-wide increases or decreases in site occupancy. We describe a spike adjustment procedure (SAP) that, unlike commonly used normalization methods intervening at the analysis stage, entails an experimental step prior to immunoprecipitation. A constant, low amount from a single batch of chromatin of a foreign genome is added to the experimental chromatin. This "spike" chromatin then serves as an internal control to which the experimental signals can be adjusted. We show that the method improves similarity between replicates and reveals biological differences including global and largely uniform changes.
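As a simplified, purely illustrative version of the spike-adjustment idea (not the authors' exact pipeline; sample names and read counts are hypothetical), the sketch below rescales each sample's experimental signal by the ratio of spike-in reads recovered in a reference sample to those recovered in that sample, since the same amount of foreign chromatin was added to every sample.

```python
# Hedged sketch of spike-in adjustment: identical amounts of foreign
# ("spike") chromatin were added to every sample, so differences in the
# recovered spike reads reflect technical variation and define per-sample
# scale factors. The read counts below are hypothetical.
samples = {
    "condition_A": {"spike_reads": 120_000, "signal": [10.0, 25.0, 40.0]},
    "condition_B": {"spike_reads": 180_000, "signal": [14.0, 36.0, 60.0]},
}

reference = samples["condition_A"]["spike_reads"]
for name, data in samples.items():
    scale = reference / data["spike_reads"]         # per-sample adjustment factor
    data["adjusted"] = [value * scale for value in data["signal"]]
    print(name, "scale factor:", round(scale, 3), "adjusted:", data["adjusted"])
```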

Relevance: 20.00%

Publisher:

Abstract:

Introduction: The European Foundation for the Improvement of Living and Working Conditions has conducted a survey every 5 years since 1990. The Foundation also offers non-EU countries the possibility of being included in the survey: in 2005, Switzerland took part for the first time, in the fourth edition of the survey. The Institute for Work and Health (IST) has been associated with the Swiss project, conducted under the leadership of SECO and the Fachhochschule Nordwestschweiz. The survey covers different aspects of work such as job characteristics and employment conditions, health and safety, work organization, learning and development opportunities, and the balance between working and non-working life (Parent-Thirion, Fernandez Macias, Hurley, & Vermeylen, 2007). More particularly, one question assesses the worker's self-perception of the effects of work on health. We identified (for the Swiss sample) several factors affecting the risk of reporting health problems caused by work. The Swiss sample includes 1040 respondents. Selection of participants was based on random multi-stage sampling and was carried out by M.I.S Trend S.A. (Lausanne). The participation rate was 59%. The database was weighted by household size, gender, age, region of domicile, occupational group and economic sector. Specially trained interviewers carried out the interviews at the respondents' homes. The survey was carried out between 19 September 2005 and 30 November 2005. As detailed in Graf et al. (2007), 31% of the Swiss respondents identify work as the cause of health problems they experience. The most frequently reported health problems include back pain (18%), stress (17%), muscle pain (13%) and overall fatigue (11%). Ergonomic aspects associated with a higher risk of reporting health problems caused by work include frequent awkward postures (odds ratio [OR] 4.7, 95% confidence interval [CI] 3.1 to 5.4), tasks involving lifting heavy loads (OR 2.7, 95% CI 2.0 to 3.6) or lifting people (OR 2.2, 95% CI 1.4 to 3.5), standing or walking (OR 1.4, 95% CI 1.1 to 1.9), as well as repetitive movements (OR 1.7, 95% CI 1.3 to 2.3). These results highlight the need to continue and intensify the prevention of work-related health problems in occupations characterized by ergonomic risk factors.

Relevance: 20.00%

Publisher:

Abstract:

OBJECTIVES: Gender differences in psychotic disorders have been observed in terms of illness onset and course; however, past research has been limited by inconsistencies between studies and by the lack of epidemiological representativeness of the samples assessed. Thus, the aim of this study was to elucidate gender differences in a treated epidemiological sample of patients with first-episode psychosis (FEP). METHODS: A medical file audit was used to collect data on premorbid, entry, treatment and 18-month outcome characteristics of 661 consecutive FEP patients treated at the Early Psychosis Prevention and Intervention Centre (EPPIC), Melbourne, Australia. RESULTS: Prior to the onset of psychosis, females were more likely to have a history of suicide attempts (p=.011) and depression (p=.001). At service entry, females were more likely to have depressive symptoms (p=.007). Conversely, males had marked substance use problems that were evident prior to admission (p<.001) and persisted through treatment (p<.001). At service entry, males also experienced more severe psychopathology (p<.001) and lower levels of functioning (GAF, p=.008; unemployment/not studying, p=.004; living with family, p=.003). Treatment non-compliance (p<.001) and frequent hospitalisations (p=.047) were also common for males with FEP. At service discharge, males had significantly lower levels of functioning (GAF, p=.008; unemployment/not studying, p=.040; living with family, p=.001) compared to females with FEP. CONCLUSIONS: Gender differences are evident in the illness course of patients with FEP, particularly with respect to past history of psychopathology and functioning at presentation and at service discharge. Strategies to deal with these gender differences need to be considered in early intervention programs.

Relevance: 20.00%

Publisher:

Abstract:

Self-selection into treatment and self-selection into the sample are major concerns in research on voting advice applications (VAAs) and need to be controlled for if the aim is to deduce causal effects of VAA use from observational data. This paper focuses on the methodological aspects of VAA research and outlines omnipresent endogeneity issues, partly arising from unobserved factors that affect both whether individuals choose to use VAAs and their electoral behavior. We promote the use of Heckman selection models and apply various versions of the model to data on the Swiss electorate and smartvote users in order to see to what extent selection biases interfere with the estimated effects of interest.
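As an illustrative sketch of the Heckman selection logic the paper applies (the two-step variant, run here on synthetic data rather than the smartvote or Swiss electoral survey data), the code below fits a probit selection equation, computes the inverse Mills ratio, and includes it as a regressor in the outcome equation; the variable names and the data-generating process are assumptions.

```python
# Hedged sketch of a two-step Heckman selection model on synthetic data:
# step 1, probit for whether the outcome is observed (e.g. selection into
# the sample or into VAA use); step 2, OLS of the outcome with the inverse
# Mills ratio added to correct for selection. All variables are synthetic.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=(n, 2))                      # selection covariates (incl. an instrument)
x = z[:, [0]]                                    # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)   # correlated errors
selected = (0.5 + z @ np.array([1.0, 1.0]) + u[:, 0]) > 0
y = 1.0 + 2.0 * x[:, 0] + u[:, 1]                # outcome, observed only if selected

# Step 1: probit selection equation on the full sample
Z = sm.add_constant(z)
probit = sm.Probit(selected.astype(float), Z).fit(disp=False)
xb = Z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)              # inverse Mills ratio

# Step 2: outcome equation on the selected sample, augmented with the Mills ratio
X = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
ols = sm.OLS(y[selected], X).fit()
print(ols.params)                                # constant, slope, selection term
```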