Resumo:
The reported prevalence of late-life depressive symptoms varies widely between studies, a finding that might be attributed to cultural as well as methodological factors. The EURO-D scale was developed to allow valid comparison of prevalence and risk associations between European countries. This study used Confirmatory Factor Analysis (CFA) and Rasch models to assess whether the goal of measurement invariance had been achieved, using EURO-D scale data collected in 10 European countries as part of the Survey of Health, Ageing and Retirement in Europe (SHARE) (n = 22,777). The results suggested a two-factor solution (Affective Suffering and Motivation) after Principal Component Analysis (PCA) in 9 of the 10 countries. With CFA, in all countries, the two-factor solution had better overall goodness-of-fit than the one-factor solution. However, only the Affective Suffering subscale was equivalent across countries, while the Motivation subscale was not. The Rasch model indicated that the EURO-D is a hierarchical scale. While the calibration pattern was similar across countries, between-country agreement in item calibrations was stronger for the items loading on the Affective Suffering factor than for those loading on the Motivation factor. In conclusion, there is evidence to support the EURO-D as either a uni-dimensional or a bi-dimensional measure of depressive symptoms in late life across European countries. The Affective Suffering sub-component had more robust cross-cultural validity than the Motivation sub-component.
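As a rough illustration of the PCA step described above, the sketch below generates synthetic item responses from an assumed two-factor structure (the data, item counts and loadings are invented for demonstration; this is not SHARE data) and retains components by the Kaiser eigenvalue-greater-than-one rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 12 EURO-D item responses, generated from two
# latent factors mimicking "Affective Suffering" and "Motivation".
n, k = 500, 12
latent = rng.normal(size=(n, 2))
loadings = np.zeros((k, 2))
loadings[:7, 0] = 0.7          # items 1-7 load on factor 1
loadings[7:, 1] = 0.7          # items 8-12 load on factor 2
X = latent @ loadings.T + 0.5 * rng.normal(size=(n, k))

# Principal Component Analysis on the item correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain components with eigenvalue > 1
n_retained = int(np.sum(eigvals > 1.0))
print(n_retained)   # two dominant components expected under this setup
```

Under this construction the first two eigenvalues dominate and the remainder fall well below one, so the two-factor solution is recovered.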
Resumo:
Jana stands at the podium, palms sweaty, looking out at hundreds of colleagues who are waiting to hear about her new initiative. Bill walks into a meeting after a failed product launch to greet an exhausted and demotivated team that desperately needs his direction. Robin gets ready to confront a brilliant but underperforming subordinate who needs to be put back on track. We've all been in situations like these. What they require is charisma: the ability to communicate a clear, visionary, and inspirational message that captivates and motivates an audience. In this article, we discuss how one learns to be more charismatic.
Resumo:
According to the most widely accepted Cattell-Horn-Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Intelligence Scale for Adults (3rd ed.; WAIS-III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a Schmid-Leiman orthogonalization transformation (SLT) to the standardization data published in the French technical manual for the WAIS-III. Results showed that the general factor accounted for 63% of the common variance and that the specific contributions of the 1st-order factors were weak (4.7%-15.9%). We also addressed this issue by using confirmatory factor analysis. Results indicated that the bifactor model (with 1st-order group and general factors) better fit the data than did the traditional higher order structure. Models based on the CHC framework were also tested. Results indicated that a higher order CHC model showed a better fit than did the classical 4-factor model; however, the WAIS bifactor structure was the most adequate. We recommend that users do not discount the Full Scale IQ when interpreting the index scores of the WAIS-III because the general factor accounts for the bulk of the common variance in the French WAIS-III. The 4 index scores cannot be considered to reflect only broad ability because they include a strong contribution of the general factor.
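The Schmid-Leiman step described above can be sketched numerically: given first-order loadings and the loadings of the group factors on a general factor, the transformation yields orthogonal general and residualized group loadings, from which the general factor's share of common variance follows. All loading values below are made up for demonstration, not taken from the French WAIS-III manual:

```python
import numpy as np

# Illustrative higher-order solution: 8 subtests, 4 group factors,
# each subtest loading 0.7 on exactly one group factor (an assumption).
F = np.zeros((8, 4))
for j in range(4):
    F[2 * j:2 * j + 2, j] = 0.7

# Assumed loadings of the group factors on the general factor.
g = np.array([0.80, 0.85, 0.70, 0.75])

# Schmid-Leiman orthogonalization:
G = F @ g                          # general-factor loadings of each subtest
U = F * np.sqrt(1 - g**2)          # residualized group-factor loadings

common = (G**2).sum() + (U**2).sum()   # total common variance
share_g = (G**2).sum() / common        # share due to the general factor
print(round(share_g, 2))               # ~0.60 with these illustrative loadings
```

With these invented loadings the general factor accounts for roughly 60% of the common variance, qualitatively mirroring the 63% reported for the French standardization data.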
Resumo:
Trypanosomosis is the most economically important disease constraint to livestock productivity in sub-Saharan Africa and has significant negative impact in other parts of the world. Livestock are an integral component of farming systems and thus contribute significantly to food and economic security in developing countries. Current methods of control for trypanosomosis are inadequate to prevent the enormous socioeconomic losses resulting from this disease. A vaccine has been viewed as the most desirable control option. However, the complexity of the parasite's antigenic repertoire made development of a vaccine based on the variable surface glycoprotein coat unlikely. As a result, research is now focused on identifying invariant trypanosome components as potential targets for interrupting infection or infection-mediated disease. Immunosuppression appears to be a nearly universal feature of infection with African trypanosomes and thus may represent an essential element of the host-parasite relationship, possibly by reducing the host's ability to mount a protective immune response. Antibody, T cell and macrophage/monocyte responses of infected cattle are depressed in both trypanosusceptible and trypanotolerant breeds of cattle. This review describes the specific T cell and monocyte/macrophage functions that are altered in trypanosome-infected cattle and compares these disorders with those that have been described in the murine model of trypanosomosis. The identification of parasite factors that induce immunosuppression and the mechanisms that mediate depressed immune responses might suggest novel disease intervention strategies.
Resumo:
To date, there is no widely accepted clinical scale to monitor the evolution of depressive symptoms in demented patients. We assessed the sensitivity to treatment of a validated French version of the Health of the Nation Outcome Scale (HoNOS) 65+ compared to five routinely used scales. Thirty elderly inpatients with an ICD-10 diagnosis of dementia and depression were evaluated at admission and discharge using paired t-tests. Using the Brief Psychiatric Rating Scale (BPRS) "depressive mood" item as the gold standard, a receiver operating characteristic (ROC) curve analysis assessed the validity of HoNOS65+F "depressive symptoms" item score changes. Unlike the Geriatric Depression Scale, Mini Mental State Examination and Activities of Daily Living scores, BPRS scores decreased and the Global Assessment of Functioning Scale score increased significantly from admission to discharge. Amongst the HoNOS65+F items, the "behavioural disturbance", "depressive symptoms", "activities of daily life" and "drug management" items showed highly significant changes between the first and last day of hospitalization. The ROC analysis revealed that changes in the HoNOS65+F "depressive symptoms" item correctly classified 93% of the cases with good sensitivity (0.95) and specificity (0.88) values. These data suggest that the HoNOS65+F "depressive symptoms" item may provide a valid assessment of the evolution of depressive symptoms in demented patients.
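The sensitivity and specificity figures in a validation like the one above come from cross-tabulating thresholded score changes against the gold standard. A toy sketch with invented numbers (not the study's data; the threshold of a one-point drop is an assumption):

```python
import numpy as np

# Invented item score changes (admission to discharge) and gold-standard
# improvement labels (1 = improved on the BPRS "depressive mood" item).
delta = np.array([-2, -3, 0, -1, -4, 1, -2, 0, -3, -1])
gold  = np.array([ 1,  1, 0,  1,  1, 0,  1, 0,  1,  0])

pred = (delta <= -1).astype(int)   # classify "improved" if score fell by >= 1

tp = int(np.sum((pred == 1) & (gold == 1)))
tn = int(np.sum((pred == 0) & (gold == 0)))
fp = int(np.sum((pred == 1) & (gold == 0)))
fn = int(np.sum((pred == 0) & (gold == 1)))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)
```

Sweeping the threshold and plotting sensitivity against 1 - specificity at each cut-off traces the ROC curve used in the analysis.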
Resumo:
The 'Transforming Your Care (TYC)' consultation relates to proposals for changes in the delivery of Health and Social Care in Northern Ireland in the context of the TYC report published in December 2011. TYC is about making changes to ensure safe, high-quality and sustainable services for patients, service users and staff. TYC sets out proposals for how health and social services will need to adapt and be organised to best meet the needs associated with population ageing, increasing long-term conditions and other challenges. Key points from the IPH response include: IPH welcomes the HSC commitment to transform health and social care services to meet Northern Ireland's changing population health needs. Inequalities are a dominant feature of health service utilisation patterns in Northern Ireland; for example, hospital admission rates for self-harm and alcohol-related admissions in the most deprived areas are double the regional figure. IPH recommends that
Resumo:
Synthesis report. Introduction: The Glasgow Coma Score (GCS) is a recognized tool for the evaluation of patients who have suffered head trauma. It is known for its simplicity and reproducibility, allowing caregivers an appropriate and continuous evaluation of patients' neurological status. The GCS comprises three categories evaluating the ocular, verbal and motor responses. In Switzerland, prehospital care of patients with severe head trauma is provided by physicians, mainly aboard medicalized helicopters. Before the general anaesthesia these patients require, an evaluation of the GCS is essential to inform hospital staff of the severity of the brain injury. To evaluate knowledge of the GCS among physicians aboard medicalized helicopters in Switzerland, we designed a questionnaire containing a first part with questions on general knowledge of the GCS, followed by a clinical case. Objective: To evaluate the practical and theoretical knowledge of the GCS among physicians working aboard medicalized helicopters in Switzerland. Method: Prospective, anonymized observational study using a questionnaire. Evaluation of general knowledge of the GCS and of its clinical use in a case presentation. Results: 16 of the 18 Swiss medicalized helicopter bases participated in our study. 130 questionnaires were sent and the response rate was 79.2%. Theoretical knowledge of the GCS was comparable for all physicians regardless of their level of training. Errors in the assessment of the clinical case were present in 36.9% of participants: 27.2% made errors in the motor score and 18.5% in the verbal score.
Errors were most frequent among resident physicians (47.5%, p=0.09), followed by senior registrars (31.6%, p=0.67) and physicians in private practice (18.4%, p=1.00). Senior staff physicians made significantly fewer errors than the other participants (0%, p<0.05). No significant difference was observed between specialties (anaesthesia, internal medicine, general medicine and "other"). Conclusion: Even though theoretical knowledge of the GCS is adequate among physicians working aboard medicalized helicopters, errors in its clinical application occur in more than a third of cases. The physicians with the least professional experience make the most errors. Given the importance of a correct initial Glasgow score, improving this knowledge is essential.
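As background, the GCS total is simply the sum of its three components: eye opening (1-4), verbal response (1-5) and motor response (1-6), for a range of 3-15. A minimal scoring helper (the function and its bounds checking are an illustration, not taken from the study):

```python
# Minimal sketch of Glasgow Coma Scale scoring.
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Return the GCS total (3-15) from its three components."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component out of range")
    return eye + verbal + motor

# A patient opening eyes to pain (2), making incomprehensible sounds (2)
# and withdrawing from pain (4) scores 8, a commonly cited intubation threshold.
print(glasgow_coma_score(2, 2, 4))  # → 8
```

The errors reported in the study concern assigning the component scores from clinical findings, not the arithmetic; the motor and verbal categories are where most misclassification occurred.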
Resumo:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
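The core "ping-pong" idea of alternating a signature between two paired data sets can be sketched as an ISA-style iteration: a gene signature is mapped through the expression matrix to the shared samples, over to the drug-response matrix, and back, with thresholding at each step. This is a hedged illustration of the general scheme on synthetic data, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_drugs, n_cells = 50, 20, 15
E = rng.normal(size=(n_genes, n_cells))   # gene expression (genes x cell lines)
D = rng.normal(size=(n_drugs, n_cells))   # drug response  (drugs x cell lines)

# Plant a co-module: genes 0-9 and drugs 0-4 share a pattern on cells 0-6.
E[:10, :7] += 3.0
D[:5, :7] += 3.0

genes = np.ones(n_genes)                  # uninformative all-ones start
for _ in range(20):
    cells = E.T @ genes                   # gene signature -> cell-line profile
    drugs = D @ cells                     # cell profile -> drug scores
    drugs = np.where(np.abs(drugs) > np.abs(drugs).mean(), drugs, 0.0)
    cells = D.T @ drugs                   # back to cells via the drug module
    genes = E @ cells                     # cells -> gene scores
    genes = np.where(np.abs(genes) > np.abs(genes).mean(), genes, 0.0)
    genes /= np.linalg.norm(genes) + 1e-12

top_genes = set(np.argsort(-np.abs(genes))[:10])
print(sorted(top_genes))   # the planted genes 0-9 should dominate
```

The alternation lets each data set refine the module found in the other, which is what allows coherent drug-gene associations to emerge from the paired matrices.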
Resumo:
Let A be a simple, separable C*-algebra of stable rank one. We prove that the Cuntz semigroup of C(T, A) is determined by its Murray-von Neumann semigroup of projections and a certain semigroup of lower semicontinuous functions (with values in the Cuntz semigroup of A). This result has two consequences. First, specializing to the case that A is simple, finite, separable and Z-stable, this yields a description of the Cuntz semigroup of C(T, A) in terms of the Elliott invariant of A. Second, suitably interpreted, it shows that the Elliott functor and the functor defined by the Cuntz semigroup of the tensor product with the algebra of continuous functions on the circle are naturally equivalent.
Resumo:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high- resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. 
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
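In its standard form (which the thesis may refine), a gradual-deformation perturbation combines the current Gaussian realization with an independent one as m(theta) = m1 cos(theta) + m2 sin(theta); since cos²(theta) + sin²(theta) = 1, the mean and covariance of the Gaussian prior are preserved for any theta, and theta tunes the perturbation strength. A minimal sketch of that property (illustrative, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def gradual_deformation(m1: np.ndarray, m2: np.ndarray, theta: float) -> np.ndarray:
    """Combine two realizations while preserving the Gaussian prior."""
    return m1 * np.cos(theta) + m2 * np.sin(theta)

m1 = rng.normal(size=100_000)   # current model realization
m2 = rng.normal(size=100_000)   # independent innovation realization
theta = 0.3                     # small theta => gentle perturbation

m_new = gradual_deformation(m1, m2, theta)
print(round(m_new.std(), 2))    # variance is preserved: std stays ~1
```

Used as an MCMC proposal, small values of theta give highly correlated, easily accepted moves, while larger values explore more aggressively; the flexibility in theta and in the number of perturbed parameters is what the thesis exploits.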
Resumo:
Mammalian genomes contain highly conserved sequences that are not functionally transcribed. These sequences are single copy and comprise approximately 1-2% of the human genome. Evolutionary analysis strongly supports their functional conservation, although their potentially diverse, functional attributes remain unknown. It is likely that genomic variation in conserved non-genic sequences is associated with phenotypic variability and human disorders. So how might their function and contribution to human disorders be examined?
Resumo:
In the PhD thesis "Sound Texture Modeling" we deal with the statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows the sonic exploration of a database of units by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds.
Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
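The wavelet tree underlying the initial model can be sketched with a single-level Haar step applied recursively: each level splits the signal into local averages (approximation) and local differences (detail). This is a minimal illustration; the thesis models the coefficient tree with a hidden Markov tree, which is not shown here:

```python
import numpy as np

def haar_step(x: np.ndarray):
    """One orthonormal Haar analysis step (x must have even length)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: local averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: local differences
    return approx, detail

def haar_tree(x: np.ndarray, levels: int):
    """Recursive decomposition, returning detail coefficients per level."""
    tree = []
    for _ in range(levels):
        x, d = haar_step(x)
        tree.append(d)
    return x, tree

signal = np.sin(np.linspace(0, 8 * np.pi, 64))  # stand-in for a texture frame
approx, details = haar_tree(signal, 3)
print(approx.shape, [d.shape for d in details])
```

Because the Haar basis is orthonormal, the signal's energy is exactly the sum of the energies of the approximation and all detail bands, which is what makes the coefficient statistics a faithful representation of the texture.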
Resumo:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
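The non-parametric kernel density step can be sketched as follows: fit a bivariate KDE to collocated log electrical and log hydraulic conductivity samples, then estimate the hydraulic conductivity where only the electrical conductivity is known by taking the conditional expectation under that density. The data and the linear in situ relation below are synthetic assumptions for illustration, not the field values:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Synthetic collocated well logs (assumed relation: log K ~ 1.5 * log sigma).
log_sigma = rng.normal(-2.0, 0.5, size=300)        # log electrical conductivity
log_K = 1.5 * log_sigma + rng.normal(0.0, 0.3, size=300)  # log hydraulic cond.

# Bivariate kernel density estimate of the joint distribution.
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def expected_log_K(sigma_obs: float, grid=np.linspace(-8.0, 0.0, 200)) -> float:
    """E[log K | log sigma = sigma_obs], integrated over a grid of K values."""
    w = kde(np.vstack([np.full_like(grid, sigma_obs), grid]))
    return float(np.sum(grid * w) / np.sum(w))

print(round(expected_log_K(-2.0), 1))   # close to 1.5 * (-2.0) = -3.0
```

In the study's actual procedure this conditional density is sampled stochastically within the Bayesian sequential simulation rather than collapsed to its mean, so the simulated fields preserve realistic variability.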