941 results for Economies of scale
Abstract:
To date, there is no widely accepted clinical scale for monitoring the evolution of depressive symptoms in demented patients. We assessed the sensitivity to treatment of a validated French version of the Health of the Nation Outcome Scale (HoNOS) 65+ compared with five routinely used scales. Thirty elderly inpatients with an ICD-10 diagnosis of dementia and depression were evaluated at admission and discharge, and score changes were compared using paired t-tests. Using the Brief Psychiatric Rating Scale (BPRS) "depressive mood" item as the gold standard, a receiver operating characteristic (ROC) curve analysis assessed the validity of HoNOS65+F "depressive symptoms" item score changes. Unlike the Geriatric Depression Scale, Mini-Mental State Examination and Activities of Daily Living scores, BPRS scores decreased and Global Assessment of Functioning Scale scores increased significantly from admission to discharge. Amongst HoNOS65+F items, the "behavioural disturbance", "depressive symptoms", "activities of daily life" and "drug management" items showed highly significant changes between the first and last day of hospitalization. The ROC analysis revealed that changes in the HoNOS65+F "depressive symptoms" item correctly classified 93% of the cases with good sensitivity (0.95) and specificity (0.88) values. These data suggest that the HoNOS65+F "depressive symptoms" item may provide a valid assessment of the evolution of depressive symptoms in demented patients.
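The ROC step described above can be sketched in a few lines of NumPy. The data, the "score >= threshold" decision rule, and the Youden-style threshold choice are illustrative assumptions for this sketch, not the paper's actual procedure:

```python
import numpy as np

def roc_points(scores, labels):
    """Sensitivity/specificity of the rule 'score >= t' for each candidate t.

    scores: change scores on the index item (e.g. item-score deltas)
    labels: 1 = improved on the gold-standard item, 0 = not improved
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    points = []
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        points.append((t, tp / (tp + fn), tn / (tn + fp)))
    return points

def best_threshold(points):
    # Youden's J = sensitivity + specificity - 1; pick the maximising point
    return max(points, key=lambda p: p[1] + p[2] - 1)
```

With a cleanly separated toy sample, the optimal cut-off recovers perfect sensitivity and specificity; on real score changes the two trade off along the curve.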
Abstract:
Synthesis report. Introduction: The Glasgow Coma Score (GCS) is a recognised tool for assessing patients who have sustained a head injury. It is known for its simplicity and reproducibility, allowing caregivers an appropriate and continuous evaluation of patients' neurological status. The GCS comprises three categories assessing the eye, verbal and motor responses. In Switzerland, prehospital care of patients with severe head trauma is provided by physicians, mainly on board medicalised helicopters. Before the general anaesthesia these patients require, a GCS assessment is essential to indicate to hospital staff the severity of the brain injury. To evaluate knowledge of the GCS among physicians on board medicalised helicopters in Switzerland, we designed a questionnaire containing, in a first part, questions on general knowledge of the GCS, followed by a clinical case. Objective: To evaluate the practical and theoretical knowledge of the GCS among physicians working on board medicalised helicopters in Switzerland. Method: Prospective, anonymised observational study using a questionnaire. Assessment of general knowledge of the GCS and of its clinical use in a presented case. Results: 16 of the 18 Swiss medicalised helicopter bases took part in our study. 130 questionnaires were sent out and the response rate was 79.2%. Theoretical knowledge of the GCS was comparable across all physicians regardless of their level of training. Errors in assessing the clinical case were present in 36.9% of participants: 27.2% made errors in the motor score and 18.5% in the verbal score.
Errors were most frequent among resident physicians (47.5%, p=0.09), followed by senior registrars (31.6%, p=0.67) and physicians in private practice (18.4%, p=1.00). Senior attending physicians made significantly fewer errors than the other participants (0%, p<0.05). No significant difference was observed between specialties (anaesthesia, internal medicine, general medicine and "other"). Conclusion: Even though theoretical knowledge of the GCS is adequate among physicians working on board medicalised helicopters, errors in its clinical application occur in more than a third of cases. The physicians with the least professional experience make the most errors. Given the importance of a correct initial Glasgow score, an improvement in this knowledge is indispensable.
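For reference, the GCS arithmetic that the questionnaire tests is straightforward to encode. The severity bands below are the conventional mild/moderate/severe cut-offs for traumatic brain injury, not values derived from this study:

```python
def glasgow_coma_score(eye, verbal, motor):
    """Total GCS from the three component scores.

    eye: 1-4, verbal: 1-5, motor: 1-6; the total ranges from
    3 (deep coma) to 15 (fully alert).
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component score out of range")
    return eye + verbal + motor

def tbi_severity(total):
    """Conventional severity bands for traumatic brain injury."""
    if not 3 <= total <= 15:
        raise ValueError("GCS total must be between 3 and 15")
    if total <= 8:
        return "severe"
    if total <= 12:
        return "moderate"
    return "mild"
```

A patient scoring E1V1M5, for instance, totals 7 and falls in the severe band, which is precisely the range where a correct prehospital assessment drives the decision to intubate.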
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
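The alternating ("ping-pong") flavour of such an integration can be illustrated with a toy sketch: a candidate gene set is projected through the shared samples into drug space, thresholded, and projected back, until membership stabilises. This is a simplified stand-in with hypothetical matrix names and a plain quantile threshold, not the published algorithm:

```python
import numpy as np

def ping_pong(expr, resp, seed_genes, iterations=10, keep=0.3):
    """Toy alternating refinement between two paired data matrices.

    expr: genes x samples expression matrix
    resp: drugs x samples response matrix (same sample columns)
    seed_genes: initial 0/1 indicator vector over genes
    Each step projects the gene signature into drug space and back,
    keeping roughly the top `keep` fraction at each side.
    """
    g = np.asarray(seed_genes, dtype=float)
    for _ in range(iterations):
        sample_profile = expr.T @ g                  # genes -> samples
        d = resp @ sample_profile                    # samples -> drugs
        d = (d >= np.quantile(d, 1 - keep)).astype(float)
        sample_profile = resp.T @ d                  # drugs -> samples
        g_new = expr @ sample_profile                # samples -> genes
        g_new = (g_new >= np.quantile(g_new, 1 - keep)).astype(float)
        if np.array_equal(g_new, g):                 # converged co-module
            break
        g = g_new
    return g, d
```

On block-structured input (a subset of genes and drugs coherently active in the same samples), the iteration recovers both sides of the block from a single seed gene.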
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
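A gradual-deformation proposal of the kind described can be sketched as the standard cosine/sine combination of two Gaussian realizations, which preserves the prior marginals while allowing the step size to be tuned. This is an illustrative sketch of the generic technique under that assumption, not the thesis implementation (field correlation would come from generating the perturbing realization with the same covariance model):

```python
import numpy as np

def gradual_deformation_proposal(m_current, rng, theta=0.15):
    """Propose a correlated perturbation of a Gaussian model realization.

    m_current: current model (standard normal field, any shape)
    theta: perturbation strength; theta -> 0 gives tiny MCMC steps,
    theta = pi/2 an independent redraw. Because cos^2 + sin^2 = 1, the
    proposal keeps the N(0, 1) prior marginals exactly.
    """
    z = rng.standard_normal(m_current.shape)  # independent prior draw
    return np.cos(theta) * m_current + np.sin(theta) * z
```

The correlation between the current and proposed models is cos(theta), so theta directly controls how incremental each MCMC step is, which is what makes the scheme efficient for drawing independent posterior samples.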
Abstract:
The objective of this paper is to analyze why firms in some industries locate in specialized economic environments (localization economies) while those in other industries prefer large city locations (urbanization economies). To this end, we examine the location decisions of new manufacturing firms in Spain at the city level and for narrowly defined industries (three-digit level). First, we estimate firm location models to obtain estimates that reflect the importance of localization and urbanization economies in each industry. In a second step, we regress these estimates on industry characteristics that are related to the potential importance of three agglomeration theories, namely, labor market pooling, input sharing and knowledge spillovers. Localization effects are low and urbanization effects are high in knowledge-intensive industries, suggesting that firms (partly) locate in large cities to reap the benefits of inter-industry knowledge spillovers. We also find that localization effects are high in industries that employ workers whose skills are more industry-specific, suggesting that industries (partly) locate in specialized economic environments to share a common pool of specialized workers.
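The second step of the two-step design described above is an ordinary regression of the first-step estimates on industry characteristics. The sketch below illustrates that step with hypothetical regressor names (knowledge intensity, skill specificity); the paper's actual specification and estimator may differ:

```python
import numpy as np

def second_step_ols(agglomeration_estimates, industry_traits):
    """Step 2: regress first-step localization/urbanization estimates
    on industry characteristics.

    agglomeration_estimates: (n_industries,) first-step coefficients
    industry_traits: (n_industries, k) matrix, e.g. columns for
    knowledge intensity and skill specificity (hypothetical names)
    Returns OLS coefficients, intercept first.
    """
    X = np.column_stack([np.ones(len(industry_traits)), industry_traits])
    beta, *_ = np.linalg.lstsq(X, agglomeration_estimates, rcond=None)
    return beta
```

A positive slope on, say, skill specificity would correspond to the paper's finding that industries with more industry-specific skills show stronger localization effects.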
Abstract:
OBJECTIVE: To determine the psychometric properties of an adapted version of the Falls Efficacy Scale (FES) in older rehabilitation patients. DESIGN: Cross-sectional survey. SETTING: Postacute rehabilitation facility in Switzerland. PARTICIPANTS: Seventy elderly persons aged 65 years and older receiving postacute, inpatient rehabilitation. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: FES questions asked about subject's confidence (range, 0 [none]-10 [full]) in performing 12 activities of daily living (ADLs) without falling. Construct validity was assessed using correlation with measures of physical (basic ADLs [BADLs]), cognitive (Mini-Mental State Examination [MMSE]), affective (15-item Geriatric Depression Scale [GDS]), and mobility (Performance Oriented Mobility Assessment [POMA]) performance. Predictive validity was assessed using the length of rehabilitation stay as the outcome. To determine test-retest reliability, FES administration was repeated in a random subsample (n=20) within 72 hours. RESULTS: FES scores ranged from 10 to 120 (mean, 88.7+/-26.5). Internal consistency was optimal (Cronbach alpha=.90), and item-to-total correlations were all significant, ranging from .56 (toilet use) to .82 (reaching into closets). Test-retest reliability was high (intraclass correlation coefficient, .97; 95% confidence interval, .95-.99; P<.001). Subjects reporting a fall in the previous year had lower FES scores than nonfallers (85.0+/-25.2 vs 94.4+/-27.9, P=.054). The FES correlated with POMA (Spearman rho=.40, P<.001), MMSE (rho=.37, P=.001), BADL (rho=.43, P<.001), and GDS (rho=-.53, P<.001) scores. These relationships remained significant in multivariable analysis for BADLs and GDS, confirming FES construct validity. There was a significant inverse relationship between FES score and the length of rehabilitation stay, independent of sociodemographic, functional, cognitive, and fall status. 
CONCLUSIONS: This adapted FES is reliable and valid in older patients undergoing postacute rehabilitation. The independent association between poor falls efficacy and increased length of stay has not been previously described and needs further investigation.
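The Cronbach alpha reported above follows a standard formula and can be computed directly from the item-response matrix:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: (n_subjects, n_items) matrix of item responses.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total)
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)
```

When all items move together (perfect internal consistency) alpha equals 1; values near .90, as in this FES sample, indicate that the 12 confidence items measure a single underlying construct.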
Abstract:
BACKGROUND: Accurate catalogs of structural variants (SVs) in mammalian genomes are necessary to elucidate the potential mechanisms that drive SV formation and to assess their functional impact. Next generation sequencing methods for SV detection are an advance on array-based methods, but are almost exclusively limited to four basic types: deletions, insertions, inversions and copy number gains. RESULTS: By visual inspection of 100 Mbp of genome to which next generation sequence data from 17 inbred mouse strains had been aligned, we identify and interpret 21 paired-end mapping patterns, which we validate by PCR. These paired-end mapping patterns reveal a greater diversity and complexity in SVs than previously recognized. In addition, Sanger-based sequence analysis of 4,176 breakpoints at 261 SV sites reveal additional complexity at approximately a quarter of structural variants analyzed. We find micro-deletions and micro-insertions at SV breakpoints, ranging from 1 to 107 bp, and SNPs that extend breakpoint micro-homology and may catalyze SV formation. CONCLUSIONS: An integrative approach using experimental analyses to train computational SV calling is essential for the accurate resolution of the architecture of SVs. We find considerable complexity in SV formation; about a quarter of SVs in the mouse are composed of a complex mixture of deletion, insertion, inversion and copy number gain. Computational methods can be adapted to identify most paired-end mapping patterns.
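The logic behind paired-end mapping patterns can be illustrated with a minimal classifier for the four basic SV classes. The orientation codes, expected insert size, and tolerance below are illustrative assumptions; the paper's 21 patterns combine several such signatures and more:

```python
def classify_read_pair(orientation, insert_size, expected=500, tolerance=150):
    """Map a discordant read-pair signature to a basic SV class.

    orientation: '+-' is the concordant forward/reverse pattern;
    '++' or '--' suggests an inversion breakpoint; '-+' an everted,
    tandem-duplication-like pair. Mapped insert sizes far above the
    expected library size suggest a deletion in the sample, far below
    an insertion. (A minimal sketch, not the paper's full pattern set.)
    """
    if orientation in ("++", "--"):
        return "inversion"
    if orientation == "-+":
        return "duplication"
    if insert_size > expected + tolerance:
        return "deletion"
    if insert_size < expected - tolerance:
        return "insertion"
    return "concordant"
```

The paper's central point is that many real SVs produce combinations of these signatures at one locus, which is why breakpoint sequencing revealed complexity at about a quarter of sites.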
Abstract:
A proposal to pilot nursing assessment of self harm in Accident and Emergency Departments (A&E) was developed by key stakeholders in nurse education and suicide prevention in the South East and submitted to the National Council for the Professional Development of Nursing and Midwifery in April 2002. The proposal included the introduction of a suicide intent scale. Following an initial training programme, a suicide intent scale was utilised by nursing staff in A&E and the Medical Assessment Unit (MAU), Wexford General Hospital, and evaluated over a period of nine months. Four months into the study the National Suicide Research Foundation (NSRF) was invited to collaboratively prepare a successful submission to the Health Research Board (HRB) as part of ‘Building Partnerships for a Healthier Future Research Awards 2004’. The NSRF undertook independent scientific evaluation of the outcomes of the suicide awareness programme. The study is in line with priorities determined by Reach Out, the National Strategy for Action on Suicide Prevention 2005-2014 (HSE, 2005) and the HSE-South East Suicide Prevention Programme, through raising nursing staff awareness of the public health issue of suicide/deliberate self harm and by improving the efficiency and quality of nursing services offered to persons who present to acute hospitals with deliberate self harm. The study findings indicate evidence to positively support nursing assessment of DSH using a suicide intent scale in terms of assessing behavioural characteristics of individual clients and their suicide risk. Enhanced confidence levels of nursing personnel in caring for suicidal clients were demonstrated by staff who participated in an education programme related to risk assessment and specifically the use of a suicide intent scale. This resource was contributed by The National Documentation Centre on Drug Use.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field.
We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
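The second step of the approach, relating downscaled geophysical data to hydraulic conductivity, can be sketched as fitting a local petrophysical link from the collocated borehole data and then drawing stochastic realizations from it. The simple log-linear Gaussian link below is an illustrative stand-in for the nonparametric relationship one would typically estimate in practice:

```python
import numpy as np

def fit_petrophysical_link(log_sigma, log_K):
    """Fit the local K-sigma relationship from collocated borehole data.

    Returns slope, intercept and residual std of log10(K) regressed on
    log10(sigma): a simple Gaussian petrophysical link (illustrative).
    """
    slope, intercept = np.polyfit(log_sigma, log_K, 1)
    resid = log_K - (slope * log_sigma + intercept)
    return slope, intercept, resid.std(ddof=2)

def simulate_log_K(log_sigma_field, link, rng):
    """Draw one stochastic realization of log10(K) from a downscaled
    log10(sigma) field using the fitted link plus Gaussian residuals."""
    slope, intercept, sd = link
    noise = rng.standard_normal(log_sigma_field.shape) * sd
    return slope * log_sigma_field + intercept + noise
```

Repeating the draw yields an ensemble of conductivity fields whose spread reflects the scatter in the petrophysical relationship, which is what the transport-based validation in the abstract exercises.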
Abstract:
The purpose of this study was to evaluate the factor structure and the reliability of the French versions of the Identity Style Inventory (ISI-3) and the Utrecht-Management of Identity Commitments Scale (U-MICS) in a sample of college students (N = 457, 18 to 25 years old). Confirmatory factor analyses confirmed the hypothesized three-factor solution of the ISI-3 identity styles (i.e. informational, normative, and diffuse-avoidant styles), the one-factor solution of the ISI-3 identity commitment, and the three-factor structure of the U-MICS (i.e. commitment, in-depth exploration, and reconsideration of commitment). Additionally, theoretically consistent and meaningful associations among the ISI-3, U-MICS, and Ego Identity Process Questionnaire (EIPQ) confirmed convergent validity. Overall, the results of the present study indicate that the French versions of the ISI-3 and U-MICS are useful instruments for assessing identity styles and processes, and provide additional support to the cross-cultural validity of these tools.
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using a digital elevation model (DEM) within a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. It was shown that the quality of the DEM is the most important parameter for obtaining reliable propagation results, but also for identifying potential debris flow sources.
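The spreading estimation can be illustrated with a crude multiple-flow-direction sketch on a DEM grid: each cell passes its "susceptibility mass" to lower neighbours in proportion to the elevation drop. This is a toy illustration of the general idea; the actual Flow-R model additionally uses persistence and energy criteria and different spreading algorithms:

```python
import numpy as np

def spread_downslope(dem, source, steps=50):
    """Propagate a unit susceptibility mass from a source cell downslope.

    dem: 2-D elevation array; source: (row, col) index of the source cell.
    Each step, every cell splits its mass among strictly lower
    4-neighbours in proportion to the elevation drop; flow stops at
    local minima. Returns the accumulated mass that visited each cell.
    """
    mass = np.zeros_like(dem, dtype=float)
    mass[source] = 1.0
    visited = mass.copy()
    rows, cols = dem.shape
    for _ in range(steps):
        new = np.zeros_like(mass)
        for r in range(rows):
            for c in range(cols):
                if mass[r, c] <= 0:
                    continue
                drops = {}
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and dem[rr, cc] < dem[r, c]:
                        drops[(rr, cc)] = dem[r, c] - dem[rr, cc]
                total = sum(drops.values())
                if total == 0:
                    continue  # local minimum: flow stops here
                for (rr, cc), d in drops.items():
                    new[rr, cc] += mass[r, c] * d / total
        mass = new
        visited += mass
        if mass.sum() == 0:
            break
    return visited
```

On a uniformly tilted plane the mass runs straight down the fall line, which also shows why DEM quality dominates the result: spurious pits or steps in the elevation data redirect or trap the simulated flow.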
Abstract:
Studying patterns of species distributions along elevation gradients is frequently used to identify the primary factors that determine the distribution, diversity and assembly of species. However, despite their crucial role in ecosystem functioning, our understanding of the distribution of below-ground fungi is still limited, calling for more comprehensive studies of fungal biogeography along environmental gradients at various scales (from regional to global). Here, we investigated the richness of taxa of soil fungi and their phylogenetic diversity across a wide range of grassland types along a 2800 m elevation gradient at a large number of sites (213), stratified across a region of the Western Swiss Alps (700 km²). We used 454 pyrosequencing to obtain fungal sequences that were clustered into operational taxonomic units (OTUs). The OTU diversity-area relationship revealed uneven distribution of fungal taxa across the study area (i.e. not all taxa are everywhere) and fine-scale spatial clustering. Fungal richness and phylogenetic diversity were found to be higher at lower temperatures and under higher moisture conditions. Climatic and soil characteristics as well as plant community composition were related to OTU alpha, beta and phylogenetic diversity, with distinct fungal lineages suggesting distinct ecological tolerances. Soil fungi, thus, show lineage-specific biogeographic patterns, even at a regional scale, and follow environmental determinism, mediated by interactions with plants.
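The alpha and beta diversity measures underlying such analyses are standard and compact to compute from an OTU count table. The sketch below shows the Shannon index (one common alpha metric) and Bray-Curtis dissimilarity (one common beta metric); the study's exact metric choices may differ:

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over one site's OTU counts."""
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()          # relative abundances, zeros dropped
    return float(-(p * np.log(p)).sum())

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two sites' OTU count vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.abs(a - b).sum() / (a + b).sum())
```

A site dominated by a single OTU scores H' = 0, while two sites sharing no OTUs reach the maximum Bray-Curtis dissimilarity of 1: the pattern behind the "not all taxa are everywhere" observation.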