35 results for: Compositional data analysis - roots in geosciences
Abstract:
BACKGROUND: Data for trends in glycaemia and diabetes prevalence are needed to understand the effects of diet and lifestyle within populations, assess the performance of interventions, and plan health services. No consistent and comparable global analysis of trends has been done. We estimated trends and their uncertainties in mean fasting plasma glucose (FPG) and diabetes prevalence for adults aged 25 years and older in 199 countries and territories. METHODS: We obtained data from health examination surveys and epidemiological studies (370 country-years and 2·7 million participants). We converted systematically between different glycaemic metrics. For each sex, we used a Bayesian hierarchical model to estimate mean FPG and its uncertainty by age, country, and year, accounting for whether a study was nationally, subnationally, or community representative. FINDINGS: In 2008, global age-standardised mean FPG was 5·50 mmol/L (95% uncertainty interval 5·37-5·63) for men and 5·42 mmol/L (5·29-5·54) for women, having risen by 0·07 mmol/L and 0·09 mmol/L per decade, respectively. Age-standardised adult diabetes prevalence was 9·8% (8·6-11·2) in men and 9·2% (8·0-10·5) in women in 2008, up from 8·3% (6·5-10·4) and 7·5% (5·8-9·6) in 1980. The number of people with diabetes increased from 153 (127-182) million in 1980, to 347 (314-382) million in 2008. We recorded almost no change in mean FPG in east and southeast Asia and central and eastern Europe. Oceania had the largest rise, and the highest mean FPG (6·09 mmol/L, 5·73-6·49 for men; 6·08 mmol/L, 5·72-6·46 for women) and diabetes prevalence (15·5%, 11·6-20·1 for men; and 15·9%, 12·1-20·5 for women) in 2008. Mean FPG and diabetes prevalence in 2008 were also high in south Asia, Latin America and the Caribbean, and central Asia, north Africa, and the Middle East. Mean FPG in 2008 was lowest in sub-Saharan Africa, east and southeast Asia, and high-income Asia-Pacific. 
In high-income subregions, western Europe had the smallest rise, 0·07 mmol/L per decade for men and 0·03 mmol/L per decade for women; North America had the largest rise, 0·18 mmol/L per decade for men and 0·14 mmol/L per decade for women. INTERPRETATION: Glycaemia and diabetes are rising globally, driven both by population growth and ageing and by increasing age-specific prevalences. Effective preventive interventions are needed, and health systems should prepare to detect and manage diabetes and its sequelae. FUNDING: Bill & Melinda Gates Foundation and WHO.
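Age-standardised prevalences of the kind reported above are obtained by weighting age-specific prevalences by a standard population's age structure. A minimal sketch of direct age standardisation; the age bands, rates, and weights below are invented for illustration, not figures from the study:

```python
# Direct age standardisation: weight age-specific prevalences by a
# standard population's age distribution. All numbers are illustrative.
def age_standardise(prevalences, standard_weights):
    """Weighted average of age-specific prevalences.

    prevalences      -- prevalence per age band (fractions)
    standard_weights -- standard population share per age band (sums to 1)
    """
    if abs(sum(standard_weights) - 1.0) > 1e-9:
        raise ValueError("standard weights must sum to 1")
    return sum(p * w for p, w in zip(prevalences, standard_weights))

# Hypothetical age bands 25-44, 45-64, 65+ with hypothetical weights:
prev = [0.04, 0.12, 0.22]
weights = [0.5, 0.35, 0.15]
standardised = age_standardise(prev, weights)  # 0.04*0.5 + 0.12*0.35 + 0.22*0.15
```

The same weighting applied across countries is what makes prevalences comparable despite different age structures.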
Abstract:
Analyzing functional data often leads to finding common factors, for which functional principal component analysis proves to be a useful tool to summarize and characterize the random variation in a function space. The representation in terms of eigenfunctions is optimal in the sense of L-2 approximation. However, the eigenfunctions are not always directed towards an interesting and interpretable direction in the context of functional data and thus could obscure the underlying structure. To overcome this difficulty, an alternative to functional principal component analysis is proposed that produces directed components which may be more informative and easier to interpret. These structural components are similar to principal components, but are adapted to situations in which the domain of the function may be decomposed into disjoint intervals such that there is effectively independence between intervals and positive correlation within intervals. The approach is demonstrated with synthetic examples as well as real data. Properties for special cases are also studied.
Abstract:
OBJECTIVE: An inverse relationship between blood pressure (BP) and cognitive function has been found in adults, but limited data are available in adolescents and young adults. We examined the prospective relation between BP and cognitive function in adolescence. METHODS: We examined the association between BP measured at the ages of 12-15 years in school surveys and cognitive endpoints measured in the Seychelles Child Development Study at ages 17 (n = 407) and 19 (n = 429) years, respectively. We evaluated multiple domains of cognition based on subtests of the Cambridge Neurological Test Automated Battery (CANTAB), the Woodcock Johnson Test of Scholastic Achievement (WJTA), the Finger Tapping test (FT) and the Kaufman Brief Intelligence Test (K-BIT). We used age-, sex- and height-specific z-scores of SBP, DBP and mean arterial pressure (MAP). RESULTS: Six out of the 21 cognitive endpoints tested were associated with BP. However, none of these associations held for both males and females, for different subtests within the same neurodevelopmental domain, or for both SBP and DBP. Most of these associations disappeared when analyses were adjusted for selected potential confounding factors such as socio-economic status, birth weight, gestational age, BMI, alcohol consumption, blood glucose, and total n-3 and n-6 polyunsaturated fats. CONCLUSIONS: Our findings do not support a consistent association between BP and subsequent performance on tests assessing various cognitive domains in adolescents.
Abstract:
Through significant developments and progress in the last two decades, in vivo localized nuclear magnetic resonance spectroscopy (MRS) became a method of choice to probe brain metabolic pathways in a non-invasive way. Besides the measurement of the total concentration of more than 20 metabolites, (1)H MRS can be used to quantify the dynamics of substrate transport across the blood-brain barrier by varying the plasma substrate level. On the other hand, (13)C MRS with the infusion of (13)C-enriched substrates enables the characterization of brain oxidative metabolism and neurotransmission by incorporation of (13)C in the different carbon positions of amino acid neurotransmitters. The quantitative determination of the biochemical reactions involved in these processes requires the use of appropriate metabolic models, whose level of detail is strongly related to the amount of data accessible with in vivo MRS. In the present work, we present the different steps involved in the elaboration of a mathematical model of a given brain metabolic process and its application to the experimental data in order to extract quantitative brain metabolic rates. We review the recent advances in the localized measurement of brain glucose transport and compartmentalized brain energy metabolism, and how these reveal mechanistic details on glial support to glutamatergic and GABAergic neurons.
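Substrate transport across the blood-brain barrier is commonly described with Michaelis-Menten kinetics in this literature. As a hedged illustration (the models discussed in the abstract are more detailed, e.g. reversible variants), a sketch of the basic irreversible form with hypothetical parameter values:

```python
def mm_transport_rate(s_plasma, t_max, k_t):
    """Michaelis-Menten transport rate: v = Tmax * S / (Kt + S).

    s_plasma -- plasma substrate concentration (e.g. mmol/L)
    t_max    -- maximal transport rate (hypothetical units)
    k_t      -- apparent Michaelis constant of the transporter
    """
    return t_max * s_plasma / (k_t + s_plasma)

# At S = Kt the rate is half-maximal -- a standard sanity check.
half_max = mm_transport_rate(5.0, 1.0, 5.0)  # 0.5
```

Varying `s_plasma` while measuring brain substrate levels is what allows `t_max` and `k_t` to be fitted from MRS data.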
Abstract:
PHO1 has been recently identified as a protein involved in the loading of inorganic phosphate into the xylem of roots in Arabidopsis. The genome of Arabidopsis contains 11 members of the PHO1 gene family. The cDNAs of all PHO1 homologs have been cloned and sequenced. All proteins have the same topology and harbor a SPX tripartite domain in the N-terminal hydrophilic portion and an EXS domain in the C-terminal hydrophobic portion. The SPX and EXS domains have been identified in yeast (Saccharomyces cerevisiae) proteins involved in either phosphate transport or sensing or in sorting proteins to endomembranes. The Arabidopsis genome contains additional proteins of unknown function containing either a SPX or an EXS domain. Phylogenetic analysis indicated that the PHO1 family is subdivided into at least three clusters. Reverse transcription-PCR revealed a broad pattern of expression in leaves, roots, stems, and flowers for most genes, although two genes are expressed exclusively in flowers. Analysis of the activity of the promoter of all PHO1 homologs using promoter-beta-glucuronidase fusions revealed a predominant expression in the vascular tissues of roots, leaves, stems, or flowers. beta-Glucuronidase expression is also detected for several promoters in nonvascular tissue, including hydathodes, trichomes, root tip, root cortical/epidermal cells, and pollen grains. The expression pattern of PHO1 homologs indicates a likely role of the PHO1 proteins not only in the transfer of phosphate to the vascular cylinder of various tissues but also in the acquisition of phosphate into cells, such as pollen or root epidermal/cortical cells.
Abstract:
Background and aim of the study: Genomic gains and losses play a crucial role in the development and progression of DLBCL and are closely related to gene expression profiles (GEP), including the germinal center B-cell like (GCB) and activated B-cell like (ABC) cell of origin (COO) molecular signatures. To identify new oncogenes or tumor suppressor genes (TSG) involved in DLBCL pathogenesis and to determine their prognostic values, an integrated analysis of high-resolution gene expression and copy number profiling was performed. Patients and methods: Two hundred and eight adult patients with de novo CD20+ DLBCL enrolled in the prospective multicentric randomized LNH-03 GELA trials (LNH03-1B, -2B, -3B, 39B, -5B, -6B, -7B) with available frozen tumour samples, centralized review and adequate DNA/RNA quality were selected. 116 patients were treated with Rituximab(R)-CHOP/R-miniCHOP and 92 patients were treated with the high-dose (R)-ACVBP regimen dedicated to frontline treatment of patients younger than 60 years (y). Tumour samples were simultaneously analysed by high-resolution comparative genomic hybridization (CGH, Agilent, 144K) and gene expression arrays (Affymetrix, U133+2). Minimal common regions (MCR), defined as segments that affect the same chromosomal region in different cases, were delineated. Gene expression and MCR data sets were merged using the Gene Expression and Dosage Integrator algorithm (GEDI, Lenz et al. PNAS 2008) to identify new potential driver genes. Results: A total of 1363 recurrent (defined by a penetrance > 5%) MCRs within the DLBCL data set, ranging in size from 386 bp, affecting a single gene, to more than 24 Mb, were identified by CGH. Of these MCRs, 756 (55%) showed a significant association with gene expression: 396 (59%) gains, 354 (52%) single-copy deletions, and 6 (67%) homozygous deletions.
By this integrated approach, in addition to previously reported genes (CDKN2A/2B, PTEN, DLEU2, TNFAIP3, B2M, CD58, TNFRSF14, FOXP1, REL...), several genes targeted by gene copy abnormalities with a dosage effect and potential physiopathological impact were identified, including genes with TSG activity involved in the cell cycle (HACE1, CDKN2C), immune response (CD68, CD177, CD70, TNFSF9, IRAK2), DNA integrity (XRCC2, BRCA1, NCOR1, NF1, FHIT) or oncogenic functions (CD79b, PTPRT, MALT1, AUTS2, MCL1, PTTG1...), with distinct distributions according to COO signature. The CDKN2A/2B tumor suppressor locus (9p21) was deleted homozygously in 27% of cases and hemizygously in 9% of cases. Biallelic loss was observed in 49% of ABC DLBCL and in 10% of GCB DLBCL. This deletion was strongly correlated with age and associated with a limited number of additional genetic abnormalities, including trisomy 3, trisomy 18 and short gains/losses of Chr. 1, 2, 19 regions (FDR < 0.01), allowing identification of genes that may have synergistic effects with CDKN2A/2B inactivation. With a median follow-up of 42.9 months, only CDKN2A/2B biallelic deletion strongly correlated (FDR p-value < 0.01) with a poor outcome in the entire cohort (4y PFS = 44% [32-61] vs. 74% [66-82] for patients in germline configuration; 4y OS = 53% [39-72] vs. 83% [76-90]). In a Cox proportional hazards model of PFS, CDKN2A/2B deletion remained predictive (HR = 1.9 [1.1-3.2], p = 0.02) when combined with IPI (HR = 2.4 [1.4-4.1], p = 0.001) and GCB status (HR = 1.3 [0.8-2.3], p = 0.31). This difference remained predictive in the subgroup of patients treated with R-CHOP (4y PFS = 43% [29-63] vs. 66% [55-78], p=0.02), in patients treated with R-ACVBP (4y PFS = 49% [28-84] vs. 83% [74-92], p=0.003), and in the GCB (4y PFS = 50% [27-93] vs. 81% [73-90], p=0.02) and ABC/unclassified (5y PFS = 42% [28-61] vs. 67% [55-82], p = 0.009) molecular subtypes (Figure 1).
Conclusion: We report for the first time an integrated genetic analysis of a large cohort of DLBCL patients included in a prospective multicentric clinical trial program, enabling the identification of new potential driver genes with pathogenic impact. However, CDKN2A/2B deletion constitutes the strongest, and the only, prognostic factor for chemoresistance to R-CHOP, regardless of the COO signature, and it is not overcome by more intensified immunochemotherapy. Patients displaying this frequent genomic abnormality warrant new and dedicated therapeutic approaches.
Abstract:
OBJECTIVE: The measurement of cardiac output is a key element in the assessment of cardiac function. Recently, a pulse contour analysis-based device without need for calibration became available (FloTrac/Vigileo, Edwards Lifesciences, Irvine, CA). This study was conducted to determine whether the arterial catheter site has an impact and to investigate the accuracy of this system when compared with the pulmonary artery catheter using the bolus thermodilution technique (PAC). DESIGN: Prospective study. SETTING: The operating room of 1 university hospital. PARTICIPANTS: Twenty patients undergoing cardiac surgery. INTERVENTIONS: CO was determined in parallel with FloTrac/Vigileo systems at the radial and femoral sites (CO_rad and CO_fem) and by PAC as the reference method. Data triplets were recorded at defined time points. The primary endpoint was the comparison of CO_rad and CO_fem, and the secondary endpoint was the comparison with the PAC. MEASUREMENTS AND MAIN RESULTS: Seventy-eight simultaneous data recordings were obtained. The Bland-Altman analysis for CO_fem and CO_rad showed a bias of 0.46 L/min, the precision was 0.85 L/min, and the percentage error was 34%. The Bland-Altman analysis for CO_rad and PAC showed a bias of -0.35 L/min, the precision was 1.88 L/min, and the percentage error was 76%. The Bland-Altman analysis for CO_fem and PAC showed a bias of 0.11 L/min, the precision was 1.8 L/min, and the percentage error was 69%. CONCLUSION: The FloTrac/Vigileo system did not produce identical CO data when used in the radial and femoral arteries, even though the percentage error was close to the clinically acceptable range. Thus, the impact of the introduction site of the arterial catheter is not negligible. The agreement with thermodilution was low.
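The Bland-Altman statistics reported above (bias, precision, percentage error) can be reproduced from paired CO readings. A minimal sketch, assuming precision is the standard deviation of the differences and the percentage error follows the common 1.96 · SD / mean-CO convention (definitions vary slightly across papers, so this may not match the study's exact formula):

```python
def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired CO readings.

    Returns (bias, precision, percentage_error): bias is the mean of the
    differences, precision their sample standard deviation, and the
    percentage error is 1.96 * SD / overall mean CO * 100 (one common
    convention).
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    overall_mean = (sum(method_a) + sum(method_b)) / (2 * n)
    pct_error = 1.96 * sd / overall_mean * 100
    return bias, sd, pct_error

# Invented example readings (L/min), not data from the study:
bias, precision, pct = bland_altman([5.0, 6.0], [4.0, 7.0])
```

A percentage error below roughly 30% is the usual threshold for clinical interchangeability, which is why the 34% radial/femoral figure is described as close to acceptable.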
Abstract:
Whether for investigative or intelligence purposes, crime analysts often face the need to analyse the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks such as the detection of crime series or the analysis of traces left by digital devices such as mobile phones or GPS devices. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practice. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste or noise, together with health problems, are the result of this continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is demonstrated using two examples at different scales. The problem of spatiotemporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods still need further development and must, at the same time, find their way into the daily work and decision processes of urban planners.
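The percolation-style definition of urban areas described above can be illustrated by thresholding a population-density grid and labelling connected cells as clusters. A hypothetical minimal sketch (grid values and threshold are invented; the thesis's actual procedure is multi-scale and more elaborate):

```python
from collections import deque

def urban_clusters(grid, threshold):
    """Label 4-connected components of grid cells whose density exceeds
    `threshold` -- a percolation-style definition of urban clusters.
    Returns a list of clusters, each a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] > threshold
                                and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                clusters.append(comp)
    return clusters

# Invented density grid: two separate high-density patches -> 2 clusters.
demo = urban_clusters([[5, 5, 0], [0, 0, 0], [0, 7, 7]], threshold=1)
```

Sweeping the threshold and watching cluster sizes merge is the percolation analogy: near a critical density, a single spanning cluster emerges.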
Abstract:
The use of synthetic combinatorial peptide libraries in positional scanning format (PS-SCL) has emerged recently as an alternative approach for the identification of peptides recognized by T lymphocytes. The choice of both the PS-SCL used for screening experiments and the method used for data analysis is crucial for implementing this approach. With this aim, we tested the recognition of different PS-SCL by a tyrosinase 368-376-specific CTL clone and analyzed the data obtained with a recently developed biometric data analysis based on a model of independent and additive contributions of individual amino acids to peptide antigen recognition. Mixtures defined with amino acids present at the corresponding positions in the native sequence were among the most active for all of the libraries. Somewhat surprisingly, a higher number of native amino acids were identifiable by using amidated COOH-terminal rather than free COOH-terminal PS-SCL. Also, our data clearly indicate that when using PS-SCL longer than optimal, frame shifts occur frequently and should be taken into account. Biometric analysis of the data obtained with the amidated COOH-terminal nonapeptide library allowed the identification of the native ligand as the sequence with the highest score in a public human protein database. However, the adequacy of the PS-SCL data for the identification of the peptide ligand varied depending on the PS-SCL used. Altogether these results provide insight into the potential of PS-SCL for the identification of CTL-defined tumor-derived antigenic sequences and may significantly improve our ability to interpret the results of these analyses.
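The biometric analysis described above rests on a model of independent, additive per-position amino-acid contributions. Under that assumption, database sequences can be ranked by summing a position-specific score for each residue. A minimal sketch; the score matrix below is invented for illustration, not data from the screening:

```python
def peptide_score(peptide, pssm):
    """Additive model: total score is the sum of per-position residue
    contributions (independence assumption). `pssm` is a list of dicts,
    one per peptide position, mapping residue -> contribution; residues
    absent from a position's dict contribute 0.
    """
    if len(peptide) != len(pssm):
        raise ValueError("peptide length must match matrix length")
    return sum(pos.get(aa, 0.0) for aa, pos in zip(peptide, pssm))

# Hypothetical 3-position score matrix and candidate peptides:
pssm = [{'Y': 2.0, 'A': 0.1}, {'M': 1.5}, {'D': 1.0}]
best = peptide_score("YMD", pssm)  # 2.0 + 1.5 + 1.0
```

Ranking every same-length window of a protein database by this score is, in essence, how the native ligand was recovered as the top hit.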
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for the hardening of data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for the modeling of high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to achieve efficient indoor radon decision making.
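The Morisita-type index used above to evaluate data clustering builds on the classical Morisita index of dispersion over quadrat counts. A sketch of the classical version (the thesis's radon-level-dependent variant is more involved):

```python
def morisita_index(counts):
    """Classical Morisita index of dispersion over quadrat counts.

    counts -- number of points observed in each of Q quadrats.
    I is approximately 1 for a random pattern, > 1 for a clustered
    pattern, and < 1 for a uniform (over-dispersed) pattern.
    """
    q = len(counts)
    n = sum(counts)
    if n < 2:
        raise ValueError("need at least two points in total")
    return q * sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Invented examples: all points in one quadrat (clustered) vs. spread out.
clustered = morisita_index([8, 0, 0, 0])  # > 1
uniform = morisita_index([2, 2, 2, 2])    # < 1
```

Computing the index over nested quadrat sizes is what reveals the scale at which measurement locations cluster.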
Abstract:
The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. At the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified "natural" modules of gene expression in mouse and human, and I showed that the expression of organ-specific and system-specific genes tends to be conserved between such distant vertebrates as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development ("evo-devo") is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during the mid-development of zebrafish, which supports the "hourglass model". In contrast, events of gene duplication and new gene introduction are rarest in early development, which supports the "early conservation model". In addition to the biological insights into transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under detailed analysis. This demonstrates that the complex analysis of high-throughput data requires cooperation between biologists, bioinformaticians and statisticians.
Abstract:
INTRODUCTION: infants hospitalised in neonatology are inevitably exposed to pain repeatedly. Premature infants are particularly vulnerable because they are hypersensitive to pain and demonstrate diminished behavioural responses to it. They are therefore at risk of developing short- and long-term complications if pain remains untreated. CONTEXT: compared with acute pain, there is limited evidence in the literature on prolonged pain in infants. However, its prevalence is reported to be between 20 and 40 %. OBJECTIVE: this single case study aimed to identify the bio-contextual characteristics of neonates who experienced prolonged pain. METHODS: the study was carried out in the neonatal unit of a tertiary referral centre in Western Switzerland. A retrospective analysis of the profiles of seven infants who experienced prolonged pain was performed using five different data sources. RESULTS: the mean gestational age of the seven infants was 32 weeks. The main diagnoses included prematurity and respiratory distress syndrome. The total observations (N=55) showed that the participants underwent on average 21.8 (SD 6.9) painful procedures per day that were estimated to be of moderate to severe intensity. Of the 164 recorded pain scores (2.9 pain assessments/day/infant), 14.6 % confirmed acute pain. Among the infants experiencing acute pain, analgesia was given in 16.6 % of cases and 79.1 % received no analgesia. CONCLUSION: this study highlighted the difficulty of managing pain in neonates who are exposed to numerous painful procedures. Pain in this population remains under-evaluated and, as a result, undertreated. The results of this study showed that nursing documentation related to pain assessment is not systematic. Regular assessment and documentation of acute and prolonged pain are recommended. This could be achieved with clear guidelines on the Assessment Intervention Reassessment (AIR) cycle with validated measures adapted to neonates.
The adequacy of pain assessment is a pre-requisite for appropriate pain relief in neonates.
Abstract:
We propose a multivariate approach to the study of geographic species distribution which does not require absence data. Building on Hutchinson's concept of the ecological niche, this factor analysis compares, in the multidimensional space of ecological variables, the distribution of the localities where the focal species was observed to a reference set describing the whole study area. The first factor extracted maximizes the marginality of the focal species, defined as the ecological distance between the species optimum and the mean habitat within the reference area. The other factors maximize the specialization of this focal species, defined as the ratio of the ecological variance in mean habitat to that observed for the focal species. Eigenvectors and eigenvalues are readily interpreted and can be used to build habitat-suitability maps. This approach is recommended in situations where absence data are not available (many data banks), unreliable (most cryptic or rare species), or meaningless (invaders). We provide an illustration and validation of the method for the alpine ibex, a species reintroduced in Switzerland that presumably has not yet recolonized its entire range.
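The marginality described above can be illustrated for a single ecological variable: the distance between the species' mean and the global (reference-area) mean, often scaled by 1.96 global standard deviations so that a value near 1 flags a strongly marginal species. A minimal univariate sketch with invented example values (the actual method is multivariate and extracts factors by eigendecomposition):

```python
def marginality(species_values, global_values):
    """Univariate marginality: |species mean - global mean| scaled by
    1.96 global standard deviations (population SD, a common convention
    in ecological-niche factor analysis)."""
    n_g = len(global_values)
    mean_g = sum(global_values) / n_g
    sd_g = (sum((x - mean_g) ** 2 for x in global_values) / n_g) ** 0.5
    mean_s = sum(species_values) / len(species_values)
    return abs(mean_s - mean_g) / (1.96 * sd_g)

# Invented values: species occupies habitat ~1.96 SD above the area mean.
m = marginality([2.96], [0.0, 2.0])
```

Specialization is the complementary ratio: global variance over species variance along each factor, large when the species tolerates only a narrow slice of the available habitat.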