939 results for non-parametric estimation
Abstract:
In the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), a limited area model (LAM) intercomparison is performed for intense events that caused extensive damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors that help produce a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event known as the "Montserrat-2000" event. The study uses forecast data from seven operational LAMs, made available to partners via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by a state-of-the-art diagnostic method, contiguous rain area (CRA) analysis. This method quantifies the spatial shift of the forecast error and identifies the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification against a wider observational data set, is needed to support this statement.
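The contingency-table skill scores the abstract mentions can be sketched as follows. This is a hedged, minimal example; the counts and the particular scores shown (probability of detection, false alarm ratio, equitable threat score) are common QPF verification measures, not necessarily the exact set used in the study.

```python
# Non-parametric categorical QPF verification scores from a 2x2
# contingency table (hits, false alarms, misses, correct negatives).

def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Return POD, FAR and the equitable threat score (ETS)."""
    n = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets

# Illustrative counts for one model run and one rainfall threshold:
pod, far, ets = skill_scores(hits=30, false_alarms=10, misses=20,
                             correct_negatives=140)
```

The same table elements, kept per rain gauge, are what allow mapping which regions were affected by misses or false alarms.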
Abstract:
The objective of this study is to analyse the technical or productive efficiency of the refuse collection services in 75 municipalities located in the Spanish region of Catalonia. The analysis has been carried out using various techniques. First we calculated a deterministic parametric frontier, then a stochastic parametric frontier, and finally various non-parametric approaches (DEA and FDH). The results naturally differ according to the technique used to approach the frontier. Nevertheless, they appear solid, at least with regard to the ordinal concordance among the efficiency indices obtained by the different approaches, as demonstrated by the statistical tests used. Finally, we attempted to find any relationship between efficiency and the method (public or private) of managing the services. No significant relationship was found between the type of management and the efficiency indices.
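As a hedged illustration of the FDH approach named above (with hypothetical inputs and outputs, not the study's municipal data), the input-oriented free disposal hull efficiency score can be computed by simple enumeration, without the linear programming that DEA requires:

```python
# Input-oriented FDH efficiency: a unit is compared only against observed
# units that produce at least as much output; its score is the smallest
# radial contraction of its inputs that a dominating unit demonstrates.

def fdh_efficiency(x, y, units):
    """x, y: inputs/outputs of the evaluated unit; units: list of (inputs, outputs)."""
    scores = []
    for xj, yj in units:
        if all(o >= t for o, t in zip(yj, y)):            # j produces at least y
            scores.append(max(a / b for a, b in zip(xj, x)))  # contraction to reach xj
    return min(scores)                                     # < 1 means dominated

# Hypothetical municipalities: (cost,) inputs and (tonnes collected,) outputs.
units = [((10.0,), (5.0,)), ((8.0,), (6.0,)), ((12.0,), (5.0,))]
theta = fdh_efficiency((12.0,), (5.0,), units)   # third unit, dominated by the second
```

Because the evaluated unit is itself in the observed set, the score is always at most 1; here the second unit collects more with less input, so theta = 8/12.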
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are non-linearly related to the response, their selection with the usual F test is not possible. In this work we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a non-parametric bootstrap method to estimate the p-values. We include a numerical example with automobile insurance data.
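The non-parametric bootstrap p-value idea can be sketched in miniature. This is a hedged example with a simple mean-difference statistic and made-up data; the paper's actual statistics are generalized F-type tests for predictor selection, which the same resampling logic would wrap around.

```python
import random

# Approximate the null distribution of a test statistic by resampling
# from the pooled data (resampling under H0: both samples share one
# distribution), then estimate the p-value as the exceedance fraction.

def bootstrap_pvalue(sample_a, sample_b, n_boot=10000, seed=42):
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = sample_a + sample_b
    count = 0
    for _ in range(n_boot):
        boot = [rng.choice(pooled) for _ in range(len(pooled))]
        a, b = boot[:len(sample_a)], boot[len(sample_a):]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)       # add-one correction avoids p = 0

p = bootstrap_pvalue([1.2, 0.8, 1.5, 1.1], [2.9, 3.4, 3.1, 2.7])
```

With clearly separated samples like these, the bootstrap p-value comes out well below conventional significance thresholds.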
Abstract:
Changes in bone mineral density (BMD) and bone strength following treatment with zoledronic acid (ZOL) were measured by quantitative computed tomography (QCT) or dual-energy X-ray absorptiometry (DXA). ZOL treatment increased spine and hip BMD vs placebo, as assessed by QCT and DXA. Changes in trabecular bone resulted in increased bone strength. INTRODUCTION: To investigate bone mineral density (BMD) changes in trabecular and cortical bone, estimated by quantitative computed tomography (QCT) or dual-energy X-ray absorptiometry (DXA), and whether zoledronic acid 5 mg (ZOL) affects bone strength. METHODS: In 233 women from a randomized, controlled trial of once-yearly ZOL, the lumbar spine, total hip, femoral neck, and trochanter were assessed by DXA and QCT (baseline, Month 36). Mean percentage changes from baseline and between-treatment differences (ZOL vs placebo, t-test) were evaluated. RESULTS: Mean between-treatment differences in lumbar spine BMD were significant by DXA (7.0%, p < 0.01) and QCT (5.7%, p < 0.0001). Between-treatment differences were significant for trabecular spine (p = 0.0017) [non-parametric test], trabecular trochanter (10.7%, p < 0.0001), total hip (10.8%, p < 0.0001), and compressive strength indices at the femoral neck (8.6%, p = 0.0001) and trochanter (14.1%, p < 0.0001). CONCLUSIONS: Once-yearly ZOL increased hip and spine BMD vs placebo, as assessed by QCT and DXA. Changes in trabecular bone resulted in increased indices of compressive strength.
Abstract:
This research project, conducted in the Psychology Department of the University of Lausanne (Switzerland), evaluated the therapeutic alliance with Hispanic American patients. From the patient's perspective, the therapeutic alliance was explored in two types of frameworks: the dyadic and the triadic setting. The dyadic setting is the encounter between a therapist (health professional) and a patient who ideally share the same language. The triadic setting is the encounter of a therapist and a patient who speak different languages but are able to interact with the help of an interpreter. My specific interest focuses on studying the therapeutic alliance in a cross-cultural setting through a mixed methodology. As part of the quantitative phase, non-parametric tests were used to analyze 55 questionnaires of the Therapeutic Alliance for Migrants - Health Professionals' version (QALM-PS). For the qualitative phase, a thematic analysis was used to analyze 20 transcribed interviews. While no differences were found concerning the strength of the therapeutic alliance between the triadic and dyadic settings, the results showed that the factors that enrich the therapeutic alliance with migrant patients depend more on an emotional alliance (bond) than on a rational alliance (agreements). Indeed, the positive relationship with the interpreter, and especially with the therapist, relies considerably on human qualities and moral values, establishing the conception of humanity as an important need when meeting foreign patients in health care settings. In addition, the quality of communication, which can be attributed to the type of interpreter in the triadic setting, plays an important role in the establishment of a positive therapeutic relationship.
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: Literature and Internet searches were performed to identify software. All programs were tested on a standard personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer users the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools might not fit all institutions, and each software tool must be evaluated with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity, and report generation.
Abstract:
Purpose: Recent studies showed that pericardial fat is independently correlated with the development of coronary artery disease (CAD). The mechanism remains unclear. We aimed to assess a possible relationship between pericardial fat volume and endothelium-dependent coronary vasomotion, a surrogate of future cardiovascular events. Methods: Fifty healthy volunteers without known CAD or cardiovascular risk factors (CRF) were enrolled. They all underwent dynamic Rb-82 cardiac PET/CT to quantify myocardial blood flow (MBF) at rest, during the MBF response to a cold pressor test (CPT-MBF), and during adenosine stress. Pericardial fat volume (PFV) was measured using a 3D volumetric CT method, along with common biological CRF (glucose and insulin levels, HOMA-IR, cholesterol, triglycerides, hs-CRP). Relationships between the MBF response to CPT, PFV, and other CRF were assessed using non-parametric Spearman correlation and multivariate regression analysis of variables with significant correlation on univariate analysis (Stata 11.0). Results: All 50 participants had a normal MBF response to adenosine (2.7±0.6 mL/min/g; 95%CI: 2.6−2.9) and a normal myocardial flow reserve (2.8±0.8; 95%CI: 2.6−3.0), excluding underlying CAD. Simple regression analysis revealed a significant correlation between absolute CPT-MBF and triglyceride level (rho = −0.32, p = 0.024), fasting blood insulin (rho = −0.43, p = 0.0024), HOMA-IR (rho = −0.39, p = 0.007), and PFV (rho = −0.52, p = 0.0001). The MBF response to adenosine was correlated only with PFV (rho = −0.32, p = 0.026). On multivariate regression analysis, PFV emerged as the only significant predictor of the MBF response to CPT (p = 0.002). Conclusion: PFV is significantly correlated with endothelium-dependent coronary vasomotion. A high PF burden might negatively influence the MBF response to CPT, as well as to adenosine stress, even in persons with normal hyperemic myocardial perfusion imaging, suggesting a link between PF and future cardiovascular events. While outside-to-inside adipokine secretion through the arterial wall has been described, our results might suggest an effect upon NO-dependent and -independent vasodilatation. Further studies are needed to elucidate this mechanism.
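The non-parametric Spearman correlation used for these associations can be computed directly from ranks. A hedged sketch with hypothetical PFV/flow numbers (assuming no ties; the study's data and software handle ties and significance testing):

```python
# Spearman's rho: Pearson correlation on ranks, which for untied data
# reduces to 1 - 6*sum(d^2) / (n*(n^2 - 1)), d being rank differences.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

def spearman_rho(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Larger pericardial fat volume paired with a lower flow response:
rho = spearman_rho([90, 120, 150, 200, 260], [2.4, 2.1, 2.2, 1.8, 1.5])
```

The monotone decreasing pattern gives a strongly negative rho, mirroring the sign of the PFV correlations reported above.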
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: Literature and Internet searches were performed to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools might not fit all institutions, and each program must be evaluated with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability, and automated report generation.
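The Bayesian a posteriori step these tools implement can be sketched in miniature. This is a hypothetical example, not any specific program's algorithm: a one-compartment IV model with assumed population values, where a single measured concentration pulls the population clearance toward an individual maximum a posteriori (MAP) estimate via a grid search over a log-normal prior and residual error model.

```python
import math

# MAP estimate of an individual's clearance from one concentration sample.
# All parameter values (v, cl_pop, omega, sigma) are illustrative assumptions.

def map_clearance(conc, dose, t, v=50.0, cl_pop=5.0, omega=0.3, sigma=0.2):
    """Grid-search MAP estimate of clearance (L/h) after an IV bolus."""
    best_cl, best_logpost = cl_pop, -math.inf
    for i in range(1, 400):
        cl = i * 0.05                                   # grid 0.05 .. 19.95 L/h
        pred = dose / v * math.exp(-cl / v * t)         # model-predicted conc.
        log_prior = -(math.log(cl / cl_pop)) ** 2 / (2 * omega ** 2)
        log_lik = -(math.log(conc / pred)) ** 2 / (2 * sigma ** 2)
        if log_prior + log_lik > best_logpost:
            best_logpost, best_cl = log_prior + log_lik, cl
    return best_cl

cl_hat = map_clearance(conc=1.0, dose=100.0, t=12.0)
```

The data alone would suggest a clearance near 2.9 L/h, but the population prior shrinks the estimate toward 5 L/h; production tools do this over full population PK models and multiple samples.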
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and treat them separately in an effective way. It provides better interpretability for non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrains. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training samples and the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
Abstract:
This study details a method to statistically determine, on a millisecond scale and for individual subjects, those brain areas whose activity differs between experimental conditions, using single-trial scalp-recorded EEG data. To do this, we non-invasively estimated local field potentials (LFPs) using the ELECTRA distributed inverse solution and applied non-parametric statistical tests at each brain voxel and for each time point. This yields a spatio-temporal activation pattern of differential brain responses. The method is illustrated here in the analysis of auditory-somatosensory (AS) multisensory interactions in four subjects. Differential multisensory responses were temporally and spatially consistent across individuals, with onset at approximately 50 ms and superposition within areas of the posterior superior temporal cortex that have traditionally been considered auditory in their function. The close agreement of these results with previous investigations of AS multisensory interactions suggests that the present approach constitutes a reliable method for studying multisensory processing with the temporal and spatial resolution required to elucidate several existing questions in this field. In particular, the present analyses permit a more direct comparison between human and animal studies of multisensory interactions and can be extended to examine correlation between electrophysiological phenomena and behavior.
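The per-voxel, per-time-point non-parametric test described above can be illustrated with a sign-flip permutation test on paired single-trial differences. This is a hedged sketch with synthetic numbers at one voxel/time point; the study applied such tests across all brain voxels and milliseconds on ELECTRA-estimated LFPs.

```python
import random

# Sign-flip permutation test: under H0 (no condition difference), each
# paired single-trial difference is equally likely to be positive or
# negative, so random sign flips generate the null distribution.

def permutation_pvalue(diffs, n_perm=5000, seed=1):
    rng = random.Random(seed)
    observed = abs(sum(diffs))
    count = 0
    for _ in range(n_perm):
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Consistently positive trial-wise differences at one voxel/time point:
p = permutation_pvalue([0.8, 1.1, 0.9, 1.3, 0.7, 1.2, 0.6, 1.0])
```

Repeating this at every voxel and time point yields the spatio-temporal activation map; in practice a multiple-comparison correction would be added on top.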
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulation through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
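The non-parametric kernel density link between the two conductivities can be sketched as follows. This is a hedged toy example: a bivariate Gaussian kernel density over co-located (electrical, hydraulic) log-conductivity pairs, from which a conditional expectation of hydraulic conductivity given an ERT-derived electrical value is read off. The sample values and bandwidths are illustrative only.

```python
import math

# Bivariate Gaussian KDE and the conditional mean E[y | x] it implies,
# evaluated on a grid of y values.

def kde2(x, y, pts, hx, hy):
    return sum(math.exp(-((x - a) / hx) ** 2 / 2 - ((y - b) / hy) ** 2 / 2)
               for a, b in pts) / (len(pts) * 2 * math.pi * hx * hy)

def conditional_mean(x, pts, hx, hy, ygrid):
    weights = [kde2(x, y, pts, hx, hy) for y in ygrid]
    return sum(w * y for w, y in zip(weights, ygrid)) / sum(weights)

# Hypothetical co-located log-conductivity pairs (electrical, hydraulic):
pairs = [(-2.0, -5.1), (-1.5, -4.6), (-1.0, -4.0), (-0.5, -3.6)]
ygrid = [-6.0 + 0.05 * i for i in range(61)]        # y in [-6.0, -3.0]
est = conditional_mean(-1.2, pairs, 0.4, 0.4, ygrid)
```

In the actual procedure, this conditional density (not just its mean) feeds the Bayesian sequential simulation, so the downscaled field honors the full scatter of the relationship rather than a single regression line.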
Abstract:
This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
Abstract:
INTRODUCTION: The trabecular bone score (TBS) is a new parameter determined from grey-level analysis of DXA images. It reflects the mean thickness and volume fraction of the trabecular bone microarchitecture. This was a preliminary case-control study to evaluate the potential diagnostic value of TBS, both alone and combined with bone mineral density (BMDa), in the assessment of vertebral fracture. METHODS: From a pool of 441 Caucasian postmenopausal women between the ages of 50 and 80 years, we identified 42 women with osteoporosis-related vertebral fractures and compared them with 126 age-matched women without any fractures (1 case : 3 controls). Primary outcomes were BMDa and TBS. Inter-group comparisons were undertaken using Student's t-tests and Wilcoxon signed-rank tests for parametric and non-parametric data, respectively. Odds ratios for vertebral fracture were calculated for each incremental one-standard-deviation decrease in BMDa and TBS; areas under the receiver operating characteristic curve (AUC) were calculated, and sensitivity analyses were conducted, to compare BMDa alone, TBS alone, and the combination of BMDa and TBS. Subgroup analyses were performed specifically for women with osteopenia and for women with T-score-defined osteoporosis. RESULTS: Across all subjects (n=42, 126), weight and body mass index were greater, and BMDa and TBS were both lower, in women with fractures. The odds of vertebral fracture were 3.20 (95% CI, 2.01-5.08) for each incremental decrease in TBS, 1.95 (1.34-2.84) for BMDa, and 3.62 (2.32-5.65) for BMDa + TBS combined. The AUC was greater for TBS than for BMDa (0.746 vs. 0.662, p=0.011). At iso-specificity (61.9%) or iso-sensitivity (61.9%) for both BMDa and TBS, the sensitivity or specificity of TBS + BMDa was 19.1% or 16.7% greater, respectively, than for either BMDa or TBS alone. Among subjects with osteoporosis (n=11, 40), both BMDa (p=0.0008) and TBS (p=0.0001) were lower in subjects with fractures, and both the OR and the AUC for BMDa + TBS were greater than for BMDa alone (OR=4.04 [2.35-6.92] vs. 2.43 [1.49-3.95]; AUC=0.835 [0.755-0.897] vs. 0.718 [0.627-0.797], p=0.013). Among subjects with osteopenia, TBS was lower in women with fractures (p=0.0296), but BMDa was not (p=0.75). Similarly, the OR for TBS was statistically greater than 1.00 (2.82, 1.27-6.26), but the OR for BMDa was not (1.12, 0.56-2.22); the AUC difference was significant (p=0.035), but there was no statistical difference in specificity (p=0.357) or sensitivity (p=0.678). CONCLUSIONS: The trabecular bone score warrants further study as to whether it has any clinical application in osteoporosis detection and the evaluation of fracture risk.
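The AUC statistic used to compare TBS and BMDa has a simple rank interpretation: it equals the Mann-Whitney probability that a randomly chosen fracture case scores worse than a randomly chosen control. A hedged sketch with hypothetical TBS values (lower TBS means higher risk, so "worse" is "lower"):

```python
# AUC as the Mann-Whitney probability P(case score < control score),
# counting ties as half. Equivalent to the area under the ROC curve.

def auc_lower_is_worse(cases, controls):
    wins = sum((c < k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

cases = [1.10, 1.15, 1.22, 1.30]            # hypothetical TBS, fractured
controls = [1.20, 1.28, 1.35, 1.40, 1.45]   # hypothetical TBS, non-fractured
auc = auc_lower_is_worse(cases, controls)
```

An AUC of 0.5 means no discrimination; values like the 0.746 reported for TBS mean a case ranks below a control about three times in four.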