812 results for systematic methods
Abstract:
In this paper, we review the advances in monocular model-based tracking over the ten-year period up to 2014. In 2005, Lepetit et al. [19] reviewed the status of monocular model-based rigid body tracking. Since then, direct 3D tracking has become quite a popular research area, but monocular model-based tracking should still not be forgotten. We focus mainly on tracking that can be applied to augmented reality, but some other applications are also covered. Given the wide subject area, this paper tries to give a broad view of the research that has been conducted and to introduce the reader to the different disciplines that are tightly related to model-based tracking. The work was conducted by searching well-known academic databases in a systematic manner and selecting certain publications for closer examination. We analyze the results by dividing the retrieved papers into categories according to their implementation approach, and we discuss the issues that have not yet been solved. We also discuss emerging model-based methods, such as fusing different types of features and region-based pose estimation, which could show the way for future research in this subject.
Abstract:
This paper proposes a systematic framework for analyzing the dynamic effects of permanent and transitory shocks on a system of n economic variables.
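The abstract above describes its framework only at a high level. As a minimal, assumption-laden sketch (not the paper's own decomposition), the code below contrasts the two shock types for a single series y_t = tau_t + c_t, where a permanent shock moves the random-walk component tau_t and a transitory shock moves a stationary AR(1) component c_t; the parameter values are illustrative.

```python
# Minimal illustration of permanent vs transitory shocks for
# y_t = tau_t + c_t (component names and parameter values assumed).

RHO = 0.7  # AR(1) coefficient of the transitory component (assumed)


def response_to_permanent_shock(horizon: int) -> float:
    # tau_t = tau_{t-1} + eta_t: a unit eta shock raises the level of y forever.
    return 1.0


def response_to_transitory_shock(horizon: int) -> float:
    # c_t = RHO * c_{t-1} + eps_t: a unit eps shock dies out geometrically.
    return RHO ** horizon


for h in (0, 1, 5, 20, 100):
    print(f"h={h:>3}  permanent={response_to_permanent_shock(h):.3f}  "
          f"transitory={response_to_transitory_shock(h):.3f}")
```

The printed impulse responses show the defining distinction the framework builds on: a permanent shock has a non-vanishing long-run effect, while a transitory shock's effect converges to zero.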
Abstract:
Main objective: It has not been demonstrated that interventions aimed at controlling or moderating the medication of patients with hypertension can improve their management of the disease. This systematic review evaluates programmes of controlled medication management for hypertension, based on the measurement of patients' treatment adherence (CMGM). Design: Systematic review. Data sources: MEDLINE, EMBASE, CENTRAL, abstracts from international conferences on hypertension, and the reference lists of relevant articles. Methods: Randomized controlled trials (RCTs) and observational studies (OSs) were assessed by two independent reviewers. Quality was assessed with the Cochrane risk-of-bias tool and rated on a four-level quality scale. A narrative synthesis of the data was performed because of the substantial heterogeneity of the studies. Results: 13 studies (8 RCTs, 5 OSs) covering 2150 hypertensive patients were included. Among them, 5 studies of CMGM using electronic devices as the sole intervention reported a reduction in blood pressure (BP), which could, however, be explained by measurement bias. Short-term improvement of BP under CMGM as part of complex interventions was reported in 4 studies of low or moderate quality. In 4 further, higher-quality studies of integrated care, it was not possible to isolate the impact of the CMGM component, which may have been confounded by changes in drug treatment. Taken together, the studies also suggest that regular feedback to the treating physician may be an essential element of effective CMGM and can easily be provided by a nurse or a pharmacist using appropriate communication tools. Conclusions: No convincing evidence of the effectiveness of CMGM as a health technology has been established, owing to the suboptimal designs and unsatisfactory methodological quality of the studies identified. Future research should follow approved quality standards and current clinical guidelines for the treatment of hypertension, include specific groups of patients with adherence problems, and consider clinical and economic outcomes of the organisation of care as well as patient-reported outcomes.
Abstract:
Objective. This study was performed to determine the prevalence of and associated risk factors for cardiovascular disease (CVD) in Latin American (LA) patients with systemic lupus erythematosus (SLE). Methods. First, a cross-sectional analytical study was conducted in 310 Colombian patients with SLE in whom CVD was assessed. Associated factors were examined by multivariate regression analyses. Second, a systematic review of the literature on CVD in SLE in LA was performed. Results. There were 133 (36.5%) Colombian SLE patients with CVD. Dyslipidemia, smoking, coffee consumption, and pleural effusion were positively associated with CVD. An independent effect of coffee consumption and cigarette smoking on CVD was found regardless of gender and duration of disease. In the systematic review, 60 articles fulfilling the eligibility criteria were included. A wide range of CVD prevalence was found (4%–79.5%). Several studies reported ancestry, genetic factors, and polyautoimmunity as novel risk factors for this condition. Conclusions. A high rate of CVD is observed in LA patients with SLE. Awareness of the observed risk factors should encourage preventive population strategies for CVD in patients with SLE, aimed at facilitating the cessation of cigarette smoking and coffee consumption as well as the tight control of dyslipidemia and other modifiable risk factors.
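To illustrate what the "independent effect ... regardless of gender and duration of disease" reported above means in practice, the sketch below fits a multivariable logistic regression on simulated data and reports adjusted odds ratios. All variable names, sample values and effect sizes are hypothetical; this is not the study's analysis or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 310  # same order of magnitude as the cohort described above

# Hypothetical exposures and covariates
df = pd.DataFrame({
    "smoking": rng.integers(0, 2, n),
    "coffee": rng.integers(0, 2, n),
    "dyslipidemia": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "disease_duration": rng.uniform(0, 20, n),
})

# Hypothetical data-generating process for the CVD outcome
linpred = -1.5 + 0.8 * df.smoking + 0.6 * df.coffee + 0.7 * df.dyslipidemia
df["cvd"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# Multivariable logistic regression: exposures adjusted for gender and duration
X = sm.add_constant(df[["smoking", "coffee", "dyslipidemia",
                        "female", "disease_duration"]])
fit = sm.Logit(df["cvd"], X).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
ci = fit.conf_int()
print(pd.DataFrame({
    "adjusted_OR": np.exp(fit.params),
    "CI_lower": np.exp(ci[0]),
    "CI_upper": np.exp(ci[1]),
}))
```

Because gender and disease duration enter the model alongside the exposures, the exposure odds ratios are "adjusted" for them, which is the sense in which an effect is independent of those covariates.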
Abstract:
Background: Genetic and epigenetic factors interacting with the environment over time are the main causes of complex diseases such as autoimmune diseases (ADs). Among the environmental factors are organic solvents (OSs), chemical compounds used routinely in commercial industries. Since controversy exists over whether ADs are caused by OSs, a systematic review and meta-analysis were performed to assess the association between OSs and ADs. Methods and Findings: The systematic search was done in the PubMed, SCOPUS, SciELO and LILACS databases up to February 2012. Any type of study that used accepted classification criteria for ADs and had information about exposure to OSs was selected. Out of a total of 103 articles retrieved, 33 were finally included in the meta-analysis. The final odds ratios (ORs) and 95% confidence intervals (CIs) were obtained with a random-effects model. A sensitivity analysis confirmed that the results were not sensitive to restrictions on the data included. Publication bias was trivial. Exposure to OSs was associated with systemic sclerosis, primary systemic vasculitis and multiple sclerosis individually, and also with all the ADs evaluated taken together as a single trait (OR: 1.54; 95% CI: 1.25-1.92; p-value: 0.001). Conclusion: Exposure to OSs is a risk factor for developing ADs. As a corollary, individuals with non-modifiable risk factors (i.e., familial autoimmunity or carrying genetic factors) should avoid any exposure to OSs in order not to increase their risk of developing ADs.
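The pooled estimate quoted above (OR 1.54, 95% CI 1.25-1.92) comes from a random-effects model. The sketch below shows the standard DerSimonian-Laird calculation on hypothetical per-study odds ratios, purely to illustrate the pooling step; it does not reproduce the paper's data.

```python
import math

# Hypothetical per-study odds ratios and 95% confidence intervals
studies = [
    # (OR, lower 95% CI, upper 95% CI)
    (1.8, 1.1, 2.9),
    (1.3, 0.9, 1.9),
    (2.2, 1.2, 4.0),
    (1.1, 0.7, 1.7),
]

# Work on the log-OR scale; recover each study's standard error from its CI
y = [math.log(or_) for or_, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for or_, lo, hi in studies]
w_fixed = [1 / s**2 for s in se]

# DerSimonian-Laird estimate of the between-study variance tau^2
y_fixed = sum(w * yi for w, yi in zip(w_fixed, y)) / sum(w_fixed)
Q = sum(w * (yi - y_fixed) ** 2 for w, yi in zip(w_fixed, y))
dof = len(studies) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - dof) / c)

# Random-effects weights and pooled log-OR
w_re = [1 / (s**2 + tau2) for s in se]
y_re = sum(w * yi for w, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled OR = {math.exp(y_re):.2f}  "
      f"95% CI ({math.exp(y_re - 1.96 * se_re):.2f}, "
      f"{math.exp(y_re + 1.96 * se_re):.2f})  tau^2 = {tau2:.3f}")
```

The between-study variance tau^2 widens the pooled confidence interval relative to a fixed-effect analysis, which is why random-effects models are preferred when the included studies are heterogeneous, as reported above.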
Abstract:
Background: A primary characteristic of complex genetic diseases is that affected individuals tend to cluster in families (that is, familial aggregation). Aggregation of the same autoimmune condition, also referred to as familial autoimmune disease, has been extensively evaluated. However, aggregation of diverse autoimmune diseases, also known as familial autoimmunity, has been overlooked. Therefore, a systematic review and meta-analysis were performed with the aim of gathering evidence on this topic. Methods: Familial autoimmunity was investigated in five major autoimmune diseases, namely rheumatoid arthritis, systemic lupus erythematosus, autoimmune thyroid disease, multiple sclerosis and type 1 diabetes mellitus. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Articles were searched in the PubMed and Embase databases. Results: Out of a total of 61 articles, 44 were selected for final analysis. Familial autoimmunity was found in all the autoimmune diseases investigated. Aggregation of autoimmune thyroid disease, followed by systemic lupus erythematosus and rheumatoid arthritis, was the most frequently encountered. Conclusions: Familial autoimmunity is a frequently seen condition. Further study of familial autoimmunity will help to decipher the common mechanisms of autoimmunity.
Abstract:
The influence of the basis set size and the correlation energy on the static electrical properties of the CO molecule is assessed. In particular, we have studied both the nuclear relaxation and the vibrational contributions to the static molecular electrical properties, the vibrational Stark effect (VSE) and the vibrational intensity effect (VIE). From a mathematical point of view, when a static and uniform electric field is applied to a molecule, the energy of this system can be expressed as a double power series with respect to the bond length and the field strength. From the power series expansion of the potential energy, field-dependent expressions for the equilibrium geometry, the potential energy and the force constant are obtained. The nuclear relaxation and vibrational contributions to the molecular electrical properties are analyzed in terms of the derivatives of the electronic molecular properties. In general, the results presented show that accurate inclusion of the correlation energy and large basis sets are needed to calculate the molecular electrical properties and their derivatives with respect to nuclear displacements and/or field strength. With respect to experimental data, the calculated power series coefficients are overestimated by the SCF, CISD, and QCISD methods. By contrast, perturbation methods (MP2 and MP4) tend to underestimate them. On average, using the 6-311+G(3df) basis set for the CO molecule, the nuclear relaxation and vibrational contributions to the molecular electrical properties amount to 11.7%, 3.3%, and 69.7% of the purely electronic μ, α, and β values, respectively.
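The double power series referred to above can be written out explicitly. The form below is a standard low-order version of that expansion for a diatomic; the coefficient symbols (k, g, mu_e, mu', mu'', alpha_e, alpha') are generic notation chosen here for illustration, not necessarily the authors' own.

```latex
% Energy of a diatomic in a static, uniform field F, expanded about the
% field-free equilibrium bond length R_e (with x = R - R_e):
%   E(x,F) = \sum_{n,m} a_{nm}\, x^{n} F^{m}
% Truncating at low order,
\begin{equation}
E(x,F) = E_0 + \tfrac{1}{2} k x^{2} + \tfrac{1}{6} g x^{3}
       - \left(\mu_e + \mu' x + \tfrac{1}{2}\mu'' x^{2}\right) F
       - \tfrac{1}{2}\left(\alpha_e + \alpha' x\right) F^{2} + \cdots
\end{equation}
% Setting \partial E/\partial x = 0 gives the field-dependent equilibrium
% geometry to lowest order,
\begin{equation}
x_{\mathrm{eq}}(F) \approx \frac{\mu' F + \tfrac{1}{2}\alpha' F^{2}}{k},
\end{equation}
% and substituting x_eq(F) back into E(x,F) and differentiating with respect
% to F yields the nuclear-relaxation contributions to the dipole moment and
% (hyper)polarizabilities in terms of the property derivatives.
```

This makes explicit why the results depend on accurate derivatives of the electronic properties with respect to both nuclear displacement and field strength: those derivatives are exactly the coefficients of the expansion.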
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999-2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four of the 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random-effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from less than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analyses.
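To make concrete the distinction drawn above between estimating treatment effects within each trial and then combining them versus analysing the data as if they came from a single mega-trial, the sketch below compares the two approaches on simulated individual patient data. Trial sizes, baseline levels and the treatment effect are hypothetical and chosen so that the unstratified analysis is visibly confounded by trial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated IPD from 3 trials with a common treatment effect but very
# different baseline levels and allocation ratios (all values hypothetical).
trials = [
    # (n_treated, n_control, baseline_mean)
    (400, 100, 10.0),
    (100, 400, 20.0),
    (250, 250, 15.0),
]
TRUE_EFFECT = -2.0  # treatment lowers the outcome by 2 units in every trial

records = []  # (trial_id, treated_indicator, outcome)
for t, (n_t, n_c, base) in enumerate(trials):
    for arm, n in ((1, n_t), (0, n_c)):
        y_arm = base + TRUE_EFFECT * arm + rng.normal(0, 4, n)
        records += [(t, arm, v) for v in y_arm]

trial_id = np.array([r[0] for r in records])
treated = np.array([r[1] for r in records])
y = np.array([r[2] for r in records])

# (a) Two-stage analysis: effect within each trial, then inverse-variance pooling
effects, variances = [], []
for t in range(len(trials)):
    in_t = trial_id == t
    y_t = y[in_t & (treated == 1)]
    y_c = y[in_t & (treated == 0)]
    effects.append(y_t.mean() - y_c.mean())
    variances.append(y_t.var(ddof=1) / len(y_t) + y_c.var(ddof=1) / len(y_c))
w = 1 / np.array(variances)
pooled = np.sum(w * np.array(effects)) / np.sum(w)

# (b) "Mega-trial" analysis: ignore trial, compare all treated vs all controls
mega = y[treated == 1].mean() - y[treated == 0].mean()

print(f"true effect            : {TRUE_EFFECT:.2f}")
print(f"stratified (two-stage) : {pooled:.2f}")
print(f"unstratified mega-trial: {mega:.2f}")  # biased: confounded with trial
```

Because trial membership is associated with both treatment allocation and baseline outcome, the unstratified estimate is biased, which is why failing to stratify by trial is flagged above as a methodological weakness.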
Abstract:
The systematic sampling (SYS) design (Madow and Madow, 1944) is widely used by statistical offices because of its simplicity and efficiency (e.g., Iachan, 1982). However, it suffers from a serious defect: it is impossible to estimate the sampling variance unbiasedly (Iachan, 1982), and the usual variance estimators (Yates and Grundy, 1953) are inadequate and can overestimate the variance significantly (Särndal et al., 1992). We propose a novel variance estimator that is less biased and can be implemented with any given population order. We justify this estimator theoretically and through a Monte Carlo simulation study.
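The defect described above is easy to demonstrate by simulation. The sketch below (an illustration of the problem, not of the authors' proposed estimator) draws 1-in-k systematic samples from a population with a strong linear trend and compares the true sampling variance of the mean with the average of the naive estimator that treats the sample as a simple random sample.

```python
import numpy as np

rng = np.random.default_rng(7)

N, n = 1000, 20          # population and sample size
k = N // n               # sampling interval
# Ordered population frame: strong linear trend plus noise
y = np.linspace(0, 100, N) + rng.normal(0, 2, N)

estimates, naive_vars = [], []
for _ in range(20000):
    start = rng.integers(0, k)          # random start in {0, ..., k-1}
    sample = y[start::k]                # 1-in-k systematic sample
    estimates.append(sample.mean())
    # Naive estimator: treat the systematic sample as a simple random sample
    naive_vars.append((1 - n / N) * sample.var(ddof=1) / n)

print(f"true variance of the systematic-sample mean: {np.var(estimates):.4f}")
print(f"mean of the naive SRS-formula estimator    : {np.mean(naive_vars):.4f}")
# With an ordered frame, the naive estimator grossly overestimates the true
# sampling variance, so confidence intervals become far too wide.
```

This overestimation under an ordered population is exactly the behaviour the abstract refers to, and it motivates designing a variance estimator that exploits the population order.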
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have applied a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. (C) 2003 Elsevier Science B.V. All rights reserved.
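As a concrete sketch of this kind of prior-informed design search (a simplified stand-in for the paper's Bayesian utility approach, with the prior, noise model and candidate grid all assumed for illustration), the code below scores candidate substrate-concentration designs for the Michaelis-Menten model by an expected D-optimality criterion, averaging the log-determinant of the Fisher information over prior draws of K_M.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

V_MAX = 1.0                      # fixed in this sketch; only K_M is uncertain
SIGMA = 0.02                     # assumed constant measurement noise
KM_PRIOR = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=200)  # prior draws


def log_det_information(design, Vmax, Km):
    """log det Fisher information for (Vmax, Km) under v = Vmax*S/(Km+S)."""
    S = np.asarray(design, dtype=float)
    # Sensitivities (partial derivatives of the rate w.r.t. the parameters)
    d_dVmax = S / (Km + S)
    d_dKm = -Vmax * S / (Km + S) ** 2
    J = np.column_stack([d_dVmax, d_dKm])
    info = J.T @ J / SIGMA**2
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf


def expected_utility(design):
    """Bayesian D-optimality: average log det information over the K_M prior."""
    return np.mean([log_det_information(design, V_MAX, km) for km in KM_PRIOR])


# Candidate designs: 6 substrate concentrations chosen from a coarse grid
grid = [0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0]
candidates = [list(c) for c in itertools.combinations_with_replacement(grid, 6)]

best = max(candidates, key=expected_utility)
print("best 6-point design (substrate concentrations):", best)
```

The selected design typically concentrates measurements near the prior range of K_M and at saturating substrate levels, which is the kind of design rule the abstract alludes to; in an iterative workflow the prior would be updated after each experiment and the search repeated.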
Abstract:
Purpose – This paper seeks to examine the nature of “service innovation” in the facilities management (FM) context. It reviews recent thinking on “service innovation” as distinct from “product innovation”. Applying these contemporary perspectives, it describes UK case studies of 11 innovations in different FM organisations, including both in-house client-based innovations and third-party innovations. Design/methodology/approach – The study described in the paper encompasses 11 different innovations that constitute a mix of process, product and practice innovations. All of the innovations stem from UK-based organisations that were subject to in-depth interviews regarding the identification, screening, commitment of resources and implementation of the selected innovations. Findings – The research suggested that service innovation is highly active in the UK FM sector. However, the process of innovation rarely followed a common formalised path. Generally, the innovations were one-shot commitments made at an early stage. None of the innovations studied failed to proceed to the full adoption stage. This was due either to the reluctance of participating organisations to volunteer “tested but unsuccessful” innovations or to the absence of any trial methods that might have exposed an innovation's shortcomings. Research limitations/implications – The selection of innovations was restricted to the UK context. Moreover, the choice of innovations was partly determined by the innovating organisation. This selection process appeared to emphasise “one-shot”, high-profile technological innovations, typically associated with software, which may have been at the expense of less resource-intensive, bottom-up innovations. Practical implications – This paper suggests that there is a role for “research and innovation” teams within larger FM organisations, whether client-based or third-party. Central to this philosophy is an approach that is open to the possibility of failure. The innovations studied were risk-averse, with a firm commitment to proceed made at an early stage. Originality/value – This paper introduces new thinking on the subject of “service innovation” to the context of FM. It presents research and development as a planned route to innovation, an approach that will enable service organisations to fully test and exploit service innovations.
Abstract:
Aim: Previous systematic reviews have found that drug-related morbidity accounts for 4.3% of preventable hospital admissions. None, however, has identified the drugs most commonly responsible for preventable hospital admissions. The aims of this study were to estimate the percentage of preventable drug-related hospital admissions, the most common drug causes of preventable hospital admissions and the most common underlying causes of preventable drug-related admissions. Methods: Bibliographic databases, reference lists from eligible articles and study authors were the sources of data. Seventeen prospective observational studies reporting the proportion of preventable drug-related hospital admissions, the causative drugs and/or the underlying causes of hospital admissions were selected. Included studies used multiple reviewers and/or explicit criteria to assess the causality and preventability of hospital admissions. Two investigators abstracted data from all included studies using a purpose-made data extraction form. Results: From 13 papers, the median percentage of preventable drug-related admissions to hospital was 3.7% (range 1.4-15.4). From nine papers, the majority (51%) of preventable drug-related admissions involved antiplatelets (16%), diuretics (16%), nonsteroidal anti-inflammatory drugs (11%) or anticoagulants (8%). From five studies, the median proportion of preventable drug-related admissions associated with prescribing problems was 30.6% (range 11.1-41.8), with adherence problems 33.3% (range 20.9-41.7) and with monitoring problems 22.2% (range 0-31.3). Conclusions: Four groups of drugs account for more than 50% of preventable drug-related hospital admissions. Concentrating interventions on these drug groups could appreciably reduce the number of preventable drug-related admissions to hospital from primary care.
Abstract:
Background: Population monitoring has been introduced in UK primary schools in an effort to track the growing obesity epidemic. It has been argued that parents should be informed of their child's results, but is there evidence that moving from monitoring to screening would be effective? We describe what is known about the effectiveness of monitoring and screening for overweight and obesity in primary school children, and highlight areas where evidence is lacking and research should be prioritised. Design: Systematic review with discussion of evidence gaps and future research. Data sources: Published and unpublished studies (any language) from electronic databases (inception to July 2005), clinical experts, Primary Care Trusts and Strategic Health Authorities, and reference lists of retrieved studies. Review methods: We included any study that evaluated measures of overweight and obesity as part of a population-level assessment and excluded studies whose primary outcome measure was prevalence. Results: There were no trials assessing the effectiveness of monitoring or screening for overweight and obesity. Studies focused on the diagnostic accuracy of measurements. Information on the attitudes of children, parents and health professionals to monitoring was extremely sparse. Conclusions: Our review found a lack of data on the potential impact of population monitoring or screening for obesity, and more research is indicated. Identification of effective weight-reduction strategies for children and clarification of the role of preventive measures are priorities. It is difficult to see how screening to identify individual children can be justified without effective interventions.
Abstract:
Objectives: To clarify the role of growth monitoring in primary school children, including monitoring for obesity, and to examine issues that might impact on the effectiveness and cost-effectiveness of such programmes. Data sources: Electronic databases were searched up to July 2005. Experts in the field were also consulted. Review methods: Data extraction and quality assessment were performed on studies meeting the review's inclusion criteria. The performance of growth monitoring in detecting disorders of stature and obesity was evaluated against National Screening Committee (NSC) criteria. Results: In the 31 studies included in the review, there were no controlled trials of the impact of growth monitoring and no studies of the diagnostic accuracy of different methods of growth monitoring. Analysis of the studies that presented a 'diagnostic yield' of growth monitoring suggested that one-off screening might identify between 1:545 and 1:1793 new cases of potentially treatable conditions. Economic modelling suggested that growth monitoring is associated with health improvements [incremental cost per quality-adjusted life-year (QALY) of £9500] and indicated that monitoring was cost-effective 100% of the time over the given probability distributions for a willingness-to-pay threshold of £30,000 per QALY. Studies of obesity focused on the performance of body mass index against measures of body fat. A number of issues relating to the human resources required for growth monitoring were identified, but data on attitudes to growth monitoring were extremely sparse. Preliminary findings from economic modelling suggested that primary prevention may be the most cost-effective approach to obesity management, but the model incorporated a great deal of uncertainty. Conclusions: This review has indicated the potential utility and cost-effectiveness of growth monitoring in terms of increased detection of stature-related disorders. It has also pointed strongly to the need for further research. Growth monitoring does not currently meet all NSC criteria. However, it is questionable whether some of these criteria can be meaningfully applied to growth monitoring, given that short stature is not a disease in itself but is used as a marker for a range of pathologies and as an indicator of general health status. Identification of effective interventions for the treatment of obesity is likely to be considered a prerequisite to any move from monitoring to a screening programme designed to identify individual overweight and obese children. Similarly, further long-term studies of the predictors of obesity-related co-morbidities in adulthood are warranted. A cluster randomised trial comparing growth monitoring strategies with no growth monitoring in the general population would most reliably determine the clinical effectiveness of growth monitoring. Studies of diagnostic accuracy, alongside evidence of effective treatment strategies, could provide an alternative approach. In this context, careful consideration would need to be given to target conditions and intervention thresholds. Diagnostic accuracy studies would require long-term follow-up of both short and normal-stature children to determine the sensitivity and specificity of growth monitoring.
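To clarify the statement above that monitoring was "cost-effective 100% of the time ... for a willingness-to-pay threshold of £30,000 per QALY", the sketch below shows how such a probability is typically computed in a probabilistic sensitivity analysis: draw incremental costs and QALYs from assumed distributions, form the net monetary benefit at the threshold, and report the proportion of draws in which it is positive. The distributions here are invented for illustration and are not the review's model inputs.

```python
import numpy as np

rng = np.random.default_rng(11)
WTP = 30_000          # willingness-to-pay threshold (GBP per QALY)
N_DRAWS = 10_000

# Hypothetical probabilistic sensitivity analysis inputs
inc_cost = rng.normal(loc=190, scale=60, size=N_DRAWS)      # incremental cost (GBP)
inc_qaly = rng.normal(loc=0.02, scale=0.005, size=N_DRAWS)  # incremental QALYs

icer = inc_cost.mean() / inc_qaly.mean()    # incremental cost per QALY gained
nmb = WTP * inc_qaly - inc_cost             # net monetary benefit per draw
prob_cost_effective = (nmb > 0).mean()

print(f"ICER (point estimate)     : £{icer:,.0f} per QALY")
print(f"P(cost-effective at £30k) : {prob_cost_effective:.1%}")
```

Repeating the last calculation across a range of thresholds gives a cost-effectiveness acceptability curve, the usual way such "probability cost-effective" results are presented.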