877 results for Multi-objective evolutionary algorithm


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with a star topology, using the Demand Assigned Multiple Access (DAMA) protocol at the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) at the physical layer.
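As a rough illustration of the kind of assignment problem described above, a DAMA-style scheduler can be sketched as a greedy packing of slot requests onto MF-TDMA carriers. The data structures, names, and largest-demand-first policy below are assumptions for illustration, not the paper's heuristic (which also accounts for ACM modes):

```python
from dataclasses import dataclass

# Hypothetical illustration of DAMA capacity assignment over MF-TDMA carriers.
# Not the paper's heuristic: a simple greedy packer, largest request first.

@dataclass
class Carrier:
    slots_free: int  # time slots still free on this frequency carrier in the frame

def greedy_assign(requests, carriers):
    """requests: terminal -> demanded slots; returns terminal -> granted slots."""
    grants = {}
    # Serve the largest requests first so large flows are not lost to fragmentation.
    for term, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        granted = 0
        for c in carriers:
            take = min(c.slots_free, demand - granted)
            c.slots_free -= take
            granted += take
            if granted == demand:
                break
        grants[term] = granted
    return grants

carriers = [Carrier(10), Carrier(10)]
print(greedy_assign({"A": 12, "B": 6, "C": 8}, carriers))
```

A real DAMA scheduler would additionally weigh request priorities and the ACM mode of each terminal when choosing carriers; the greedy skeleton only shows where those decisions plug in.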


Introduction: Mantle cell lymphoma (MCL) accounts for 6% of all B-cell lymphomas and remains incurable for most patients. Those who relapse after first-line therapy or hematopoietic stem cell transplantation have a dismal prognosis, with short response duration after salvage therapy. On a molecular level, MCL is characterised by the translocation t(11;14) leading to Cyclin D1 overexpression. Cyclin D1 is downstream of the mammalian target of rapamycin (mTOR) kinase and can be effectively blocked by mTOR inhibitors such as temsirolimus. We set out to define the single-agent activity of the orally available mTOR inhibitor everolimus (RAD001) in a prospective, multi-centre trial in patients with relapsed or refractory MCL (NCT00516412). The study was performed in collaboration with the EU-MCL network. Methods: Eligible patients with histologically/cytologically confirmed relapsed (not more than 3 prior lines of systemic treatment) or refractory MCL received everolimus 10 mg orally daily on days 1-28 of each 4-week cycle, for 6 cycles or until disease progression. The primary endpoint was best objective response, with adverse reactions, time to progression (TTP), time to treatment failure, response duration and molecular response as secondary endpoints. A response rate of ≤10% was considered uninteresting and, conversely, promising if ≥30%. The required sample size was 35 patients, using Simon's optimal two-stage design with 90% power and 5% significance. Results: A total of 36 patients (35 evaluable) from 19 centers were enrolled between August 2007 and January 2010. The median age was 69.4 years (range 40.1 to 84.9 years), with 22 males and 13 females. Thirty patients presented with relapsed and 5 with refractory MCL, with a median of two prior therapies. Treatment was generally well tolerated, with anemia (11%), thrombocytopenia (11%), neutropenia (8%), diarrhea (3%) and fatigue (3%) being the most frequent complications of CTC grade III or higher.
Eighteen patients received 6 or more cycles of everolimus treatment. The objective response rate was 20% (95% CI: 8-37%), with 2 CR, 5 PR, 17 SD, and 11 PD. At a median follow-up of 6 months, TTP was 5.45 months (95% CI: 2.8-8.2 months) for the entire population and 10.6 months for the 18 patients receiving 6 or more cycles of treatment. Conclusion: This study demonstrates that single-agent everolimus 10 mg once daily orally is well tolerated. The null hypothesis of inactivity could be rejected, indicating moderate anti-lymphoma activity in relapsed/refractory MCL. Further studies of everolimus, either in combination with chemotherapy or as a single agent for maintenance treatment, are warranted in MCL.
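The reported interval for the response rate can be reproduced with an exact (Clopper-Pearson) binomial confidence interval for 7 responders (2 CR + 5 PR) out of 35 evaluable patients. A stdlib-only sketch (the bisection solver is an implementation choice, not from the trial protocol):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion x/n."""
    def solve(cond):
        lo, hi = 0.0, 1.0
        for _ in range(60):                 # bisection, ample precision
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if cond(mid) else (lo, mid)
        return (lo + hi) / 2
    # Lower bound: p at which P(X >= x | p) = alpha/2
    lower = 0.0 if x == 0 else solve(lambda p: 1 - binom_cdf(x - 1, n, p) <= alpha / 2)
    # Upper bound: p at which P(X <= x | p) = alpha/2
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(7, 35)
print(f"{lo:.3f}-{hi:.3f}")   # close to the reported 8-37%
```

The same routine with x = 2 (Simon stage-one boundary) shows how the two-stage design's early-stopping rule relates to the 10%/30% hypotheses.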


Black-box optimization problems (BBOP) are defined as those optimization problems in which the objective function does not have an algebraic expression, but is the output of a system (usually a computer program). This paper is focused on BBOPs that arise in the field of insurance, and more specifically in reinsurance problems. In this area, the complexity of the models and assumptions used to define the reinsurance rules and conditions produces hard black-box optimization problems that must be solved in order to obtain the optimal output of the reinsurance. Traditional optimization approaches are not applicable to BBOPs, so new computational paradigms must be applied to solve these problems. In this paper we show the performance of two evolutionary-based techniques (Evolutionary Programming and Particle Swarm Optimization). We provide an analysis of three BBOPs in reinsurance, where the evolutionary-based approaches exhibit excellent behaviour, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.
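A minimal sketch of one of the two techniques mentioned, Particle Swarm Optimization, applied to a stand-in black box. The reinsurance simulator itself is not reproducible here, so a quadratic test function plays its role; the swarm parameters are common textbook defaults, not values from the paper:

```python
import random

def black_box(x):
    """Stand-in for the reinsurance simulator: only input/output is visible."""
    return sum((xi - 3.0) ** 2 for xi in x)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO over [-10, 10]^dim; returns (best position, value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull towards personal best + pull towards global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(g):
                    g = pos[i][:]
    return g, f(g)

best, val = pso(black_box)
print(best, val)
```

Note that PSO only ever calls `f` as an oracle, which is exactly what makes it applicable when the objective is a simulator rather than a formula.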


To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data are based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments that are characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation by their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, while the corresponding Q-estimates seem to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
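The modulus substitution described above can be sketched numerically. The constant-Q convention M → M(1 + i/Q) and the soft-sediment values below are common-practice assumptions for illustration, not taken from the paper:

```python
import cmath

# Sketch of replacing a real elastic modulus by a complex viscoelastic one.
# Assumed low-loss convention: M_visco = M * (1 + i/Q), so the wave speed
# acquires an imaginary part (~1/(2Q) of the real part) encoding attenuation.

def complex_velocity(M, rho, Q):
    """Complex wave speed sqrt(M_visco / rho) for quality factor Q."""
    M_visco = M * (1 + 1j / Q)
    return cmath.sqrt(M_visco / rho)

rho = 1800.0          # kg/m^3, illustrative soft-sediment density
mu = 1.8e8            # Pa, illustrative shear modulus (Vs ~ 316 m/s)
for Q in (100, 10):   # weak vs. strong attenuation (Q <= 10 is the hard case)
    v = complex_velocity(mu, rho, Q)
    print(f"Q={Q:3d}: Re(v)={v.real:7.1f} m/s, Im/Re={v.imag / v.real:.4f}")
```

The point of the sketch is the one the abstract makes: at Q = 10 the imaginary part is no longer negligible, so a purely elastic (real-valued) decomposition misattributes that energy loss and biases the estimated parameters.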


We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.
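For intuition about the capacity expression underlying this characterization, a Monte-Carlo sketch (an illustration, not the paper's iterative algorithm) estimates the ergodic capacity of a 2×2 i.i.d. Rayleigh channel under an isotropic input, C = E[log2 det(I + (SNR/nt) H H^H)], using the 2×2 identity det(I + cA) = 1 + c·tr(A) + c²·det(A):

```python
import random, math

def ergodic_capacity(snr_db=10.0, trials=20000, seed=1):
    """Monte-Carlo estimate of 2x2 i.i.d. Rayleigh ergodic capacity (bit/s/Hz)."""
    rng = random.Random(seed)
    c = 10 ** (snr_db / 10) / 2            # SNR / nt with nt = 2 (isotropic input)
    def g():                                # one CN(0, 1) channel entry
        return complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
    total = 0.0
    for _ in range(trials):
        h11, h12, h21, h22 = g(), g(), g(), g()
        tr = abs(h11)**2 + abs(h12)**2 + abs(h21)**2 + abs(h22)**2   # tr(H H^H)
        det = abs(h11 * h22 - h12 * h21) ** 2                        # det(H H^H)
        total += math.log2(1 + c * tr + c * c * det)
    return total / trials

print(f"{ergodic_capacity():.2f} bit/s/Hz")
```

For this i.i.d. zero-mean channel the isotropic input is indeed capacity-achieving, which is one of the channel structures the abstract identifies; for correlated or keyhole channels the optimal eigenvalues would instead come from an iterative optimization.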


Context: Unlike the small bowel, the colorectal mucosa is seldom the site of metastatic disease. Objective: To determine the incidence of true colorectal metastases, and the subsequent clinicopathologic findings, in a substantial colorectal cancer population collected from 7 European centers. Design: During the last decade, 10,365 patients were identified as having colorectal malignant tumors other than systemic diseases. Data collected included patient demographics, clinical symptoms, treatment, the presence of metastases in other sites, disease-free interval, follow-up, and overall survival. All secondary tumors resulting from direct invasion by malignant tumors of contiguous organs were excluded, as were those resulting from lymph node metastases or peritoneal seeding. Results: Only 35 patients were included (10 men), with a median age of 59 years. They presented with obstruction, bleeding, abdominal pain, or perforation. The leading source of metastases was the breast, followed by melanoma. Metastases were synchronous in 3 cases. The mean disease-free interval for the remaining cases was 6.61 years. Surgical resection was performed in 28 cases. Follow-up was available for 26 patients; all had died, with a mean survival time of 10.67 months (range, 1-41 months). Conclusions: Colorectal metastases are exceptional (0.338%), with the breast as the leading source; they represent a late stage of disease and reflect a poor prognosis. Therefore, the pathologist should be alert to the possibility of secondary tumors when studying large bowel biopsies. Any therapy is usually palliative, but our results suggest that prolonged survival after surgery and complementary therapy can be obtained in some patients.
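The quoted incidence follows directly from the cohort figures:

```python
# Incidence of true colorectal metastases in the cohort, as a percentage.
cases, cohort = 35, 10365
incidence = 100 * cases / cohort
print(f"{incidence:.3f}%")   # matches the 0.338% stated in the conclusions
```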


OBJECTIVE: To demonstrate the validity and reliability of volumetric quantitative computed tomography (vQCT) with multi-slice computed tomography (MSCT) and dual-energy X-ray absorptiometry (DXA) for hip bone mineral density (BMD) measurements, and to compare the differences between the two techniques in discriminating postmenopausal women with osteoporosis-related vertebral fractures from those without. METHODS: Ninety subjects were enrolled and divided into three groups based on the BMD values of the lumbar spine and/or the femoral neck by DXA. Groups 1 and 2 consisted of postmenopausal women with BMD changes <-2 SD, with and without radiographically confirmed vertebral fracture (n=11 and n=33, respectively). Group 3 comprised normal controls with BMD changes ≥-1 SD (n=46). Reconstructed images from MSCT (GE LightSpeed16) scans of the abdominal-pelvic region, 1.25 mm thick per slice, were processed by OsteoCAD software to calculate the following parameters: volumetric BMD values of trabecular bone (TRAB), cortical bone (CORT), and integral bone (INTGL) of the left femoral neck; femoral neck axis length (NAL); and minimum cross-sectional area (mCSA). DXA BMD measurements of the lumbar spine (AP-SPINE) and the left femoral neck (NECK) were also performed for each subject. RESULTS: The values of all seven parameters were significantly lower in subjects of Groups 1 and 2 than in normal postmenopausal women (P<0.05 for each). Comparing Groups 1 and 2, 3D-TRAB and 3D-INTGL were significantly lower in postmenopausal women with vertebral fracture(s) (109.8±9.61 and 243.3±33.0 mg/cm³, respectively) than in those without (148.9±7.47 and 285.4±17.8 mg/cm³, respectively) (P<0.05 for each), but no significant differences were evident in AP-SPINE or NECK BMD.
CONCLUSION: The femoral neck-derived volumetric BMD parameters using vQCT appeared better than the DXA-derived ones in discriminating osteoporotic postmenopausal women with vertebral fractures from those without. vQCT might be useful to evaluate the effect of osteoporotic vertebral fracture status on changes in bone mass in the femoral neck.


Objective: To assess the importance of spirituality and religious coping among outpatients with a DSM-IV diagnosis of schizophrenia or schizoaffective disorder living in three countries. Method: A total of 276 outpatients (92 from Geneva, Switzerland; 121 from Trois-Rivières, Canada; and 63 from Durham, North Carolina), aged 18-65, were administered a semi-structured interview on the role of spirituality and religiousness in their lives and in coping with their illness. Results: Religion is important for outpatients in each of the three country sites, and religious involvement is higher than in the general population. Religion was helpful (i.e., provided a positive sense of self and positive coping with the illness) for 87% of the participants and harmful (a source of despair and suffering) for 13%. Helpful religion was associated with better social, clinical and psychological status; the opposite was observed for the harmful aspects of religion. In addition, religion sometimes conflicted with psychiatric treatment. Conclusions: These results indicate that outpatients with schizophrenia or schizoaffective disorder often use spirituality and religion to cope with their illness, mostly positively, yet sometimes negatively. They underscore the importance of clinicians taking into account the spiritual and religious lives of patients with schizophrenia.


Abstract: The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study, and much of the work that followed, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
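The Nowak and May (1992) setting summarized above can be sketched in a few lines: cooperators and defectors sit on a 2D lattice, play a weak prisoner's dilemma with their neighbors, and synchronously imitate their most successful neighbor. The lattice size, the von Neumann neighborhood, the temptation value b = 1.3, and the initial cooperator fraction are illustrative choices, not parameters from the thesis:

```python
import random

def step(grid, b):
    """One synchronous update of the spatial weak prisoner's dilemma.
    grid[i][j] is 1 for a cooperator, 0 for a defector; torus boundary."""
    n = len(grid)
    nbrs = ((-1, 0), (1, 0), (0, -1), (0, 1))
    def payoff(i, j):
        s = 0.0
        for di, dj in nbrs:
            other = grid[(i + di) % n][(j + dj) % n]
            if grid[i][j] == 1 and other == 1:
                s += 1.0        # C meets C: reward 1
            elif grid[i][j] == 0 and other == 1:
                s += b          # D exploits C: temptation b; all else pays 0
        return s
    pay = [[payoff(i, j) for j in range(n)] for i in range(n)]
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best, strat = pay[i][j], grid[i][j]
            for di, dj in nbrs:   # imitate the most successful neighbor
                ii, jj = (i + di) % n, (j + dj) % n
                if pay[ii][jj] > best:
                    best, strat = pay[ii][jj], grid[ii][jj]
            new[i][j] = strat
    return new

rng = random.Random(0)
n = 20
grid = [[1 if rng.random() < 0.9 else 0 for _ in range(n)] for _ in range(n)]
for _ in range(30):
    grid = step(grid, b=1.3)
coop = sum(map(sum, grid)) / n ** 2
print(f"cooperator fraction after 30 steps: {coop:.2f}")
```

Replacing the lattice adjacency with a heterogeneous social graph is precisely the move the thesis studies: the update rule stays the same, only the neighbor sets change.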



The HACEK organisms (Haemophilus species, Aggregatibacter species, Cardiobacterium hominis, Eikenella corrodens, and Kingella species) are rare causes of infective endocarditis (IE). The objective of this study is to describe the clinical characteristics and outcomes of patients with HACEK endocarditis (HE) in a large multi-national cohort. Patients hospitalized with definite or possible infective endocarditis by the International Collaboration on Endocarditis Prospective Cohort Study in 64 hospitals from 28 countries were included, and the characteristics of HE patients were compared with those of IE due to other pathogens. Of 5591 patients enrolled, 77 (1.4%) had HE. HE was associated with a younger age (47 vs. 61 years; p<0.001) and a higher prevalence of immunologic/vascular manifestations (32% vs. 20%; p<0.008) and stroke (25% vs. 17%; p=0.05), but a lower prevalence of congestive heart failure (15% vs. 30%; p=0.004), in-hospital death (4% vs. 18%; p=0.001) and death after 1 year of follow-up (6% vs. 20%; p=0.01), than IE due to other pathogens (n=5514). On multivariable analysis, stroke was associated with mitral valve vegetations (OR 3.60; CI 1.34-9.65; p<0.01) and younger age (OR 0.62; CI 0.49-0.90; p<0.01). The overall outcome of HE was excellent, with in-hospital mortality (4%) significantly better than for non-HE (18%; p<0.001). Prosthetic valve endocarditis was more common in HE (35%) than in non-HE (24%). The outcome of both prosthetic valve and native valve HE was excellent, whether treated medically or with surgery. Current treatment is very successful for the management of both native valve and prosthetic valve HE, but further studies are needed to determine why HE has a predilection for younger people and for causing stroke. The small number of patients and the observational design limit inferences on treatment strategies; self-selection of study sites limits epidemiological inferences.


In this paper we develop two models for an inventory system in which the distributor manages the inventory at the retailers' locations. These types of systems correspond to the Vendor Managed Inventory (VMI) systems described in the literature. These systems are very common in many different types of industries, such as retailing and manufacturing, although they assume different characteristics. The objective of our model is to minimize the total inventory cost for the distributor in a multi-period, multi-retailer setting. The inventory system includes holding and stock-out costs, and we study the case where an additional fixed setup cost is charged per delivery. We construct a numerical experiment to analyze the model behavior and observe the impact of the characteristics of the model on the solutions.
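A hedged sketch of the cost structure described above (linear per-unit holding and stock-out costs plus a fixed setup cost per delivery; the function names and the backorder treatment are illustrative assumptions, not the paper's exact model):

```python
# Distributor's total cost for a VMI-style multi-period, multi-retailer plan.
# demand[r][t] and deliveries[r][t]: units for retailer r in period t.
# h: holding cost per unit-period, p: stock-out penalty per unit-period,
# K: fixed setup cost charged for every positive delivery.

def total_cost(demand, deliveries, h=1.0, p=4.0, K=10.0):
    cost = 0.0
    for dem_r, del_r in zip(demand, deliveries):
        inv = 0.0                          # inventory carried at this retailer
        for d_t, q_t in zip(dem_r, del_r):
            if q_t > 0:
                cost += K                  # fixed setup cost per delivery
            inv += q_t - d_t
            if inv >= 0:
                cost += h * inv            # holding cost on end-of-period stock
            else:
                cost += p * (-inv)         # stock-out (backorder) penalty
    return cost

demand     = [[5, 5, 5], [3, 6, 3]]        # 2 retailers, 3 periods
deliveries = [[10, 0, 5], [3, 6, 3]]       # candidate delivery plan
print(total_cost(demand, deliveries))
```

The example already shows the trade-off the models optimize: retailer 1 saves one setup cost by over-delivering in period 1, at the price of holding five units for a period.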


In this paper, an extension of the multi-scale finite-volume (MSFV) method is devised that allows simulation of flow and transport in reservoirs with complex well configurations. The new framework fits nicely into the data structure of the original MSFV method and has the important property that large patches covering the whole well are not required. For each well, an additional degree of freedom is introduced. While the treatment of pressure-constrained wells is trivial (the well-bore reference pressure is explicitly specified), additional equations have to be solved to obtain the unknown well-bore pressure of rate-constrained wells. Numerical simulations of test cases with multiple complex wells demonstrate the ability of the new algorithm to accurately capture the interference between the various wells and the reservoir. (c) 2008 Elsevier Inc. All rights reserved.


Background and objective: Optimal care of diabetic patients (DPs) decreases the risk of complications. Close blood glucose monitoring can improve patient outcomes and shorten hospital stay. The objective of this pilot study was to evaluate the treatment of hospitalized DPs according to current standards, including their diabetic treatment and drugs to prevent diabetes-related complications ("guardian drugs": angiotensin converting enzyme inhibitors (ACEI) or angiotensin II receptor blockers (ARB), antiplatelet drugs, statins). Guidelines of the American Diabetes Association (ADA) [1] were used as reference, as they were the most recent and exhaustive for hospital care. Design: Observational pilot study: analysis of the medical records of all DPs seen by the clinical pharmacists during medical rounds in different hospital units. An assessment was made by assigning points for fulfilling the different criteria according to the ADA and then dividing the total by the maximum achievable points (scale 0-1; 1 = all criteria fulfilled). Setting: Different Internal Medicine and Geriatric units of the (multi-site) Hôpital du Valais. Main outcome measures: - Completeness of diabetes-related information: type of diabetes, medical history, weight, albuminuria status, renal function, blood pressure, (recent) lipid profile. - Management of blood glucose: HbA1c, glycemic control, plan for treating hyper-/hypoglycaemia. - Presence of guardian drugs if indicated. Results: Medical records of 42 patients in 10 different units were analysed (18 women, 24 men, mean age 75.4 ± 11 years); 41 had type 2 diabetes. - Completeness of diabetes-related information: 0.8 ± 0.1. Information often missing: insulin-dependence (43%) and lipid profile (86%). - Management of blood glucose: 0.5 ± 0.2.
15 patients had a suboptimal glycemic balance (target glycaemia 7.2-11.2 mmol/l, with values >11.2 or <3.8 mmol/l, or HbA1c >7%), and 10 patients had a deregulated balance (more than 10 values >11.2 mmol/l or <3.8 mmol/l, and even values >15 mmol/l). - Presence of guardian drugs if indicated: ACEI/ARB: 19 of 23 patients (82.6%); statin: 16 of 40 patients (40%); antiplatelet drug: 16 of 39 patients (41%). Conclusions: Blood glucose control was insufficient in many DPs, and prescription of statins and antiplatelet drugs was often missing. If confirmed by a larger study, these two points need to be optimised. As it is not always possible and appropriate to make those changes during the hospital stay, a further project should assess and optimise diabetes care across both inpatient and outpatient settings.
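The 0-1 assessment scale described in the Design section (one point per fulfilled criterion, divided by the maximum achievable) can be sketched directly. The criterion names come from the abstract; the record layout is an assumption for illustration:

```python
# Completeness criteria listed in the abstract's "Main outcome measures".
COMPLETENESS_CRITERIA = [
    "type of diabetes", "medical history", "weight", "albuminuria status",
    "renal function", "blood pressure", "recent lipid profile",
]

def completeness_score(record):
    """record: criterion -> bool (documented or not); returns a 0-1 score."""
    if not record:
        return 0.0
    met = sum(1 for c in COMPLETENESS_CRITERIA if record.get(c, False))
    return met / len(COMPLETENESS_CRITERIA)

# Hypothetical record mirroring the two items most often missing in the study:
example = {c: True for c in COMPLETENESS_CRITERIA}
example["recent lipid profile"] = False    # missing in 86% of records
example["type of diabetes"] = False        # insulin-dependence missing in 43%
print(f"{completeness_score(example):.1f}")
```

Averaging such scores over the 42 records is what produces summary figures like the reported 0.8 ± 0.1 for completeness.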


The objective of this study was to investigate whether it is possible to pool together diffusion spectrum imaging (DSI) data from four different scanners located at three different sites. Two of the scanners had identical configurations, whereas the other two did not. To measure the variability, we extracted three scalar maps (ADC, FA and GFA) from the DSI data and used both a region-based and a tract-based analysis. Additionally, a phantom study was performed to rule out potential factors arising from scanner performance, in case some systematic bias occurred in the subject study. This work was split into three experiments: intra-scanner reproducibility, reproducibility with the twin-scanner setting, and reproducibility with other configurations. Overall, for the intra-scanner and twin-scanner experiments, the coefficient of variation (CV) of the region-based analysis was in the range of 1%-4.2%, and below 3% for almost every bundle in the tract-based analysis. The uncinate fasciculus showed the worst reproducibility, especially for FA and GFA values (CV 3.7-6%). For the GFA and FA maps, an ICC value of 0.7 or above was observed in almost all regions/tracts. In the last experiment, the outcomes from the two scanners with identical settings were highly similar; however, this was not the case for the other two imagers. Given that the overall variation in our study is low for the imagers with identical settings, our findings support the feasibility of cross-site pooling of DSI data from identical scanners.
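The variability metric used throughout, the coefficient of variation, is simply the ratio of standard deviation to mean across repeated measurements. A sketch on hypothetical FA values (the numbers below are invented, chosen only to land in the reported 1%-4.2% range):

```python
import math

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, in percent."""
    m = sum(values) / len(values)
    if len(values) < 2:
        return 0.0
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return 100 * sd / m

# Hypothetical FA of one tract measured in four scan sessions on one scanner:
fa_sessions = [0.50, 0.52, 0.51, 0.515]
print(f"CV = {coefficient_of_variation(fa_sessions):.2f}%")
```

Computing this per region or per tract, and per scanner pair, yields exactly the kind of table the study uses to compare intra-scanner, twin-scanner, and mixed-configuration reproducibility.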