70 results for OpenFlow, SDN, Software-Defined Networking, Cloud
Abstract:
The results of numerous phase I and II clinical trials testing the safety and immunogenicity of various cancer vaccine formulations based on cytolytic T lymphocyte (CTL)-defined tumor antigens have been reported recently. Specific T cell responses can be detected in only a fraction of immunized patients, and a smaller but still significant fraction of these patients shows objective tumor responses. Efficient therapeutic vaccination should aim at boosting naturally occurring anti-tumor responses and at sustaining a large contingent of fully functional, tumor antigen-specific effector T cells at tumor sites.
Abstract:
The emergence of powerful new technologies, the availability of large quantities of data, and increasing demands to extract added value from both have created significant challenges for those charged with corporate and information technology management. The possibilities are great, the expectations high, and the risks significant. Organisations seeking to employ cloud technologies and exploit the value of the data to which they have access, whether "Big Data" available from different external sources or data held within the organisation in structured or unstructured formats, need to understand the risks involved in such activities. Data owners have responsibilities towards the subjects of the data and must also frequently demonstrate compliance with current standards, laws and regulations. This thesis explores the nature of the technologies that organisations might utilise, identifies the most pertinent constraints and risks, and proposes a framework for the management of data from discovery to external hosting, one that allows the most significant risks to be managed through the definition, implementation, and performance of appropriate internal control activities.
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy, routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®) capable of angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappa, for the total score (κ 0.778→0.809) and for all items requiring angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient way to improve the reliability of visual video assessments.
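As a minimal illustration of the agreement statistic reported above (not the study's own analysis), a weighted Cohen's kappa between two raters can be computed with scikit-learn; the score arrays below are hypothetical, not study data:

```python
# Illustrative sketch only: weighted Cohen's kappa between two raters,
# as used for the Observational Gait Scale scores described above.
# The score arrays are hypothetical examples, not study data.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 3, 1, 0, 2, 3, 1, 2]  # observer 1, one item, ordinal scores 0-3
rater_b = [2, 3, 2, 0, 2, 3, 1, 1]  # observer 2, same videos

# 'linear' weights penalize disagreements by their ordinal distance,
# a common choice for graded clinical scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kappa:.3f}")
```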
Abstract:
BACKGROUND: Today, recognition and classification of sequence motifs and protein folds is a mature field, thanks to the availability of numerous comprehensive and easy-to-use software packages and web-based services. Recognition of structural motifs, by comparison, is less well developed and much less frequently used, possibly due to a lack of easily accessible and easy-to-use software. RESULTS: In this paper, we describe an extension of DeepView/Swiss-PdbViewer through which structural motifs may be defined and searched for in large protein structure databases. We show that common structural motifs involved in stabilizing protein folds are present in evolutionarily and structurally unrelated proteins, including at deeply buried locations that are not obviously related to protein function. CONCLUSIONS: The ability to define custom motifs and search for their occurrence in other proteins permits the identification of recurrent arrangements of residues that could have structural implications. The ability to do so without maintaining a complex software/hardware installation on site brings this technology to experts and non-experts alike.
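As a very rough sketch of what a structural-motif search involves (this is not DeepView/Swiss-PdbViewer's actual algorithm), one can brute-force test whether a protein contains three residues whose pairwise Cα distances match a query pattern; the PDB file name and motif distances below are hypothetical, and Biopython is used for parsing:

```python
# Illustrative sketch only: a crude brute-force structural-motif test.
# Does the structure contain three residues whose pairwise C-alpha
# distances match a query motif within a tolerance?
# The file name and the motif distances are hypothetical.
from itertools import combinations
from Bio.PDB import PDBParser

MOTIF = (5.4, 7.1, 9.8)  # hypothetical query: sorted pairwise CA distances (angstroms)
TOL = 0.5                # allowed deviation per distance

parser = PDBParser(QUIET=True)
structure = parser.get_structure("query", "example.pdb")
cas = [res["CA"] for res in structure.get_residues() if "CA" in res]

for trio in combinations(cas, 3):
    # Biopython's Atom subtraction returns the inter-atomic distance.
    dists = sorted(a - b for a, b in combinations(trio, 2))
    if all(abs(d - m) <= TOL for d, m in zip(dists, MOTIF)):
        print("motif hit at residues:", [a.get_parent().get_id() for a in trio])
        break
```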
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard of the TDM approach but requires computing assistance, and over the last decades computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.
Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through each of them.
Results: 12 software tools were identified, tested and ranked, yielding a comprehensive review of the characteristics of the available software. The number of drugs handled varies widely, and 8 programs allow users to add their own drug models. 10 programs can compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 can also suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly.
Conclusion: Although two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computer tools to support therapeutic monitoring is still growing. Although developers have put effort into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity and report generation.
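A minimal sketch of the weighted scoring grid described above; the criteria names come from the abstract, but the weights and raw scores are hypothetical, not the study's actual grid:

```python
# Illustrative sketch only: weighted-criterion scoring of a software tool.
# Weights and raw scores are hypothetical, not the study's grid.
weights = {
    "pharmacokinetic relevance": 0.35,
    "user-friendliness": 0.25,
    "computing aspects": 0.15,
    "interfacing": 0.15,
    "storage": 0.10,
}

def weighted_score(raw_scores: dict) -> float:
    """Weighted sum of per-criterion scores (each rated 0-10 here)."""
    return sum(weights[c] * raw_scores[c] for c in weights)

tool_a = {"pharmacokinetic relevance": 8, "user-friendliness": 6,
          "computing aspects": 7, "interfacing": 4, "storage": 5}
print(f"overall score = {weighted_score(tool_a):.2f}")
```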
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard of the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.
Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through each of them.
Results: 12 software tools were identified, tested and ranked, yielding a comprehensive review of the characteristics of the available software. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. 8 programs offer the ability to add new drug models based on population PK data. 10 tools incorporate Bayesian computation to predict the dosage regimen (individual parameters being calculated from population PK models). All of them can compute an a posteriori Bayesian dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.
Conclusions: Although two software packages rank at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use in routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
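To illustrate the a posteriori Bayesian computation these tools perform, here is a minimal MAP-estimation sketch for a one-compartment steady-state infusion model; all population parameters, the observed concentration and the target are hypothetical, and real TDM software uses far richer models:

```python
# Illustrative sketch only: Bayesian (MAP) individualization of clearance
# for a one-compartment IV-infusion model at steady state, the kind of
# "a posteriori" adjustment described above. All values are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

POP_CL, OMEGA = 5.0, 0.3   # population clearance (L/h), between-subject sd (log scale)
SIGMA = 0.15               # residual error sd (log scale)
DOSE_RATE = 50.0           # current infusion rate (mg/h)
C_OBS = 12.0               # measured steady-state concentration (mg/L)

def neg_log_posterior(eta: float) -> float:
    cl = POP_CL * np.exp(eta)              # individual clearance
    c_pred = DOSE_RATE / cl                # steady-state prediction: rate / CL
    residual = (np.log(C_OBS) - np.log(c_pred)) ** 2 / (2 * SIGMA**2)
    prior = eta**2 / (2 * OMEGA**2)        # shrinks estimate toward the population
    return residual + prior

eta_map = minimize_scalar(neg_log_posterior, bounds=(-3, 3), method="bounded").x
cl_ind = POP_CL * np.exp(eta_map)
# Infusion rate needed to hit a hypothetical 10 mg/L target:
print(f"CL = {cl_ind:.2f} L/h, suggested rate = {10.0 * cl_ind:.1f} mg/h")
```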
Abstract:
In recent years, protein-ligand docking has become a powerful tool for drug development. Although several approaches suitable for high-throughput screening are available, there is a need for methods able to identify binding modes with high accuracy. This accuracy is essential to reliably compute the binding free energy of the ligand, and such methods are needed when the binding mode of lead compounds is not determined experimentally yet is required for structure-based lead optimization. We present here a new docking software, called EADock, that aims at this goal. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with sophisticated diversity management, and is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and, in contrast to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure, excluding the latter. This validation illustrates the efficiency of our sampling strategy: correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when the five best-ranked clusters are considered, and to 92% when all clusters present in the last generation are taken into account. Most failures could be explained by the presence of crystal contacts in the experimental structure. Finally, the ability of EADock to accurately predict binding modes in a real application was illustrated by the successful docking of the RGD cyclic pentapeptide to the alphaVbeta3 integrin, starting far away from the binding pocket.
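A minimal sketch of the success criterion used in this validation, i.e. a pose counts as correct when its RMSD to the crystal pose is below 2 Å; the coordinates below are synthetic, not EADock output:

```python
# Illustrative sketch only: the RMSD success criterion described above.
# Coordinates are synthetic; poses are assumed already in the same frame,
# as is usual when comparing docked and crystal ligand positions.
import numpy as np

def rmsd(pose: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square deviation between matched atom coordinates (N x 3)."""
    return float(np.sqrt(np.mean(np.sum((pose - reference) ** 2, axis=1))))

rng = np.random.default_rng(0)
crystal = rng.uniform(0.0, 10.0, size=(20, 3))           # fake 20-atom ligand
predicted = crystal + rng.normal(0.0, 0.5, size=crystal.shape)

value = rmsd(predicted, crystal)
print(f"RMSD = {value:.2f} A -> {'success' if value < 2.0 else 'failure'}")
```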
Abstract:
Over the past decade, a series of trials by the EORTC Brain Tumor Group (BTG) has substantially influenced and shaped the standard of care for primary brain tumors. All these trials were coupled with biological research that has allowed a better understanding of the biology of these tumors. In glioblastoma, EORTC trial 26981/22981, conducted jointly with the National Cancer Institute of Canada Clinical Trials Group, showed the superiority of concomitant radiochemotherapy with temozolomide over radiotherapy alone. It also identified the first predictive marker of benefit from alkylating-agent chemotherapy in glioblastoma: methylation of the O6-methylguanine-DNA methyltransferase (MGMT) gene promoter. In another large randomized trial, EORTC 26951, adjuvant chemotherapy in anaplastic oligodendroglial tumors was investigated; despite an improvement in progression-free survival, this did not translate into a survival benefit. The third example of a landmark trial is EORTC 22845. This trial, led by the EORTC Radiation Oncology Group, forms the basis for an expectant approach to patients with low-grade glioma, as early radiotherapy indeed prolongs time to tumor progression but confers no benefit in overall survival. It is the key reference in deciding at what point in their disease adult patients with low-grade glioma should be irradiated. Future initiatives will continue to focus on the conduct of controlled trials, rational academic drug development, and systematic evaluation of tumor tissue, including biomarker development for personalized therapy. Important lessons learned in neurooncology are to dare to ask real questions rather than merely testing new compounds rapidly, and the value of well-designed trials, including the presence of controls, central pathology review, strict radiology protocols and biobanking. Structurally, the EORTC BTG has evolved into a multidisciplinary group with strong transatlantic alliances and has contributed to the maturation of neurooncology within the oncological sciences.
Abstract:
The growth rate of acoustic tumors, although slow, varies widely. There may be a continuous spectrum or distinct groups of tumor growth rates. Clinical, audiologic, and conventional histologic tests have failed to shed any light on this problem. Modern immunohistochemical methods may stand a better chance. The Ki-67 monoclonal antibody stains proliferating cells and is used in this study to investigate the growth fraction of 13 skull base schwannomas. The acoustic tumors can be divided into two different growth groups, one with a rate five times the other. The literature is reviewed to see if this differentiation is borne out by the radiologic studies. Distinct growth rates have been reported: one very slow, taking 50 years to reach 1 cm in diameter, a second rate with a diameter increase of 0.2 cm/year, and a third rate five times the second, with a 1.0 cm increase in diameter per year. A fourth group growing at 2.5 cm/year is postulated, but these tumors cannot be followed for long radiologically, since symptoms demand surgical intervention. The clinical implications of these separate growth rates are discussed.
Abstract:
A novel melanoma-associated differentiation Ag whose surface expression can be enhanced or induced by IFN-gamma was identified by mAb Me14/D12. Testing of numerous tumor cell lines and tumor tissue sections showed that the Me14/D12-defined Ag was present not only on melanoma but also on other tumor lines of neuroectodermal origin such as gliomas and neuroblastomas, on some lymphoblastic B cell lines, and on monocytes and macrophages. Immunoprecipitation by mAb Me14/D12 of lysates from [35S]methionine-labeled melanoma cells analyzed by SDS-PAGE revealed two polypeptide chains of 33 and 38 kDa, under both reducing and nonreducing conditions. Cross-linking experiments indicated that the two chains were present at the cell surface as a dimeric structure. Two-dimensional gel electrophoresis showed that the chains of 33 and 38 kDa had isoelectric points of 6.2 and 5.7, respectively. Treatment of the melanoma cells with tunicamycin, an inhibitor of N-linked glycosylation, reduced the Mr from 33 to 24 kDa and from 38 to 26 kDa. Peptide maps obtained after Staphylococcus aureus V8 protease digestion showed no shared peptides between the two chains. Although the biochemical data indicate that Me14/D12 molecules do not correspond to any known MHC class II Ag, their dimeric structure, tissue distribution, and regulation by IFN-gamma suggest that they could represent a new member of the MHC class II family.
Abstract:
Introduction: The field of connectomic research is growing rapidly, driven by methodological advances in structural neuroimaging at many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005; Sporns et al, 2005). These exhibit both spatial and topological information that constrains functional imaging studies and is relevant to their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data.
Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, which specifies networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. The use of Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries.
Results: Thanks to a flexible plugin architecture, functionality can easily be extended for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib); more brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve further node information (ConnectomeWiki) and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface with this dataset, displaying connections that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org).
Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant data types and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
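As a small illustration of the kind of scripted network analysis the Python shell enables (this uses plain NetworkX rather than the ConnectomeViewer API, and the GraphML file name and edge attribute are hypothetical):

```python
# Illustrative sketch only: load a connectome network from GraphML and
# mimic the interactive edge-property thresholding described above.
# The file name and the "weight" attribute are hypothetical.
import networkx as nx

g = nx.read_graphml("average_connectome.graphml")

# Keep only edges whose weight is above a threshold.
strong = [(u, v) for u, v, d in g.edges(data=True)
          if float(d.get("weight", 0.0)) >= 0.5]
g_strong = g.edge_subgraph(strong)

print(f"{g_strong.number_of_nodes()} nodes, {g_strong.number_of_edges()} edges")
mean_degree = sum(dict(g_strong.degree()).values()) / g_strong.number_of_nodes()
print(f"mean degree: {mean_degree:.2f}")
```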
Abstract:
Early reperfusion with prompt re-establishment of coronary blood flow improves survival in patients suffering from acute ST-elevation myocardial infarction (STEMI). The shift away from systemic thrombolysis toward primary percutaneous coronary intervention (PCI) is justified by clinical results in favor of PCI. Nevertheless, primary PCI necessitates additional transfer time and requires an efficient territorial network. The present article summarizes the up-to-date management of patients with acute STEMI and/or overt cardiogenic shock.
Abstract:
INTRODUCTION: Video recordings are widely used to analyze performance in alpine skiing at professional and amateur levels. Parts of these analyses require labeling movements, i.e. determining when specific events occur. Although differences among coaches, and for the same coach between different dates, are to be expected, they have never been quantified. Knowing these differences is essential to determine which parameters can be used reliably. This study aimed to quantify the precision and repeatability of alpine skiing coaches of various levels, as has been done in other fields (Koo et al, 2005).
METHODS: Software similar to commercial products was designed for video analysis. 15 coaches divided into 3 groups (5 amateur coaches (G1), 5 professional instructors (G2) and 5 semi-professional coaches (G3)) were enrolled. They were asked to label 15 timing parameters (TP) per curve according to the Swiss ski manual (Terribilini et al, 2001). TP included phases (initiation, steering I-II) and body and ski movements (e.g. rotation, weighting, extension, balance). Three video sequences sampled at 25 Hz were used, and one curve per video was labeled. The first video served to familiarize the analyzer with the software; the two other videos, corresponding to slalom and giant slalom, were considered for the analysis. G1 performed the analysis twice (A1 and A2) on different dates, with TP randomized between the two analyses. Reference TP were defined as the median of G2 and G3 at A1. Precision was defined as the RMS difference between individual TP and the reference TP, whereas repeatability was calculated as the RMS difference between individual TP at A1 and at A2.
RESULTS AND DISCUSSION: Precisions of +/-5.6, +/-3.0 and +/-2.0 frames were obtained for G1, G2 and G3, respectively. These results, showing that G2 was more precise than G1 and G3 more precise than G2, were in accordance with group levels. The repeatability for G1 was +/-3.1 frames. Furthermore, differences in precision among TP were observed for G2 and G3, with the largest differences being +/-5.9 frames for "body counter-rotation movement in steering phase II" and 0.8 frames for "ski unweighting in initiation phase".
CONCLUSION: This study quantified coaches' ability to label videos in terms of precision and repeatability. The best precision, obtained for G3, was +/-0.08 s, corresponding to +/-6.5% of the curve cycle. For repeatability, we obtained +/-0.12 s for G1, corresponding to +/-12% of the curve cycle. The repeatability of G2 and G3 is expected to be lower than the precision of G1 and will be assessed soon. In conclusion, our results indicate that the labeling of video recordings is reliable for some TP, whereas caution is required for others.
REFERENCES: Koo S, Gold MD, Andriacchi TP. (2005). Osteoarthritis, 13, 782-789. Terribilini M, et al. (2001). Swiss Ski Manual, 29-46. IASS, Lucerne.
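A minimal sketch of the precision and repeatability measures defined above, i.e. RMS frame differences converted to seconds at 25 Hz; the label values below are hypothetical, not the study's measurements:

```python
# Illustrative sketch only: RMS-difference precision and repeatability
# for video event labels sampled at 25 Hz. Label values are hypothetical.
import numpy as np

FPS = 25.0

def rms_difference(a, b) -> float:
    """RMS of element-wise differences between two label sets (in frames)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

reference = [12, 40, 77, 103, 150]  # median labels of G2 and G3 (frames)
coach_a1 = [14, 38, 80, 101, 155]   # one coach, first analysis
coach_a2 = [13, 41, 75, 104, 152]   # same coach, second analysis

precision = rms_difference(coach_a1, reference)
repeatability = rms_difference(coach_a1, coach_a2)
print(f"precision = +/-{precision:.1f} frames (+/-{precision / FPS:.2f} s)")
print(f"repeatability = +/-{repeatability:.1f} frames (+/-{repeatability / FPS:.2f} s)")
```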
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their applications to spatial data are described, and real case studies based on environmental and pollution data are presented. The book includes a CD-ROM with the Machine Learning Office software and sample data sets, allowing both students and researchers to put the concepts into practice quickly.