40 results for mesh: Software
Abstract:
BACKGROUND: Incarcerated hernias represent about 5-15 % of all operated hernias. Tension-free mesh is the preferred technique for elective surgery due to low recurrence rates. There is, however, currently no consensus on the use of mesh for the treatment of incarcerated hernias, especially in case of bowel resection. AIM: The aims of this study were (i) to report our current practice for the treatment of incarcerated hernias, (ii) to identify risk factors for postoperative complications, and (iii) to assess the safety of mesh placement in potentially infected surgical fields. METHODS: This retrospective study included 166 consecutive patients who underwent emergency surgery for incarcerated hernia between January 2007 and January 2012 in two university hospitals. Demographics, surgical details, and short-term outcomes were collected. Univariate analysis was employed to identify risk factors for overall, infectious, and major complications. RESULTS: Eighty-four patients (50.6 %) presented with inguinal hernias, 43 with femoral hernias (25.9 %), 37 with umbilical hernias (22.3 %), and 2 with mixed hernias (1.2 %). Mesh was placed in 64 patients (38.5 %), including 5 patients with concomitant bowel resection. Overall morbidity occurred in 56 patients (32.7 %), and 8 patients (4.8 %) developed surgical site infections (SSI). Univariate risk factors for overall complications were ASA grade 3/4 (P = 0.03), diabetes (P = 0.05), cardiopathy (P = 0.001), aspirin use (P = 0.023), and bowel resection (P = 0.001), the latter being the only identified risk factor for SSI (P = 0.03). In multivariate analysis, only bowel incarceration was associated with a higher rate of major morbidity (OR = 14.04; P = 0.01). CONCLUSION: Morbidity after surgery for incarcerated hernia remains high and depends on comorbidities and surgical presentation. The use of mesh could become current practice even in cases of bowel resection.
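The univariate and multivariate analyses mentioned above can be illustrated with a minimal sketch: Fisher's exact tests for univariate screening followed by logistic regression for adjusted odds ratios. The covariate names echo the abstract, but the data, model and library calls are purely illustrative, not the study's actual analysis.

```python
# Illustrative sketch only: synthetic data, hypothetical covariates.
import numpy as np
import statsmodels.api as sm
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n = 166  # same order of magnitude as the cohort described above

# Hypothetical binary covariates and outcome (True = complication)
bowel_resection = rng.integers(0, 2, n)
asa_3_4 = rng.integers(0, 2, n)
logit = -1.5 + 1.8 * bowel_resection + 0.7 * asa_3_4
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

# Univariate screening: Fisher's exact test per candidate risk factor
for name, factor in [("bowel resection", bowel_resection), ("ASA 3/4", asa_3_4)]:
    table = [[np.sum((factor == 1) & outcome), np.sum((factor == 1) & ~outcome)],
             [np.sum((factor == 0) & outcome), np.sum((factor == 0) & ~outcome)]]
    odds, p = fisher_exact(table)
    print(f"{name}: univariate OR={odds:.2f}, P={p:.3f}")

# Multivariate logistic regression: adjusted odds ratios
X = sm.add_constant(np.column_stack([bowel_resection, asa_3_4]))
fit = sm.Logit(outcome.astype(int), X).fit(disp=0)
for name, beta, p in zip(["bowel resection", "ASA 3/4"], fit.params[1:], fit.pvalues[1:]):
    print(f"{name}: adjusted OR={np.exp(beta):.2f}, P={p:.3f}")
```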
Abstract:
Percutaneous transluminal renal angioplasty (PTRA) is an invasive technique that is costly and involves the risk of complications and renal failure. The ability of PTRA to reduce the administration of antihypertensive drugs has been demonstrated. A potentially greater benefit, which nevertheless remains to be proven, is the deferral of the need for chronic dialysis. The aim of the study (ANPARIA) was to assess the appropriateness of PTRA to impact the evolution of renal function. A standardized expert panel method was used to assess the appropriateness of medical treatment alone or medical treatment with revascularization in various clinical situations. The choice of revascularization by either PTRA or surgery was examined for each clinical situation. Analysis was based on a detailed literature review and on systematically elicited expert opinion, obtained during a two-round modified Delphi process. The study provides detailed responses on the appropriateness of PTRA for 1848 distinct clinical scenarios. Depending on the major clinical presentation, the appropriateness of revascularization varied from 32% to 75% for individual scenarios (48% overall). Uncertainty as to revascularization was 41% overall. When revascularization was appropriate, PTRA was favored over surgery in 94% of the scenarios, except in certain cases of aortic atheroma where surgery was the preferred choice. Kidney size ≥ 7 cm, absence of coexisting disease, acute renal failure, a high degree of stenosis (≥ 70%), and absence of multiple arteries were identified as predictive variables of favorable appropriateness ratings. Situations such as cardiac failure with pulmonary edema or acute thrombosis of the renal artery were defined as indications for PTRA. This study identified clinical situations in which PTRA or surgery is appropriate for renal artery disease. We built a decision tree which can be used via the Internet: the ANPARIA software (http://www.chu-clermontferrand.fr/anparia/). In numerous clinical situations, uncertainty remains as to whether PTRA prevents deterioration of renal function.
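The standardized expert panel (RAND/UCLA-style) rating process can be sketched as follows. The 1-9 scale, the median cut-offs and the disagreement rule shown here are one common convention and are assumptions for illustration; the abstract does not specify the exact rules used by the ANPARIA panel.

```python
# Hedged sketch of turning panel ratings into appropriateness categories.
from statistics import median

def classify_scenario(ratings):
    """Classify one clinical scenario from a list of 1-9 panel ratings."""
    med = median(ratings)
    low = sum(1 for r in ratings if r <= 3)   # panellists rating 1-3
    high = sum(1 for r in ratings if r >= 7)  # panellists rating 7-9
    # Disagreement: substantial numbers at both extremes (illustrative threshold)
    if low >= len(ratings) // 3 and high >= len(ratings) // 3:
        return "uncertain (disagreement)"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

# Example: ratings for revascularization in one hypothetical scenario
print(classify_scenario([7, 8, 6, 7, 9, 8, 5, 7, 7]))  # -> appropriate
```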
Abstract:
Purpose: IOL centration and stability after cataract surgery are of high interest for cataract surgeons and IOL-producing companies. We present new imaging software to evaluate the centration of the rhexis and the centration of the IOL after cataract surgery. Methods: We developed, in collaboration with the Biomedical Imaging Group (BIG), EPFL, Lausanne, a new working tool to assess outcomes after IOL implantation precisely, such as an ideal capsulorhexis and IOL centration. The software is a plug-in for ImageJ, a general-purpose image-processing and image-analysis package. The specifications of this software are the evaluation of rhexis centration and of the position of the IOL in the posterior chamber. The endpoints are to analyze the quality of the centration of a rhexis after cataract surgery, the deformation of the rhexis with capsular bag retraction, and the centration of the IOL after implantation. Results: This software delivers tools to interactively measure the distances between the limbus, IOL and capsulorhexis and their changes over time. The user is invited to adjust nodes of three radial curves for the limbus, the rhexis and the optic of the IOL. The radial distances of the curves are computed to evaluate the IOL implantation. The user is also able to define patterns for an ideal capsulorhexis and optimal IOL centration. We will present examples of calculations after cataract surgery. Conclusions: Evaluation of the centration of the rhexis and of the IOL after cataract surgery is an important endpoint for optimal IOL implantation. Especially multifocal or accommodative lenses need a precise position in the bag with good stability over time. This software is able to evaluate these parameters immediately after surgery as well as their changes over time. The results of these evaluations can lead to the optimization of surgical procedures and materials.
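As a rough illustration of the geometry involved (not the actual ImageJ plug-in), the sketch below estimates the centres of user-traced limbus, rhexis and IOL contours and quantifies decentration relative to the limbus centre; all coordinates and the pixel scale are hypothetical.

```python
# Hypothetical sketch of rhexis/IOL decentration relative to the limbus centre.
import numpy as np

def centre(points):
    """Centroid of a closed contour given as an (N, 2) array of x, y nodes."""
    return np.asarray(points, dtype=float).mean(axis=0)

def decentration(reference_pts, contour_pts, mm_per_px=1.0):
    """Distance between the contour centre and the reference (limbus) centre."""
    return float(np.linalg.norm(centre(contour_pts) - centre(reference_pts))) * mm_per_px

# Hypothetical traced nodes (pixel coordinates) for one postoperative image
limbus = [(100, 100), (300, 100), (300, 300), (100, 300)]   # centre (200, 200)
rhexis = [(160, 150), (260, 150), (260, 250), (160, 250)]   # centre (210, 200)
iol    = [(150, 160), (250, 160), (250, 260), (150, 260)]   # centre (200, 210)

print("rhexis decentration:", decentration(limbus, rhexis))  # 10.0 px
print("IOL decentration:", decentration(limbus, iol))        # 10.0 px
```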
Abstract:
Swain corrects the chi-square overidentification test (i.e., the likelihood ratio test of fit) for structural equation models, whether with or without latent variables. The chi-square statistic is asymptotically correct; however, it does not behave as expected in small samples and/or when the model is complex (cf. Herzog, Boomsma, & Reinecke, 2007). Thus, particularly in situations where the ratio of sample size (n) to the number of parameters estimated (p) is relatively small (i.e., the p to n ratio is large), the chi-square test will tend to overreject correctly specified models. To obtain a closer approximation to the distribution of the chi-square statistic, Swain (1975) developed a correction; this scaling factor, which converges to 1 asymptotically, is multiplied by the chi-square statistic. The correction better approximates the chi-square distribution, resulting in more appropriate Type I error (rejection) rates (see Herzog & Boomsma, 2009; Herzog et al., 2007).
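A sketch of one commonly cited form of the Swain scaling factor follows; treat the exact formula, and whether n or n − 1 is used, as an assumption to be verified against Swain (1975) and Herzog & Boomsma (2009).

```python
# Hedged sketch of the Swain correction factor (verify against the sources).
import math

def swain_factor(n, p, t):
    """n: sample size, p: number of observed variables, t: free parameters."""
    d = p * (p + 1) / 2 - t                 # degrees of freedom of the model
    q = (math.sqrt(1 + 8 * t) - 1) / 2
    num = p * (2 * p**2 + 3 * p - 1) - q * (2 * q**2 + 3 * q - 1)
    return 1 - num / (12 * d * n)           # some implementations use n - 1 here

chi2_ml = 85.3                              # hypothetical ML chi-square value
s = swain_factor(n=120, p=12, t=30)
print(f"Swain factor = {s:.4f}, corrected chi-square = {s * chi2_ml:.2f}")
```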
Abstract:
Context: In the past 50 years, the use of prosthetic mesh in surgery has dramatically changed the management of primary as well as incisional hernias. Currently, there are a large number of different mesh brands and no consensus on the best material or the best mesh implantation technique. The purpose of this study is to illustrate the adverse effects of intraperitoneal onlay mesh used for incisional hernia repair, as encountered in patients treated at CHUV for complications after such repair. Materials & Methods: This work is an observational retrospective study. A PubMed search and a systematic review of the literature were performed. Thereafter, the medical records of 22 patients who presented with pain, abdominal discomfort, ileus, fistula, abscess, seroma, mesh infection or recurrent incisional hernia after a laparoscopic or open repair with intra-abdominal mesh were reviewed. Results: Twenty-two patients were reoperated for complications after incisional hernia repair with a prosthetic mesh. Ten were male and twelve female, with a median age of 58.6 years (range 24-82). Mesh placement was performed by a laparoscopic approach in nine patients and by an open approach in the thirteen others. Eight different mesh brands were found (Ultrapro®, Mersilene®, Parietex Composite®, Proceed®, DynaMesh®, Gore® DualMesh®, Permacol®, Titanium Metals UK Ltd®). The mean time from implantation to reoperation for a complication was 34.2 months (range 1-147). In our sample of 22 patients, 21 (96%) presented with mesh adhesions and 15 (68%) with hernia recurrence. Other complications, such as mesh shrinkage, mesh migration, nerve entrapment, seroma, fistula and abscess, were also evaluated. Conclusion: The majority of articles deal with complications induced by intraperitoneal prosthetic mesh, but the effectiveness of mesh has been studied mostly in experimental models. Indeed, as shown in the present study, intraperitoneal mesh placement was associated with severe complications which may potentially be life-threatening. In our opinion, intraperitoneal mesh placement should be reserved for exceptional situations, when a modified Rives-Stoppa repair cannot be achieved and when the tissues covering the mesh are insufficient.
Abstract:
Acute brain slices are slices of brain tissue that are kept vital in vitro for further recordings and analyses. This tool is of major importance in neurobiology and allows the study of brain cells such as microglia, astrocytes and neurons and their inter-/intracellular communication via ion channels or transporters. In combination with light/fluorescence microscopy, acute brain slices enable the ex vivo analysis of specific cells or groups of cells inside the slice, e.g. astrocytes. To bridge ex vivo knowledge of a cell with its ultrastructure, we developed a correlative microscopy approach for acute brain slices. The workflow begins with sampling of the tissue and precise trimming of a region of interest, which contains GFP-tagged astrocytes that can be visualised by fluorescence microscopy of ultrathin sections. The astrocytes and their surroundings are then analysed by high-resolution scanning transmission electron microscopy (STEM). An important aspect of this workflow is the modification of a commercial cryo-ultramicrotome to observe the fluorescent GFP signal during the trimming process. This ensures that sections contain at least one GFP-positive astrocyte. After cryo-sectioning, a map of the GFP-expressing astrocytes is established and transferred to correlation software installed on a focused ion beam scanning electron microscope equipped with a STEM detector. Next, the areas displaying fluorescence are selected for high-resolution STEM imaging. An overview area (e.g. a whole mesh of the grid) is imaged with an automated tiling and stitching process. In the final stitched image, the local organisation of the brain tissue can be surveyed or areas of interest can be magnified to observe fine details, e.g. vesicles or gold labels on specific proteins. The robustness of this workflow is contingent on the quality of sample preparation, based on Tokuyasu's protocol. This method results in a reasonable compromise between preservation of morphology and maintenance of antigenicity. Finally, an important feature of this approach is that the fluorescence of the GFP signal is preserved throughout the entire preparation process until the last step before electron microscopy.
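The automated tiling and stitching step can be illustrated with a minimal sketch that pastes equally sized tiles acquired on a regular grid into one mosaic; real stitching software additionally registers overlapping tile borders, which is omitted here, and the tile sizes are hypothetical.

```python
# Minimal sketch of assembling grid-acquired tiles into one overview mosaic.
import numpy as np

def stitch(tiles, grid_shape):
    """tiles: list of equally sized 2-D arrays in row-major acquisition order."""
    rows, cols = grid_shape
    th, tw = tiles[0].shape
    mosaic = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# Four hypothetical 256x256 tiles covering a 2x2 overview area
tiles = [np.full((256, 256), i, dtype=np.uint16) for i in range(4)]
print(stitch(tiles, (2, 2)).shape)  # (512, 512)
```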
Abstract:
Totally extraperitoneal laparoscopic hernia repair is an efficient but technically demanding procedure. As mechanisms of hernia recurrence may be related to these technical difficulties, we have modified a previously described double-mesh technique in an effort to simplify the procedure. Extraperitoneal laparoscopic hernia repairs were performed in 82 male and 17 female patients with inguinal, femoral, and recurrent bilateral hernias. A standard polypropylene mesh measuring 15 x 15 cm was cut into two pieces of 4 x 15 cm and 11 x 15 cm. The smaller mesh was placed over both inguinal rings without splitting. The larger mesh was then inserted over the first mesh and stapled to low-risk zones, reinforcing the large-vessel area and the nerve transition zone. The mean procedure duration was 60 minutes for unilateral and 100 minutes for bilateral hernia repair. Patients were discharged from the hospital within 48 hours. The mean postoperative follow-up was 22 months, with no recurrences, neuralgia, or bleeding complications. Over a 2-year period, this technique was found to be satisfactory, without recurrences or significant complications. In our hands, this technique was easier to perform: it tolerates less-than-perfect positioning of the meshes and avoids most of the stapling to crucial zones.
Abstract:
SUMMARY: We present a tool designed for the visualization of large-scale genetic and genomic data, exemplified by results from genome-wide association studies. This software provides an integrated framework to facilitate the interpretation of SNP association studies in their genomic context. Gene annotations can be retrieved from Ensembl, linkage disequilibrium data downloaded from HapMap, and custom data imported in BED or WIG format. AssociationViewer integrates functionalities that enable the aggregation or intersection of data tracks. It implements an efficient cache system and allows the display of several very large genomic datasets. AVAILABILITY: The Java code for AssociationViewer is distributed under the GNU General Public Licence and has been tested on Microsoft Windows XP, MacOSX and GNU/Linux operating systems. It is available from the SourceForge repository, which also includes Java Web Start files, documentation and example data files.
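As an illustration of the custom-track import mentioned above, the sketch below parses the three mandatory BED columns plus an optional name field; it is a generic example, not AssociationViewer's own parser.

```python
# Minimal sketch of reading a custom track in BED format.
from io import StringIO

def read_bed(handle):
    """Yield (chrom, start, end, name) tuples from a BED file handle."""
    for line in handle:
        line = line.strip()
        if not line or line.startswith(("track", "browser", "#")):
            continue  # skip headers and comments
        fields = line.split("\t")
        chrom, start, end = fields[0], int(fields[1]), int(fields[2])
        name = fields[3] if len(fields) > 3 else ""
        yield chrom, start, end, name

example = "track name=custom\nchr1\t1000\t5000\trs12345\nchr2\t200\t900\n"
for record in read_bed(StringIO(example)):
    print(record)
```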
Abstract:
Background: The TID ratio indirectly reflects myocardial ischemia and is correlated with cardiac prognosis. We aimed to compare the influence of three different software packages on the assessment of TID using Rb-82 cardiac PET/CT. Methods: In total, data from 30 patients were used, based on normal myocardial perfusion (SSS < 3 and SRS < 3) and stress myocardial blood flow (≥ 2 mL/min/g) assessed by Rb-82 cardiac PET/CT. After reconstruction using 2D OSEM (2 iterations, 28 subsets) and 3-D filtering (Butterworth, order = 10, ωc = 0.5), data were processed automatically, and then manually to define identical basal and apical limits on both stress and rest images. TID ratios were determined with the Myometrix®, ECToolbox® and QGS® software packages. Comparisons used ANOVA, Student t-tests and the Lin concordance test (ρc). Results: All 90 processing runs were successfully performed. TID ratios were not statistically different between software packages when data were processed automatically (P = 0.2) or manually (P = 0.17). There was a slight but significant relative overestimation of TID with automatic processing compared with manual processing using ECToolbox® (1.07 ± 0.13 vs 1.00 ± 0.13, P = 0.001) and Myometrix® (1.07 ± 0.15 vs 1.01 ± 0.11, P = 0.003), but not using QGS® (1.02 ± 0.12 vs 1.05 ± 0.11, P = 0.16). The best concordance was achieved between ECToolbox® and Myometrix® manual processing (ρc = 0.67). Conclusion: Whether automatic or manual mode was used, TID estimation was not significantly influenced by software type. With Myometrix® or ECToolbox®, TID differed significantly between automatic and manual processing, but not with QGS®. The software package should be accounted for when defining TID normal reference limits, as well as in multicenter studies. QGS® appeared to be the most operator-independent software package, while ECToolbox® and Myometrix® produced the closest results.
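The Lin concordance test used above is based on the concordance correlation coefficient ρc, which can be computed directly from paired measurements; the TID values in the sketch below are invented for illustration.

```python
# Sketch of Lin's concordance correlation coefficient between two software packages.
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))      # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

tid_software_a = [1.02, 0.98, 1.10, 1.05, 0.95, 1.08]   # hypothetical TID ratios
tid_software_b = [1.00, 0.99, 1.12, 1.03, 0.97, 1.06]
print(f"rho_c = {lin_ccc(tid_software_a, tid_software_b):.3f}")
```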
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®), which can perform angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappas, for the total score (κ 0.778→0.809) and all of the items that required angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient approach to improving the reliability of visual video assessments.
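The weighted Cohen's kappa used above can be computed, for example, with scikit-learn's cohen_kappa_score using linear weights; the rater scores below are hypothetical, not the study's data.

```python
# Sketch of weighted Cohen's kappa between two raters scoring the same videos.
from sklearn.metrics import cohen_kappa_score

# Hypothetical Observational Gait Scale item scores from two observers
rater_1 = [3, 2, 2, 1, 3, 0, 2, 1, 3, 2]
rater_2 = [3, 2, 1, 1, 3, 1, 2, 2, 3, 2]

print("linear weighted kappa:",
      round(cohen_kappa_score(rater_1, rater_2, weights="linear"), 3))
```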
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents the gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the characteristics of the available software. The number of drugs handled varies widely, and 8 programs allow the user to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Although 2 integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be assessed with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity and report generation.
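The weighted scoring grid described above can be sketched as a simple weighted sum per program; the criteria follow the abstract, but the weights, program names and scores are invented.

```python
# Minimal sketch of a weighted benchmarking grid (hypothetical weights/scores).
criteria_weights = {"pharmacokinetic relevance": 3.0, "user-friendliness": 2.0,
                    "computing aspects": 1.5, "interfacing": 1.5, "storage": 1.0}

scores = {  # criterion -> raw score (e.g. on a 0-10 scale) for each program
    "Program A": {"pharmacokinetic relevance": 8, "user-friendliness": 7,
                  "computing aspects": 6, "interfacing": 5, "storage": 6},
    "Program B": {"pharmacokinetic relevance": 6, "user-friendliness": 9,
                  "computing aspects": 7, "interfacing": 4, "storage": 8},
}

def weighted_total(program_scores):
    return sum(criteria_weights[c] * s for c, s in program_scores.items())

# Rank programs by their weighted total score
for name, total in sorted(((p, weighted_total(s)) for p, s in scores.items()),
                          key=lambda item: item[1], reverse=True):
    print(f"{name}: {total:.1f}")
```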
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents the gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the characteristics of the available software. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them can compute an a posteriori Bayesian dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Although 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be assessed with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
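The a posteriori Bayesian (MAP) step described above can be sketched for a hypothetical one-compartment IV bolus drug with a single measured concentration; the population priors, residual error and measurement are invented, and real TDM software relies on validated population PK models.

```python
# Hedged sketch of MAP Bayesian individualization from one measured concentration.
import numpy as np
from scipy.optimize import minimize

# Hypothetical population priors (log-normal): clearance CL (L/h), volume V (L)
pop_cl, omega_cl = 5.0, 0.3
pop_v, omega_v = 50.0, 0.2
sigma = 0.5                              # residual SD on the concentration (mg/L)

dose, t_obs, c_obs = 500.0, 8.0, 4.2     # IV bolus dose (mg), sampling time (h), measured conc.

def neg_log_posterior(log_params):
    cl, v = np.exp(log_params)
    c_pred = dose / v * np.exp(-cl / v * t_obs)           # one-compartment IV bolus
    loglik = -0.5 * ((c_obs - c_pred) / sigma) ** 2       # Gaussian residual error
    logprior = (-0.5 * ((log_params[0] - np.log(pop_cl)) / omega_cl) ** 2
                - 0.5 * ((log_params[1] - np.log(pop_v)) / omega_v) ** 2)
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, x0=np.log([pop_cl, pop_v]), method="Nelder-Mead")
cl_map, v_map = np.exp(fit.x)
print(f"individual MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")
# These individual parameters can then be used to simulate candidate dosage regimens.
```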
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are discussed, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that allows both students and researchers to put the concepts rapidly into practice.