965 results for diagnostic techniques and procedure
Abstract:
BACKGROUND The skin patch test is the gold-standard method for diagnosing contact allergy. Although used for more than 100 years, the patch test procedure is performed with variability around the world. A number of factors can influence the test results, namely the quality of the reagents used, the timing of the application, the patch test series (allergens/haptens) used for testing, the appropriate interpretation of the skin reactions and the evaluation of the patient's benefit. METHODS We performed an Internet-based survey with 38 questions covering the educational background of respondents, patch test methods and interpretation. The questionnaire was distributed among all representatives of national member societies of the World Allergy Organization (WAO), and the WAO Junior Members Group. RESULTS One hundred sixty-nine completed surveys were received from 47 countries. The majority of participants had more than 5 years of clinical practice (61%) and routinely carried out patch tests (70%). Both allergists and dermatologists were responsible for carrying out the patch tests. We observed the use of many different guidelines regardless of geographical distribution. The use of home-made preparations was indicated by 47% of participants, and 73% of the respondents performed 2 or 3 readings. Most of the respondents indicated having patients with adverse reactions, including erythroderma (12%); however, only 30% of members completed a consent form before conducting the patch test. DISCUSSION The heterogeneity of patch test practices may be influenced by the level of awareness of clinical guidelines, different training backgrounds, accessibility to various types of devices, the patch test series (allergens/haptens) used for testing, the type of clinical practice (public or private practice, clinical or research-based institution), infrastructure availability, and financial/commercial implications and regulations, among others.
CONCLUSION There is a lack of worldwide homogeneity in patch test procedures, and this raises concerns about the need for standardization and harmonization of this important diagnostic procedure.
Abstract:
OBJECTIVES: Laboratory detection of vancomycin-intermediate Staphylococcus aureus (VISA) and their heterogeneous VISA (hVISA) precursors is difficult. Thus, it is possible that vancomycin failures against supposedly vancomycin-susceptible S. aureus are due to undiagnosed VISA or hVISA. We tested this hypothesis in experimental endocarditis. METHODS: Rats with aortic valve infection due to the vancomycin-susceptible (MIC 2 mg/L), methicillin-resistant S. aureus M1V2 were treated for 2 days with doses of vancomycin that mimicked the pharmacokinetics seen in humans following intravenous administration of 1 g of the drug every 12 h. Half of the treated animals were killed 8 h after treatment arrest and half 3 days thereafter. Population analyses were done directly on vegetation homogenates or after one subculture in drug-free medium to mimic standard diagnostic procedures. RESULTS: Vancomycin cured 14 of 26 animals (54%; P<0.05 versus controls) after 2 days of treatment. When vegetation homogenates were plated directly on vancomycin-containing plates, 6 of 13 rats killed 8 h after treatment arrest had positive cultures, 1 of which harboured hVISA. Likewise, 6 of 13 rats killed 3 days thereafter had positive valve cultures, 5 of which harboured hVISA. However, one subculture of vegetations in drug-free broth was enough to revert all the hVISA phenotypes to the susceptible pattern of the parent. Thus, vancomycin selected for hVISA during therapy of experimental endocarditis due to vancomycin-susceptible S. aureus. These hVISA were associated with vancomycin failure. The hVISA phenotype persisted in vivo, even after vancomycin arrest, but was missed in vitro after a single passage of the vegetation homogenate on drug-free medium. CONCLUSIONS: hVISA might escape detection in clinical samples if they are subcultured before susceptibility tests.
Abstract:
The structural modeling of spatial dependence, using a geostatistical approach, is an indispensable tool for determining the parameters that define this structure, which are applied in the interpolation of values at unsampled points by kriging techniques. However, parameter estimation can be strongly affected by the presence of atypical observations in the sampled data. The purpose of this study was to use diagnostic techniques for Gaussian spatial linear models in geostatistics to evaluate the sensitivity of the maximum likelihood and restricted maximum likelihood estimators to small perturbations in these data. For this purpose, studies with simulated and experimental data were conducted. Results with simulated data showed that the diagnostic techniques were efficient in identifying the perturbations in the data. The results with real data indicated that atypical values among the sampled data may have a strong influence on thematic maps, thus changing the spatial dependence structure. The application of diagnostic techniques should be part of any geostatistical analysis, to ensure a better quality of the information from thematic maps.
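To illustrate the kind of distortion the abstract describes, the pure-Python sketch below computes a classical (Matheron) empirical semivariogram on synthetic 1-D data and shows how a single atypical observation inflates the estimated spatial dependence at every lag. The data, lags and outlier magnitude are illustrative assumptions, not the study's dataset or method.

```python
import random

def semivariogram(x, z, lags, tol=0.5):
    """Classical (Matheron) empirical semivariogram for 1-D locations:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs at distance ~h."""
    out = []
    for h in lags:
        sq = [0.5 * (z[i] - z[j]) ** 2
              for i in range(len(x)) for j in range(i + 1, len(x))
              if abs(abs(x[i] - x[j]) - h) <= tol]
        out.append(sum(sq) / len(sq) if sq else float("nan"))
    return out

rng = random.Random(0)
x = [float(i) for i in range(50)]
z = [rng.gauss(10.0, 1.0) for _ in range(50)]   # synthetic sampled values
lags = [1.0, 2.0, 3.0]
g_clean = semivariogram(x, z, lags)

z_pert = z[:]
z_pert[25] += 15.0                               # one atypical observation
g_pert = semivariogram(x, z_pert, lags)
# the single outlier inflates the semivariogram at every lag: exactly
# the kind of distortion that influence diagnostics aim to flag
```

A leave-one-out comparison of the fitted variogram parameters, in the same spirit, is the basic ingredient of the influence diagnostics the abstract refers to.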
Abstract:
Our inability to adequately treat many patients with refractory epilepsy caused by focal cortical dysplasia (FCD), together with surgical inaccessibility and surgical failures, represents a significant clinical drawback. The targeting of physiologic features of epileptogenesis in FCD and the colocalization of functionality have enhanced the completeness of surgical resection, the main determinant of outcome. Electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and magnetoencephalography are helpful in guiding electrode implantation and surgical treatment, and high-frequency oscillations help define the extent of the epileptogenic dysplasia. Ultra-high-field MRI has a role in understanding the laminar organization of the cortex, and fluorodeoxyglucose-positron emission tomography (FDG-PET) is highly sensitive for detecting FCD in MRI-negative cases. Multimodal imaging is clinically valuable, either by improving the rate of postoperative seizure freedom or by reducing postoperative deficits. However, there is no level 1 evidence that it improves outcomes. Proof of a specific effect of antiepileptic drugs (AEDs) in FCD is lacking. Pathogenic mutations recently described in mammalian target of rapamycin (mTOR) genes in FCD have yielded important insights into novel treatment options with mTOR inhibitors, which might represent an example of personalized treatment of epilepsy based on the known mechanisms of disease. The ketogenic diet (KD) has been demonstrated to be particularly effective in children with epilepsy caused by structural abnormalities, especially FCD. It attenuates epigenetic chromatin modifications, a master regulator of gene expression and functional adaptation of the cell, thereby modifying disease progression. This could imply a lasting benefit of dietary manipulation. Neurostimulation techniques have produced variable clinical outcomes in FCD.
In widespread dysplasias, vagus nerve stimulation (VNS) has achieved responder rates >50%; however, the efficacy of noninvasive cranial nerve stimulation modalities such as transcutaneous VNS (tVNS) and noninvasive VNS (nVNS) requires further study. Although review of current strategies underscores the serious shortcomings of treatment-resistant cases, initial evidence from novel approaches suggests that future success is possible.
Abstract:
Objective: To determine the rates of diagnostic underestimation at stereotactic percutaneous core needle biopsy (CNB) and vacuum-assisted biopsy (VABB) of nonpalpable breast lesions with histopathological results of atypical ductal hyperplasia (ADH) or ductal carcinoma in situ (DCIS) subsequently submitted to surgical excision. As a secondary objective, the frequency of ADH and DCIS was determined for the cases submitted to biopsy. Materials and Methods: Retrospective review of 40 cases with a diagnosis of ADH or DCIS on the basis of biopsies performed between February 2011 and July 2013, subsequently submitted to surgery, whose histopathological reports were available in the internal information system. Biopsy results were compared with those observed at surgery, and the underestimation rate was calculated by means of specific mathematical equations. Results: The underestimation rate at CNB was 50% for ADH and 28.57% for DCIS, while at VABB it was 25% for ADH and 14.28% for DCIS. ADH represented 10.25% of all cases undergoing biopsy, whereas DCIS accounted for 23.91%. Conclusion: The diagnostic underestimation rate at CNB is twice that at VABB. Certainty that the target has been reached is not the sole determining factor for a reliable diagnosis; removal of more than 50% of the target lesion should further reduce the risk of underestimation.
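As a small illustration, underestimation rates of this kind are commonly computed as the percentage of biopsy-diagnosed lesions that were upgraded to a more severe histology at surgical excision. The counts below are illustrative assumptions, chosen only to reproduce two of the reported percentages, not the study's actual case counts.

```python
def underestimation_rate(upgraded, total):
    """Underestimation rate as commonly defined for breast biopsy series:
    the percentage of lesions whose surgical histology was more severe
    than the biopsy diagnosis (e.g. ADH at biopsy, DCIS or invasive
    carcinoma at excision)."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * upgraded / total

# illustrative counts: 4 of 8 ADH lesions diagnosed at CNB upgraded at
# surgery, and 2 of 7 DCIS lesions upgraded
adh_cnb = underestimation_rate(4, 8)     # 50.0
dcis_cnb = underestimation_rate(2, 7)    # 28.57...
```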
Abstract:
Forty-one wild house mice (Mus musculus) were trapped in an urban area, near railways, in Santa Fe city, Argentina. Both kidneys from each mouse were removed for bacteriological and histological examination. One kidney was inoculated into Fletcher semi-solid medium and isolates were serologically typed. The other kidney was examined microscopically after hematoxylin-eosin, silver impregnation and immunohistochemical staining. Leptospires, all belonging to the Ballum serogroup, were isolated from 16 (39%) of the 41 samples. The presence of the agent was recorded in 18 (44%) and 19 (46%) of the 41 silver-impregnated and immunohistochemically stained samples, respectively. Additionally, leptospires were detected in high numbers on the apical surface of epithelial cells and in the lumen of medullary tubules, and were less frequently seen on the apical surface of epithelial cells or in the lumen of the cortical tubules, which represents an unusual finding in carrier animals. Microscopic lesions, consisting of focal mononuclear interstitial nephritis, glomerular shrinkage and desquamation of tubular epithelial cells, were observed in 13 of 19 infected and in 10 of 22 non-infected mice; differences in the presence of lesions between infected and non-infected animals were not statistically significant (P=0.14). The three techniques (culture, silver impregnation and immunohistochemistry) showed high agreement (κ≥0.85) and no significant differences between them were detected (P>0.05). In addition, an unusual location of leptospires in the kidneys of carrier animals was reported, but a relationship between lesions and the presence of leptospires could not be established.
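For readers wanting to reproduce an agreement statistic of this kind, the sketch below computes Cohen's kappa for two binary diagnostic tests on the same animals. The positive/negative split is hypothetical, chosen only to match the reported marginals (16 culture-positive and 19 immunohistochemistry-positive of 41), and under that assumption comes out just above 0.85.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) tests applied to the same
    subjects; 1.0 = perfect agreement, 0.0 = chance-level agreement."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                  # chance agreement
    if pe == 1.0:
        return 1.0      # degenerate case: both tests constant and identical
    return (po - pe) / (1 - pe)

# hypothetical per-mouse calls: all 16 culture-positives also positive by
# immunohistochemistry, plus 3 immunohistochemistry-only positives
culture = [1] * 16 + [0] * 3 + [0] * 22
ihc     = [1] * 16 + [1] * 3 + [0] * 22
kappa = cohens_kappa(culture, ihc)
```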
Abstract:
In this paper, the optimum design of 3R manipulators is formulated and solved by using an algebraic formulation of the workspace boundary. Manipulator design can be approached as an optimization problem in which the objective functions are the size of the manipulator and the workspace volume, and the constraints can be given as a prescribed workspace volume. The numerical solution of the optimization problem is investigated by using two different numerical techniques, namely sequential quadratic programming and simulated annealing. Numerical examples illustrate the design procedure and show the efficiency of the proposed algorithms.
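As a hedged illustration of the second technique, the sketch below runs a plain simulated annealing loop on a toy version of the problem: minimize the total link length of a planar 3R arm subject to a prescribed workspace area, with the workspace crudely approximated as an annulus. The penalty weight, cooling schedule and required area are illustrative assumptions, not values from the paper.

```python
import math, random

def workspace_area(L):
    """Approximate reachable area of a planar 3R arm: an annulus with
    outer radius L1+L2+L3 and inner radius max(0, L1-(L2+L3))."""
    R = sum(L)
    r = max(0.0, L[0] - L[1] - L[2])
    return math.pi * (R ** 2 - r ** 2)

def anneal(A_req, steps=20000, seed=1):
    rng = random.Random(seed)
    L = [1.0, 1.0, 1.0]                       # initial link lengths

    def cost(v):
        # manipulator size plus a penalty when the prescribed
        # workspace area is not reached
        deficit = max(0.0, A_req - workspace_area(v))
        return sum(v) + 100.0 * deficit

    best, best_c = L[:], cost(L)
    T = 1.0
    for _ in range(steps):
        cand = [max(0.05, x + rng.gauss(0.0, 0.05)) for x in L]
        dc = cost(cand) - cost(L)
        if dc < 0 or rng.random() < math.exp(-dc / T):
            L = cand
            if cost(L) < best_c:
                best, best_c = L[:], cost(L)
        T = max(1e-3, 0.999 * T)              # geometric cooling
    return best

A_req = math.pi                               # prescribed workspace area
L_opt = anneal(A_req)
```

Since the annulus area grows with the total link length, the optimum sits near the constraint boundary, where sum(L_opt) is close to sqrt(A_req/pi) = 1.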
Abstract:
Modeling of the spatial dependence structure, in a geostatistical approach, is an indispensable tool for fixing the parameters that define this structure, which are applied in the interpolation of values at unsampled locations by kriging techniques. However, parameter estimation can be greatly affected by the presence of atypical observations in the sampled data. Thus, this study aimed at using diagnostic techniques of local influence in Gaussian spatial linear models, applied in geostatistics, in order to evaluate the sensitivity of the maximum likelihood and restricted maximum likelihood estimators to small perturbations in these data. Studies with simulated and experimental data were performed. The results obtained from the study of real data allowed us to conclude that the presence of atypical values among the sampled data can have a strong influence on thematic maps, therefore changing the spatial dependence. The application of diagnostic techniques of local influence should be part of any geostatistical analysis, ensuring that the information from thematic maps has better quality and can be used with greater confidence by farmers.
Abstract:
Primary stability of stems in cementless total hip replacements is recognized to play a critical role in long-term survival and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue. Different approaches have been explored aiming to evaluate the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room; nevertheless, most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies, but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented to describe further results obtained with that device. In this scenario, it was believed that an instrument that could measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. This instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant.
It essentially consists of a torsional load cell, able to measure the extent of torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connecting the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed in order to convert the angular readout into a linear displacement measurement, which is clinically relevant information that is simple for the surgeon to read in real time. The second study reported in my thesis concerns the evaluation of the possibility of obtaining predictive information regarding the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be really useful to the surgeon, who could check prior to implantation whether the planned stem size can achieve a sufficient degree of primary stability, under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from a previously validated device, which was adapted for the specific purpose. The device is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem, when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro, as in a closed loop, verification was made on two hip patients, confirming the results obtained in-vitro and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community, but was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluating the extent of fixation of dental implants has been explored, even if its validity in this field is still under discussion. Several studies have been published recently on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on the vibration analysis technique to measure intra-operatively the extent of implant stability. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to other devices based on direct micromotion measurement. The prototype developed consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the device based on the vibration analysis technique to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered.
The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, because this is the most challenging detection for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the results of the acquisitions made with the vibration-based tool to two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, showing high correlation with residual micromotion on all the tested specimens. Thus, it seems possible to discriminate between many levels of stability, from the grossly loosened implant, through quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest, becoming fairly popular in some countries in the last few years: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants had not yet been investigated in-vitro, but only clinically. Thus the work was aimed at verifying whether it was possible to apply one of the intra-operative devices just validated to the measurement of micromotion in resurfacing implants. To that end, a preliminary study was performed in order to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotion of resurfacing implants. This included a set of in-vitro loading scenarios spanning the range of directions of the hip resultant forces in the most typical motor tasks.
The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromotion associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of the stability of the stem; thus it could be useful to the surgeon in checking whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by the surgeon. It still needs clinical evaluation, and must be industrialized as well. The in-vitro tool worked very well, and can be applied for assessing resurfacing implants pre-clinically.
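As a back-of-the-envelope illustration of why the resonance frequency of the stem-bone system shifts with fixation, consider an undamped one-degree-of-freedom mass-spring model: a stiffer (better fixed) interface raises the natural frequency monotonically. This is only a toy model; the stiffness and mass values are purely illustrative and not taken from the thesis.

```python
import math

def resonance_hz(k, m):
    """Natural frequency of an undamped 1-DOF mass-spring system:
    f = sqrt(k/m) / (2*pi). Here k plays the role of an (illustrative)
    stem-bone interface stiffness in N/m and m the vibrating mass in kg."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# quadrupling the interface stiffness doubles the resonance frequency;
# this monotone shift is what makes the frequency usable as a
# stability indicator
f_loose = resonance_hz(1.0e6, 0.5)
f_stable = resonance_hz(4.0e6, 0.5)
```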
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second one occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results. The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs) (i.e., a class of NP-hard generalizations of the classical minimum spanning tree problem). In this chapter, we show how some basic ideas (and, in particular, the usage of general-purpose cutting planes) can be useful to improve on branch-and-cut methods proposed in the literature.
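To make the LP-relaxation-plus-branching machinery that underlies branch-and-cut solvers concrete, here is a minimal branch-and-bound sketch for the 0/1 knapsack problem, a case where the LP relaxation can be solved greedily by the classical Dantzig rule. This is a toy illustration of the general scheme, not any of the thesis's algorithms.

```python
def knapsack_bb(values, weights, cap):
    """Tiny LP-bound branch-and-bound for the 0/1 knapsack: solve the LP
    relaxation, prune nodes whose bound cannot beat the incumbent, and
    branch on the (single) fractional variable."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i],
                   reverse=True)

    def lp_bound(fixed):
        """LP-relaxation optimum with some variables fixed to 0/1
        (greedy fractional fill is exact for the knapsack relaxation)."""
        val, room = 0.0, cap
        for i, x in fixed.items():
            if x:
                val += values[i]
                room -= weights[i]
        if room < 0:
            return -1.0, None            # fixing is infeasible
        frac = None
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= room:
                val += values[i]
                room -= weights[i]
            else:
                if room > 0:
                    val += values[i] * room / weights[i]
                    frac = i             # fractional variable to branch on
                break
        return val, frac

    best = 0
    stack = [dict()]
    while stack:
        fixed = stack.pop()
        bound, frac = lp_bound(fixed)
        if bound <= best:
            continue                     # prune by LP bound
        if frac is None:                 # LP optimum is integral: incumbent
            best = int(round(bound))
            continue
        for x in (0, 1):                 # branch on the fractional variable
            child = dict(fixed)
            child[frac] = x
            stack.append(child)
    return best
```

Cutting planes, the focus of the first part of the thesis, would enter this loop by tightening the relaxation before branching; here only bounding and branching are shown.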
Abstract:
Over the past decades, major progress in patient selection, surgical techniques and anaesthetic management has largely contributed to improved outcomes in lung cancer surgery. The purpose of this study was to identify predictors of post-operative cardiopulmonary morbidity in patients with a forced expiratory volume in 1 s <80% predicted, who underwent cardiopulmonary exercise testing (CPET). In this observational study, 210 consecutive patients with lung cancer underwent CPET with complete data over a 9-yr period (2001-2009). Cardiopulmonary complications occurred in 46 (22%) patients, including four (1.9%) deaths. On logistic regression analysis, peak oxygen uptake (peak V'O₂) and anaesthesia duration were independent risk factors for both cardiovascular and pulmonary complications; age and the extent of lung resection were additional predictors of cardiovascular complications, whereas tidal volume during one-lung ventilation was a predictor of pulmonary complications. Compared with patients with peak V'O₂ >17 mL·kg⁻¹·min⁻¹, those with a peak V'O₂ <10 mL·kg⁻¹·min⁻¹ had a four-fold higher incidence of cardiac and pulmonary morbidity. Our data support the use of pre-operative CPET and the application of an intra-operative protective ventilation strategy. Further studies should evaluate whether pre-operative physical training can improve post-operative outcomes.
Abstract:
Mechanical thrombectomy provides higher recanalization rates than intravenous or intra-arterial thrombolysis. This has now been shown to translate into improved clinical outcomes in six multicentre randomized controlled trials. However, within cohorts the clinical outcomes may vary, depending on the endovascular techniques applied. Systems aiming mainly at thrombus fragmentation and lacking protection against distal embolization have shown disappointing results when compared to recent stent-retriever studies, or even to historical data on local arterial fibrinolysis. Procedure-related embolic events are usually graded as adverse events in interventional neuroradiology. In stroke, however, the clinical consequences of secondary emboli have so far mostly been neglected and attributed to progression of the stroke itself. We summarize the evolution of instruments and techniques for endovascular, image-guided, microneurosurgical recanalization in acute stroke, and discuss how to avoid procedure-related embolic complications.
Abstract:
This study establishes the extent and relevance of bias in population estimates of prevalence, incidence, and intensity of infection with Schistosoma mansoni caused by the relative sensitivity of stool examination techniques. The population studied was Parcelas de Boqueron in Las Piedras, Puerto Rico, where the Centers for Disease Control had undertaken a prospective community-based study of infection with S. mansoni in 1972. During each January of the succeeding years, stool specimens from this population were processed according to the modified Ritchie concentration (MRC) technique. During January 1979, additional stool specimens were collected from 30 individuals selected on the basis of their mean S. mansoni egg output during previous years. Each specimen was divided into ten 1-g aliquots and three 42-mg aliquots. The relationship of egg counts obtained with the Kato-Katz (KK) thick smear technique as a function of the mean of ten counts obtained with the MRC technique was established by means of regression analysis. Additionally, the effect of fecal sample size and egg excretion level on technique sensitivity was evaluated during a blind assessment of single stool specimen samples, using both examination methods, from 125 residents with documented S. mansoni infections. The regression equation was ln KK = 2.3324 + 0.6319 ln MRC, with a coefficient of determination (r²) of 0.73. The regression equation was then utilized to correct the term "m" for sample size in the expression P(≥1 egg) = 1 - e^(-ms), which estimates the probability P of finding at least one egg as a function of the mean S. mansoni egg output "m" of the population and the effective stool sample size "s" utilized by the coprological technique. This algorithm closely approximated the observed sensitivity of the KK and MRC tests when these were utilized to blindly screen a population of known parasitologic status for infection with S. mansoni.
In addition, the algorithm was utilized to adjust the apparent prevalence of infection for the degree of functional sensitivity exhibited by the diagnostic test. This permitted the estimation of the true prevalence of infection and, hence, a means of correcting estimates of incidence of infection.
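The detection-probability expression and the reported regression can be evaluated directly. The sketch below encodes both; the infection intensity of 5 eggs per gram used in the example is an illustrative value, not a figure from the study.

```python
import math

def detection_prob(m, s):
    """P(>= 1 egg) = 1 - exp(-m*s): probability that a technique examining
    s grams of stool finds at least one egg when the mean egg output is
    m eggs per gram (the Poisson-type model used in the text)."""
    return 1.0 - math.exp(-m * s)

def kk_from_mrc(mrc):
    """Predicted Kato-Katz count from the mean MRC count, using the
    reported regression ln KK = 2.3324 + 0.6319 ln MRC."""
    return math.exp(2.3324 + 0.6319 * math.log(mrc))

# sensitivity of a single 42-mg Kato-Katz smear versus a 1-g MRC aliquot
# for a light infection of 5 eggs per gram (illustrative intensity):
p_kk = detection_prob(5.0, 0.042)
p_mrc = detection_prob(5.0, 1.0)
```

The larger effective sample size of the MRC technique translates directly into a higher single-specimen sensitivity, which is the bias the study quantifies.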