Abstract:
The integrity of multi-component structures is usually determined by their joints. Adhesive bonding is often preferred over traditional joining methods because it reduces stress concentrations, carries a smaller weight penalty, and is easy to manufacture. Commercial adhesives range from strong and brittle (e.g., Araldite® AV138) to less strong and ductile (e.g., Araldite® 2015). A new family of polyurethane adhesives combines high strength and ductility (e.g., Sikaforce® 7888). In this work, the performance of the three above-mentioned adhesives was tested in single lap joints with varying values of overlap length (LO). The experimental work is accompanied by a detailed numerical analysis by finite elements, based either on cohesive zone models (CZM) or the extended finite element method (XFEM). This procedure enabled a detailed assessment of these predictive techniques as applied to bonded joints. Moreover, it was possible to evaluate which family of adhesives is best suited for each joint geometry. CZM proved to be highly accurate, except for markedly ductile adhesives, although this could be circumvented with a different cohesive law shape. XFEM is not well suited for mixed-mode damage growth, but rough strength predictions were still achieved.
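For illustration, a minimal sketch of the triangular (bilinear) traction-separation law commonly used as the default in CZM analyses of bonded joints; ductile adhesives are often better captured by a trapezoidal law, the kind of alternative shape the abstract alludes to. Parameter names and values here are illustrative, not from the paper.

    import numpy as np

    def triangular_czm(delta, delta0, deltaf, t0):
        """Triangular (bilinear) traction-separation law.

        delta  : current separation
        delta0 : separation at damage onset (peak traction)
        deltaf : separation at complete failure
        t0     : cohesive strength (peak traction)
        """
        k0 = t0 / delta0                      # initial (undamaged) stiffness
        if delta <= delta0:
            return k0 * delta                 # linear-elastic branch
        if delta >= deltaf:
            return 0.0                        # fully failed, no load transfer
        # linear softening branch via the scalar damage variable d in [0, 1]
        d = (deltaf * (delta - delta0)) / (delta * (deltaf - delta0))
        return (1.0 - d) * k0 * delta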
Abstract:
Automatically recognizing faces captured under uncontrolled environments has been a challenging topic for decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on normalization performance. The experimental results show that a larger cohort set gives more stable and often better results up to the point where performance saturates, and that cohort samples of different quality indeed produce different normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. In the end, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
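As a pointer to how such a quality map can be computed in practice, a minimal sketch using scikit-image's structural_similarity; the 0.5 threshold and the assumption of aligned grayscale inputs are illustrative choices, not from the thesis.

    import numpy as np
    from skimage.metrics import structural_similarity

    def ssim_alteration_map(face_a, face_b):
        """Global SSIM score and local SSIM quality map for two
        aligned grayscale face images of equal shape."""
        score, ssim_map = structural_similarity(
            face_a, face_b,
            data_range=face_a.max() - face_a.min(),
            full=True,                        # also return the per-pixel map
        )
        # Low-similarity regions flag candidate local alterations
        altered_mask = ssim_map < 0.5         # illustrative threshold
        return score, ssim_map, altered_mask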
Abstract:
The International Society for Clinical Densitometry (ISCD) has developed new official positions for the clinical use of computed tomography (CT) scans acquired without a calibration phantom, for example, CT scans obtained for other diagnoses such as colonography. This also addresses techniques suggested for opportunistic screening of osteoporosis. The ISCD task force for quantitative CT reviewed the evidence for clinical applications of these new techniques and presented a report with recommendations at the 2015 ISCD Position Development Conference. Here we discuss the agreed-upon ISCD official positions with supporting medical evidence, rationale, controversy, and suggestions for further study. Advanced techniques summarized as statistical parameter mapping methods were also reviewed. Their future use is promising, but their clinical application is premature. The clinical use of QCT of the hip is addressed in part I, and finite element analysis of the hip and spine in part II.
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full-film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model, which ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of mixed-lubrication phenomena has been included in the modelling through the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. On this extensive mathematical modelling background, three significant contributions have been accomplished. Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication model was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
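For orientation, a minimal sketch of the core discretized problem: a 1D steady Reynolds equation for a linear slider bearing, solved by Gauss-Seidel sweeps with the simple p >= 0 (Christopherson) cavitation clamp. The thesis itself uses the mass-conserving p-θ Elrod-Adams model and an EbFVM discretization on unstructured grids, which are considerably more involved; all parameter values below are illustrative.

    import numpy as np

    # d/dx( h^3 dp/dx ) = 6 mu U dh/dx, with p = 0 at both ends.
    N   = 201                 # grid nodes
    L   = 0.1                 # bearing length [m]
    U   = 10.0                # sliding speed [m/s]
    mu  = 0.05                # dynamic viscosity [Pa s]
    h1, h2 = 50e-6, 25e-6     # inlet/outlet film thickness [m]

    x = np.linspace(0.0, L, N)
    h = h1 + (h2 - h1) * x / L          # linear converging film profile
    dx = x[1] - x[0]
    p = np.zeros(N)                     # gauge pressure

    for it in range(20000):
        p_old = p.copy()
        for i in range(1, N - 1):
            # h^3 evaluated at the staggered faces i+1/2 and i-1/2
            he = ((h[i] + h[i + 1]) / 2) ** 3
            hw = ((h[i] + h[i - 1]) / 2) ** 3
            rhs = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2 * dx)
            p[i] = (he * p[i + 1] + hw * p[i - 1] - rhs * dx**2) / (he + hw)
            p[i] = max(p[i], 0.0)       # crude (non-mass-conserving) cavitation clamp
        if np.max(np.abs(p - p_old)) < 1e-6:
            break

    print(f"peak pressure: {p.max():.0f} Pa after {it} sweeps")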
Abstract:
Lectures about the course module "Advanced techniques for the human eye study: ocular aberrometry".
Abstract:
This thesis presents an experimental investigation of two novel techniques which can be incorporated into current optical systems. These techniques have the capability to improve transmission performance and the recovery of the transmitted signal at the receiver. The experimental objectives are described and the results for each technique are presented in two sections. The first experimental section covers work on ultra-long Raman fibre lasers (ULRFLs). These fibre lasers have become an important research topic in recent years due to the significant improvement they give over lumped Raman amplification and their potential use in the development of systems with large bandwidths and very low losses. The experiments involved the use of ASK and DPSK modulation formats over a distance of 240 km, and DPSK over a distance of 320 km. These results are compared to the current state of the art and against other types of ultra-long transmission amplification techniques. The second technique investigated involves asymmetrical, or offset, filtering. This technique is important because it deals with the strong filtering regimes that are part of optical systems and networks in modern high-speed communications. It improves the received signal by offsetting the central frequency of a filter after the output of a delay line interferometer (DLI), which yields a significant improvement in BER and/or Q-values at the receiver and therefore an increase in signal quality. The experimental results are then assessed against the objectives of the work, and potential future work is discussed.
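Since BER and Q-value are used here as interchangeable figures of receiver quality, a small reference sketch of the standard conversion for an ideal binary receiver with Gaussian noise; modulation-specific penalties (e.g., for DPSK) are not modelled.

    import math

    def q_to_ber(q_linear):
        """BER for an ideal binary receiver with Gaussian noise, given linear Q."""
        return 0.5 * math.erfc(q_linear / math.sqrt(2.0))

    def ber_to_q_db(ber):
        """Invert numerically: Q (in dB, 20*log10) for a target BER."""
        lo, hi = 0.0, 20.0
        for _ in range(60):                 # bisection on the monotone map
            mid = 0.5 * (lo + hi)
            if q_to_ber(mid) > ber:
                lo = mid
            else:
                hi = mid
        return 20.0 * math.log10(0.5 * (lo + hi))

    print(q_to_ber(6.0))        # ~1e-9, the classic Q = 6 benchmark
    print(ber_to_q_db(1e-9))    # ~15.6 dB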
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
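To make the observer-model comparison concrete, a minimal empirical sketch of the non-prewhitening (NPW) matched-filter detectability index, estimated from ensembles of signal-present and signal-absent ROI images; this is a bare-bones illustration without the eye filter or channelization used in the more sophisticated model families named above.

    import numpy as np

    def npw_dprime(signal_imgs, noise_imgs):
        """NPW matched-filter detectability index from ensembles of
        signal-present and signal-absent ROIs, shape [n_images, ny, nx]."""
        # expected signal profile = difference of class means
        s = signal_imgs.mean(axis=0) - noise_imgs.mean(axis=0)
        w = s.ravel()                        # NPW template is the signal itself
        # template responses for each class
        t_sig   = signal_imgs.reshape(len(signal_imgs), -1) @ w
        t_noise = noise_imgs.reshape(len(noise_imgs), -1) @ w
        # d' from the separation of the two response distributions
        var = 0.5 * (t_sig.var(ddof=1) + t_noise.var(ddof=1))
        return (t_sig.mean() - t_noise.mean()) / np.sqrt(var)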
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
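For reference, a minimal sketch of the standard 2D NPS estimator for square noise-only ROIs; the dissertation's novel method generalizes this to irregularly shaped ROIs, which this sketch does not attempt.

    import numpy as np

    def nps_2d(rois, pixel_mm):
        """Standard ensemble 2D noise power spectrum from square
        noise-only ROIs of shape [n_rois, N, N], in HU^2 mm^2."""
        n, N, _ = rois.shape
        spectra = []
        for roi in rois:
            zero_mean = roi - roi.mean()      # remove the DC component
            dft = np.fft.fftshift(np.fft.fft2(zero_mean))
            spectra.append(np.abs(dft) ** 2)
        # ensemble average, scaled to physical units
        return (pixel_mm ** 2 / (N * N)) * np.mean(spectra, axis=0)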
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
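To suggest the flavor of such an analytical lesion model, a minimal sketch of one plausible instance: a spherical lesion with a sigmoid edge profile, voxelized as an additive HU map. The functional form, parameters, and the patient_vol placeholder are illustrative assumptions, not the dissertation's actual equations.

    import numpy as np

    def lesion_model(shape, center, radius, contrast_hu, edge_sigma):
        """Voxelize a simple analytical lesion: spherical core with a
        sigmoid edge profile, returned as an additive HU map."""
        zz, yy, xx = np.indices(shape).astype(float)
        r = np.sqrt((zz - center[0]) ** 2
                    + (yy - center[1]) ** 2
                    + (xx - center[2]) ** 2)
        # contrast_hu inside the core, decaying smoothly to 0 outside
        return contrast_hu / (1.0 + np.exp((r - radius) / edge_sigma))

    # hybrid image: add the model to a (hypothetical) patient CT volume in HU
    # patient_vol = ...
    # hybrid = patient_vol + lesion_model(patient_vol.shape,
    #                                     (40, 256, 256), 8.0, -15.0, 1.5)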
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Introduction: Difficult tracheal intubation remains a constant and significant source of morbidity and mortality in anaesthetic practice. Insufficient airway assessment in the preoperative period continues to be a major cause of unanticipated difficult intubation. Although many risk factors have already been identified, preoperative airway evaluation is not always regarded as a standard procedure, and the respective weight of each risk factor remains unclear. Moreover, the available predictive scores have low sensitivity and only moderate specificity, and are often operator-dependent. In order to improve the preoperative detection of patients at risk for difficult intubation, we developed a system for automated and objective evaluation of morphologic criteria of the face and neck using video recordings and advanced techniques borrowed from face recognition. Method and results: Frontal video sequences were recorded in 5 healthy volunteers. During the video recording, subjects were requested to perform maximal flexion-extension of the neck and to open the mouth wide with the tongue pulled out. A robust, real-time face tracking system was then applied, allowing a grid of 55 control points on the face to be automatically identified, mapped, and tracked during head motion. These points located important features of the face, such as the eyebrows, the nose, the contours of the eyes and mouth, and the external contours, including the chin. Moreover, based on this face tracking, the orientation of the head could be estimated at each frame of the video sequence. Thus, we could infer for each frame the pitch angle of the head pose (related to the vertical rotation of the head) and obtain the degree of head extension. Morphological criteria used in the most frequently cited predictive scores were also extracted, such as mouth opening, degree of visibility of the uvula, and thyromental distance. Discussion and conclusion: Preliminary results suggest that the technique is highly feasible. The next step will be the application of the same automated and objective evaluation to patients who will undergo tracheal intubation. The difficulties related to intubation will then be correlated with the biometric characteristics of the patients. The objective in mind is to analyze the biometric data with artificial intelligence algorithms to build a highly sensitive and specific predictive test.
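As an indication of how head pitch can be recovered from tracked 2D points, a minimal sketch using OpenCV's solvePnP with a generic six-landmark 3D face model. The study tracks a 55-point grid; the model coordinates, focal-length guess, and Euler-angle convention below are illustrative assumptions.

    import numpy as np
    import cv2

    # Generic 3D reference points for six facial landmarks (nose tip, chin,
    # eye corners, mouth corners), in arbitrary model units.
    MODEL_3D = np.array([
        (0.0,      0.0,    0.0),    # nose tip
        (0.0,   -330.0,  -65.0),    # chin
        (-225.0,  170.0, -135.0),   # left eye outer corner
        (225.0,   170.0, -135.0),   # right eye outer corner
        (-150.0, -150.0, -125.0),   # left mouth corner
        (150.0,  -150.0, -125.0),   # right mouth corner
    ], dtype=np.float64)

    def head_pitch_deg(landmarks_2d, frame_w, frame_h):
        """Pitch (degrees) from six tracked 2D landmarks, given as a
        (6, 2) float array ordered like MODEL_3D."""
        f = float(frame_w)          # crude focal-length guess in pixels
        cam = np.array([[f, 0, frame_w / 2],
                        [0, f, frame_h / 2],
                        [0, 0, 1]], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d, cam, None)
        rot, _ = cv2.Rodrigues(rvec)
        # rotation about the camera x-axis under the Rz*Ry*Rx convention
        return float(np.degrees(np.arctan2(rot[2, 1], rot[2, 2])))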
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease. Reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports to what extent target volume reduction with involved-node radiotherapy (INRT) and advanced radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT) and proton therapy, compared with involved-field radiotherapy (IFRT) and 3D radiotherapy (3D-RT), can reduce high doses to organs at risk (OAR), and examines the issues that still remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes when compared with IFRT, with the exact amount depending on the INRT definition. The number of patients that might significantly benefit from highly conformal techniques such as IMRT over 3D-RT regarding high-dose exposure to OAR is smaller with INRT. The impact of the larger volumes treated with low doses in advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance. All advanced photon techniques result in similar potential benefits and disadvantages; therefore, only the degree of modulation should be chosen based on individual treatment goals. Treatment in deep-inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages if the excellent control rates achieved currently are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient/target geometry. Their use should therefore be decided case by case with comparative treatment planning.
Abstract:
The history of cerebral aneurysm surgery owes a great tribute to the tenacity of the pioneering neurosurgeons who designed and developed the clips used to close the aneurysm's neck. However, until the beginning of the past century, surgery of complex and challenging aneurysms was impossible due to the lack of a surgical microscope and of commercially available sophisticated clips. The modern era of the spring clip began in the second half of the last century. Until then, only malleable metal clips and other non-metallic materials were available for intracranial aneurysms. Indeed, the earliest clips were hazardous and difficult to handle. Several neurosurgeons put great effort into developing new clip models, based on their personal experience in the treatment of cerebral aneurysms. Finally, the introduction of the surgical microscope, together with the availability of more sophisticated clips, has allowed the treatment of complex and challenging aneurysms. However, today none of the new instruments or tools for the surgical therapy of aneurysms could be used safely and effectively without keeping in mind the lessons on innovative surgical techniques provided by the great neurovascular surgeons. Thanks to their legacy, we can now treat many types of aneurysms that had always been considered inoperable. In this article, we review the basic principles of surgical clipping and illustrate some more advanced techniques to be used for complex aneurysms.
Abstract:
As wireless network technologies evolve towards an All-IP framework, Next Generation Wireless Communication Devices demand better use of spectral resources by employing advanced techniques of silence suppression. This paper presents an analysis of VoIP call data and compares the statistical results based on observed patterns of talk spurts and silence lengths to those achieved by a modified on-off voice model for silence suppression in wireless networks. As talk spurts and silence lengths are sensitive to varying word lengths, temporal structure and other prosodic aspects of speech, the impact of the use of various languages, dialects and gender of speakers on these results is also assessed.
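For reference, a minimal simulation sketch of the classic exponential on-off voice model that the paper's modified model builds on; the mean talk-spurt and silence durations below are illustrative defaults, not the paper's measured values.

    import random

    def simulate_on_off(duration_s, mean_talk=1.0, mean_silence=1.35, seed=42):
        """Simulate the classic exponential on-off voice model and return
        the fraction of time spent in talk spurts (voice activity factor)."""
        rng = random.Random(seed)
        t, talking, talk_time = 0.0, True, 0.0
        while t < duration_s:
            mean = mean_talk if talking else mean_silence
            span = min(rng.expovariate(1.0 / mean), duration_s - t)
            if talking:
                talk_time += span
            t += span
            talking = not talking       # alternate talk spurt / silence
        return talk_time / duration_s

    print(simulate_on_off(3600))        # ~0.43 with these illustrative means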
Abstract:
This thesis attempts a psychological investigation of hemispheric functioning in developmental dyslexia. Previous work using neuropsychological methods with developmental dyslexics is reviewed, and original work is presented, both of a conventional psychometric nature and also utilising a new means of intervention. At the inception of inquiry into dyslexia, comparisons were drawn between developmental dyslexia and acquired alexia, promoting a model of brain damage as the common cause. Subsequent investigators found developmental dyslexics to be neurologically intact, and so an alternative hypothesis was offered, namely that language is abnormally localized (not in the left hemisphere). Research in the last decade, using the advanced techniques of modern neuropsychology, has indicated that developmental dyslexics are probably left hemisphere dominant for language. The development of a new type of pharmaceutical preparation (that appears to have a left hemisphere effect) offers an opportunity to test the experimental hypothesis. This hypothesis propounds that most dyslexics are left hemisphere language dominant, but that some of these language-related operations are dysfunctioning. The methods utilised are those of psychological assessment of cognitive function, both in a traditional psychometric situation and with a new form of intervention (Piracetam). The information resulting from intervention is judged on its therapeutic validity and its contribution to the understanding of hemispheric functioning in dyslexics. The experimental studies using conventional psychometric evaluation revealed a dyslexic profile of poor sequencing and name-coding ability, with adequate spatial and verbal reasoning skills. Neuropsychological information would tend to suggest that this profile was indicative of adequate right hemisphere abilities and deficits in some left hemisphere abilities. When an intervention agent (Piracetam) was used with young adult dyslexics, there were improvements in both the rate of acquisition and the conservation of verbal learning. An experimental study with dyslexic children revealed that Piracetam appeared to improve reading, writing and sequencing, but did not influence spatial abilities. This would seem to concord with other recent findings that developmental dyslexics may have left hemisphere language localisation, although some of these language-related abilities are dysfunctioning.
Abstract:
Background: Recent advances in laparoscopic devices and experience with advanced techniques have increased the indications for laparoscopic liver resection. Aim: The aim of this work was to present a video with the technical aspects of a pure laparoscopic left hemihepatectomy (segments 2, 3, and 4) using the intrahepatic Glissonian approach and control of venous outflow, without hilar dissection or the Pringle maneuver. Patient and Method: A 63-year-old woman with a 5-cm solitary liver metastasis was referred for treatment. Four trocars were used. The left lobe was pulled upward and the lesser omentum was divided, exposing Arantius' ligament. This ligament is a useful landmark for the identification of the main left Glissonian pedicle. A small anterior incision was made in front of the hilum, and a large clamp was introduced behind Arantius' ligament toward the anterior incision, allowing control of the left main sheath. Ischemic discoloration of the left liver was achieved and marked with cautery. The vascular clamp was then replaced by a stapler; once the ischemic delineation coincided with the previously marked area, the stapler was fired. The left hepatic vein was dissected and encircled. Parenchymal transection and vascular control of the hepatic veins were accomplished with a Harmonic scalpel and an endoscopic stapling device, as appropriate. All these steps were performed without the Pringle maneuver and without hand assistance. Results: Operative time was 220 minutes with minimal blood loss. Hospital stay was 4 days. Pathology showed free surgical margins. The patient is alive with no signs of recurrence 18 months after the operation. Conclusion: Totally laparoscopic left hemihepatectomy is safe and feasible in selected patients and should be considered for patients with benign or malignant liver neoplasms. The described technique, with the use of the intrahepatic Glissonian approach and control of venous outflow, may facilitate laparoscopic left hemihepatectomy by reducing the technical difficulties of pedicle control and may decrease bleeding during liver transection.
Abstract:
One of the most important recent improvements in cardiology is the use of ventricular assist devices (VADs) to help patients with severe heart diseases, especially those indicated for heart transplantation. The Institute Dante Pazzanese of Cardiology has been developing an implantable centrifugal blood pump that will be able to help a sick human heart keep blood flow and pressure at physiological levels. This device will be used as a totally or partially implantable VAD. An improvement in device performance is therefore important to raise the level of interaction with the patient's behavior and condition, since failures may occur if the device's pumping control does not follow changes in the patient's state. The VAD control system must be fault-tolerant and adapt dynamically to changes in the patient's cardiovascular system, condition, and behavior. This work proposes an application of the mechatronic approach to this class of devices, based on advanced techniques for control, instrumentation, and automation, to define a method for developing a hierarchical supervisory control system that is able to perform VAD control dynamically, automatically, and securely. In this methodology, we used concepts based on Bayesian networks for patient diagnosis, Petri nets to generate the VAD control algorithm, and Safety Instrumented Systems to ensure VAD system security. Applying these concepts, a VAD control system is being built to confirm the effectiveness of the method.
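To make the Petri-net part concrete, a minimal self-contained sketch of token-firing semantics for a hypothetical supervisory step; the place and transition names are illustrative, not from the paper.

    # Places hold tokens (system states); a transition fires only when
    # every one of its input places is marked.
    marking = {"pump_normal": 1, "low_flow_alarm": 0, "safe_mode": 0}

    transitions = {
        # transition name: (input places, output places)
        "detect_low_flow": (["pump_normal"], ["low_flow_alarm"]),
        "enter_safe_mode": (["low_flow_alarm"], ["safe_mode"]),
    }

    def fire(name):
        """Fire a transition if enabled (all input places hold a token)."""
        inputs, outputs = transitions[name]
        if all(marking[p] > 0 for p in inputs):
            for p in inputs:
                marking[p] -= 1      # consume input tokens
            for p in outputs:
                marking[p] += 1      # produce output tokens
            return True
        return False

    fire("detect_low_flow")   # low-flow event observed
    fire("enter_safe_mode")   # supervisor drives the pump to a safe state
    print(marking)            # {'pump_normal': 0, 'low_flow_alarm': 0, 'safe_mode': 1}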