929 results for human-computer visualization
Abstract:
Genetic and biochemical studies have suggested the existence of a bacteriophage-like, DNA-packaging/ejecting portal complex in herpesvirus capsids, but its arrangement remained unknown. Here, we report the first visualization of a unique vertex in the Kaposi's sarcoma-associated herpesvirus (KSHV) capsid by cryoelectron tomography, thus providing direct structural evidence for the existence of a portal complex in a gammaherpesvirus. This putative KSHV portal is an internally localized, umbilicated structure and lacks all of the external machinery characteristic of portals in DNA bacteriophages.
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, distance Wagner method, transformed distance method, Fitch-Margoliash's method, maximum parsimony method, and compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable to either distance data or character state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method at recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to that obtained by the maximum parsimony method and the neighbor-joining method when varying evolutionary rates are assumed.
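The neighbor-joining method summarized above admits a compact distance-based sketch: repeatedly join the pair of nodes minimizing the Q criterion, then replace the pair with an internal node and update distances. The following is an illustrative implementation (function and variable names are ours, not from the abstract), assuming a symmetric distance matrix stored as a dict keyed by unordered pairs:

```python
# Minimal neighbor-joining sketch (illustrative; not the authors' code).
# Joins the pair (a, b) minimizing Q(a, b) = (n - 2) d(a, b) - r(a) - r(b),
# where r(x) is the sum of distances from x to all other current nodes,
# then replaces a and b with an internal node until two nodes remain.

def neighbor_joining(labels, d):
    """labels: list of taxon names; d: dict frozenset({a, b}) -> distance.
    Returns the unrooted topology as nested tuples."""
    nodes = list(labels)
    while len(nodes) > 2:
        n = len(nodes)
        r = {a: sum(d[frozenset((a, b))] for b in nodes if b != a)
             for a in nodes}
        # Pick the pair minimizing the Q criterion.
        a, b = min(((x, y) for i, x in enumerate(nodes) for y in nodes[i + 1:]),
                   key=lambda p: (n - 2) * d[frozenset(p)] - r[p[0]] - r[p[1]])
        new = (a, b)  # internal node represented as a tuple
        # Distance from the new node to every remaining node.
        for c in nodes:
            if c not in (a, b):
                d[frozenset((new, c))] = 0.5 * (d[frozenset((a, c))]
                                                + d[frozenset((b, c))]
                                                - d[frozenset((a, b))])
        nodes = [c for c in nodes if c not in (a, b)] + [new]
    return tuple(nodes)
```

On an additive four-taxon matrix with two close pairs, the sketch recovers the expected ((A,B),(C,D)) topology; branch-length estimation, which the full method also provides, is omitted for brevity.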
Abstract:
Quantitative computed tomography (QCT)-based finite element (FE) models of the vertebral body provide better prediction of vertebral strength than dual-energy X-ray absorptiometry. However, most models were validated against compression of vertebral bodies with endplates embedded in polymethylmethacrylate (PMMA). Yet, since loading is as important as bone density, the absence of the intervertebral disc (IVD) affects the measured strength. Accordingly, the aim was to assess the strength predictions of the classical FE models (vertebral body embedded) against the in vitro and in silico strengths of vertebral bodies loaded via IVDs. High-resolution peripheral QCT (HR-pQCT) scans were performed on 13 segments (T11/T12/L1). T11 and L1 were augmented with PMMA and the samples were tested under a 4° wedge compression until failure of T12. A specimen-specific model was generated for each T12 from the HR-pQCT data. Two FE sets were created: FE-PMMA refers to the classical vertebral-body-embedded model under axial compression; FE-IVD refers to loading via a hyperelastic IVD model under the wedge compression as conducted experimentally. Results showed that FE-PMMA models overestimated the experimental strength, although their strength prediction was satisfactory considering the different experimental set-up. On the other hand, the FE-IVD models did not prove significantly better (Exp/FE-PMMA: R²=0.68; Exp/FE-IVD: R²=0.71, p=0.84). In conclusion, FE-PMMA correlates well with the in vitro strength of human vertebral bodies loaded via real IVDs, and FE-IVD with hyperelastic IVDs does not significantly improve this correlation. Therefore, it seems not worthwhile to add IVDs to vertebral body models until fully validated patient-specific IVD models become available.
Abstract:
OBJECTIVES To find the best pairing of first and second reader at the highest sensitivity for detecting lung nodules with CT at various dose levels. MATERIALS AND METHODS An anthropomorphic lung phantom and artificial lung nodules were used to simulate screening CT examinations at standard dose (100 mAs, 120 kVp) and 8 different low-dose levels, using 120, 100 and 80 kVp combined with 100, 50 and 25 mAs. At each dose level, 40 phantoms were randomly filled with 75 solid and 25 ground-glass nodules (5-12 mm). Two radiologists and 3 different computer-aided detection (CAD) software packages were paired to find the highest sensitivity. RESULTS Sensitivities at standard dose were 92%, 90%, 84%, 79% and 73% for reader 1, reader 2, CAD1, CAD2 and CAD3, respectively. Combined sensitivity for human readers 1 and 2 improved to 97% (p1=0.063, p2=0.016). The highest sensitivities--between 97% and 99%--were achieved by combining any radiologist with any CAD at any dose level. Combining any two CADs yielded sensitivities between 85% and 88%, significantly lower than for radiologists combined with CAD (p<0.03). CONCLUSIONS The combination of a human observer with any of the tested CAD systems provides optimal sensitivity for lung nodule detection, even at a dose reduced to 25 mAs/80 kVp.
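The gain from pairing readers reported above is consistent with the standard independence approximation for OR-rule double reading: if each observer misses a nodule independently, the combined sensitivity is 1 − (1 − s₁)(1 − s₂). A quick sketch (the independence assumption is ours, not the study's analysis; real observers' errors are usually correlated, so this is an upper bound, which is why the two human readers combined reached 97% rather than the ~99% independence would predict):

```python
def combined_sensitivity(s1, s2):
    """Sensitivity of OR-rule double reading (a nodule counts as detected
    if either observer finds it), assuming independent misses."""
    return 1.0 - (1.0 - s1) * (1.0 - s2)

# Reader 1 (92%) paired with CAD1 (84%) predicts ~98.7% under independence,
# in line with the 97-99% range reported for radiologist + CAD pairings.
```

The same formula explains why two CADs combined stay low: their misses overlap (correlated errors), so the observed 85-88% falls well short of the independence estimate.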
Abstract:
Osteoporosis-related vertebral fractures represent a major health problem in elderly populations. Such fractures can often only be diagnosed after a substantial deformation history of the vertebral body. Therefore, it remains a challenge for clinicians to distinguish between stable and progressive, potentially harmful fractures. Accordingly, novel criteria for selection of the appropriate conservative or surgical treatment are urgently needed. Computed tomography-based finite element analysis is an increasingly accepted method to predict quasi-static vertebral strength and to follow up this small-strain property longitudinally in time. A recent development in constitutive modeling allows us to simulate strain localization and densification in trabecular bone under large compressive strains without mesh dependence. The aim of this work was to validate this recently developed constitutive model of trabecular bone for the prediction of strain localization and densification in the human vertebral body subjected to large compressive deformation. A custom-made stepwise loading device mounted in a high-resolution peripheral computed tomography system was used to describe the progressive collapse of 13 human vertebrae under axial compression. Continuum finite element analyses of the 13 compression tests were performed and the zones of high volumetric strain were compared with the experiments. A fair qualitative correspondence of the strain localization zone between experiment and finite element analysis was achieved in 9 out of 13 tests, and significant correlations of the volumetric strains were obtained throughout the range of applied axial compression. Interestingly, the stepwise propagating localization zones in trabecular bone converged to the buckling locations in the cortical shell.
While the adopted continuum finite element approach still suffers from several limitations, these encouraging preliminary results towards the prediction of extended vertebral collapse may help in assessing fracture stability in future work.
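The volumetric strain compared between experiment and simulation above is, in a large-deformation setting, commonly derived from the determinant of the deformation gradient (J = det F, with ε_v = J − 1). A minimal sketch of that kinematic quantity (the study's actual constitutive model is far more elaborate; this only illustrates the measure being compared):

```python
def volumetric_strain(F):
    """Volumetric strain J - 1 from a 3x3 deformation gradient F (J = det F).
    Negative values indicate compaction/densification of the tissue."""
    a, b, c = F
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det - 1.0

# Uniform 20% axial compression, F = diag(1, 1, 0.8): volumetric strain -0.2.
# Pure shear leaves det F = 1, so the volumetric strain is zero.
```

Mapping this scalar field over the mesh is what produces the "zones of high volumetric strain" that were compared with the experimentally observed localization bands.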
Abstract:
BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine whether improvements through registration allow respiratory triggering to be omitted. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10-700 s/mm2) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). For comparison of data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for triggered as well as non-triggered scans (P < 0.05). SDROI for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images leads to clearly reduced signal variations in renal DTI for both triggered and, particularly, non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous. J. Magn. Reson. Imaging 2014. (c) 2014 Wiley Periodicals, Inc.
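The diffusion fitting whose RMSE is evaluated above can be illustrated with the simplest case: a mono-exponential signal decay S(b) = S0·exp(−b·ADC), fitted log-linearly across b-values. This is a generic sketch of ADC estimation under our own simplifying assumption of no perfusion compartment; the study's actual model additionally separates the perfusion fraction (FP) and fits the full diffusion tensor:

```python
import math

def fit_adc(bvals, signals):
    """Log-linear least-squares fit of S(b) = S0 * exp(-b * ADC).
    Returns (S0, ADC). Mono-exponential sketch only: the perfusion
    contribution (FP) and tensor anisotropy (FA) from the study are ignored."""
    ys = [math.log(s) for s in signals]        # linearize: ln S = ln S0 - b*ADC
    n = len(bvals)
    mb = sum(bvals) / n
    my = sum(ys) / n
    slope = (sum((b - mb) * (y - my) for b, y in zip(bvals, ys))
             / sum((b - mb) ** 2 for b in bvals))
    return math.exp(my - slope * mb), -slope   # S0 from intercept, ADC = -slope
```

Unregistered respiratory motion shifts the kidney between b-value acquisitions, so each voxel's signals come from slightly different tissue; the resulting poor fit is exactly what the RMSE metric captures, and registration reduces it.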
Abstract:
PURPOSE Radiolabelled glucagon-like peptide 1 (GLP-1) receptor agonists have recently been shown to successfully image benign insulinomas in patients. For the somatostatin receptor targeting of tumours, however, it was recently reported that antagonist tracers were superior to agonist tracers. The present study therefore evaluated various forms of the (125)iodinated Bolton-Hunter (BH)-exendin(9-39) antagonist tracer for the in vitro visualization of GLP-1 receptor-expressing tissues in rats and humans and compared it with the agonist tracer (125)I-GLP-1(7-36)amide. METHODS Receptor autoradiography studies with (125)I-GLP-1(7-36)amide agonist or (125)I-BH-exendin(9-39) antagonist radioligands were performed in human and rat tissues. RESULTS The antagonist (125)I-BH-exendin(9-39) labelled at lysine 19 identifies all human and rat GLP-1 target tissues and GLP-1 receptor-expressing tumours. Binding is of high affinity, and its binding properties are comparable in all tested tissues with those of the agonist tracer (125)I-GLP-1(7-36)amide. For comparison, (125)I-BH-exendin(9-39) with the BH labelled at lysine 4 did identify the GLP-1 receptor in rat tissues but not in human tissues. CONCLUSION The GLP-1 receptor antagonist exendin(9-39) labelled with (125)I-BH at lysine 19 is an excellent GLP-1 radioligand that identifies human and rat GLP-1 receptors in normal and tumoural tissues. It may therefore serve as the molecular basis for developing suitable GLP-1 receptor antagonist radioligands for in vivo imaging of GLP-1 receptor-expressing tissues in patients.
Abstract:
The article proposes granular computing as a theoretical, formal and methodological basis for the newly emerging research field of human–data interaction (HDI). We argue that the ability to represent and reason with information granules is a prerequisite for data legibility. As such, it allows the research agenda of HDI to be extended to encompass the topic of collective intelligence amplification, which is seen as an opportunity afforded by today’s increasingly pervasive computing environments. As an example of collective intelligence amplification in HDI, we introduce a collaborative urban planning use case in a cognitive city environment and show how an iterative process of user input and human-oriented automated data processing can support collective decision making. As a basis for automated human-oriented data processing, we use the spatial granular calculus of granular geometry.
Abstract:
The rapid development of computed tomography (CT) and magnetic resonance imaging (MRI) prompted the idea of using these techniques for the postmortem documentation of forensic findings. Until now, only a few institutes of forensic medicine have acquired experience in postmortem cross-sectional imaging. Protocols, image interpretation and visualization have to be adapted to postmortem conditions. In particular, postmortem alterations such as putrefaction and livores, the different temperature of the corpse and the loss of circulation pose a challenge for the imaging process and its interpretation. Advantages of postmortem imaging are the higher exposure and resolution available in CT when there is no concern for biologic effects of ionizing radiation, and the lack of cardiac motion artifacts during scanning. CT and MRI may become useful tools for postmortem documentation in forensic medicine. In Bern, 80 human corpses underwent postmortem CT and MRI imaging prior to traditional autopsy up to August 2003. Here, we describe the imaging appearance of postmortem alterations--internal livores, putrefaction, postmortem clotting--and distinguish them from forensic findings of the heart, such as calcification, endocarditis, myocardial infarction, myocardial scarring, injury and other morphological alterations.
Abstract:
Background: Sensor-based recordings of human movements are becoming increasingly important for the assessment of motor symptoms in neurological disorders beyond rehabilitative purposes. ASSESS MS is a movement recording and analysis system being developed to automate the classification of motor dysfunction in patients with multiple sclerosis (MS) using depth-sensing computer vision. It aims to provide a more consistent and finer-grained measurement of motor dysfunction than currently possible. Objective: To test the usability and acceptability of ASSESS MS with health professionals and patients with MS. Methods: A prospective, mixed-methods study was carried out at 3 centers. After a 1-hour training session, a convenience sample of 12 health professionals (6 neurologists and 6 nurses) used ASSESS MS to capture recordings of standardized movements performed by 51 volunteer patients. Metrics for effectiveness, efficiency, and acceptability were defined and used to analyze data captured by ASSESS MS, video recordings of each examination, feedback questionnaires, and follow-up interviews. Results: All health professionals were able to complete recordings using ASSESS MS, achieving high levels of standardization on 3 of 4 metrics (movement performance, lateral positioning, and clear camera view but not distance positioning). Results were unaffected by patients’ level of physical or cognitive disability. ASSESS MS was perceived as easy to use by both patients and health professionals with high scores on the Likert-scale questions and positive interview commentary. ASSESS MS was highly acceptable to patients on all dimensions considered, including attitudes to future use, interaction (with health professionals), and overall perceptions of ASSESS MS. Health professionals also accepted ASSESS MS, but with greater ambivalence arising from the need to alter patient interaction styles. 
There was little variation in results across participating centers, and no differences between neurologists and nurses. Conclusions: In typical clinical settings, ASSESS MS is usable and acceptable to both patients and health professionals, generating data of a quality suitable for clinical analysis. An iterative design process appears to have been successful in accounting for factors that permit ASSESS MS to be used by a range of health professionals in new settings with minimal training. The study shows the potential of shifting ubiquitous sensing technologies from research into the clinic through a design approach that gives appropriate attention to the clinic environment.
Abstract:
Femoroacetabular impingement (FAI) is a dynamic conflict of the hip defined by a pathological, early abutment of the proximal femur onto the acetabulum or pelvis. In the past two decades, FAI has received increasing focus in both research and clinical practice as a cause of hip pain and prearthrotic deformity. Anatomical abnormalities such as an aspherical femoral head (cam-type FAI), a focal or general overgrowth of the acetabulum (pincer-type FAI), a high riding greater or lesser trochanter (extra-articular FAI), or abnormal torsion of the femur have been identified as underlying pathomorphologies. Open and arthroscopic treatment options are available to correct the deformity and to allow impingement-free range of motion. In routine practice, diagnosis and treatment planning of FAI is based on clinical examination and conventional imaging modalities such as standard radiography, magnetic resonance arthrography (MRA), and computed tomography (CT). Modern software tools allow three-dimensional analysis of the hip joint by extracting pelvic landmarks from two-dimensional antero-posterior pelvic radiographs. An object-oriented cross-platform program (Hip2Norm) has been developed and validated to standardize pelvic rotation and tilt on conventional AP pelvis radiographs. It has been shown that Hip2Norm is an accurate, consistent, reliable and reproducible tool for the correction of selected hip parameters on conventional radiographs. In contrast to conventional imaging modalities, which provide only static visualization, novel computer assisted tools have been developed to allow the dynamic analysis of FAI pathomechanics. In this context, a validated, CT-based software package (HipMotion) has been introduced. HipMotion is based on polygonal three-dimensional models of the patient’s pelvis and femur. The software includes simulation methods for range of motion, collision detection and accurate mapping of impingement areas. 
A preoperative treatment plan can be created by performing a virtual resection of any mapped impingement zones both on the femoral head-neck junction and on the acetabular rim, using the same three-dimensional models. The following book chapter provides a summarized description of current computer-assisted tools for the diagnosis and treatment planning of FAI, highlighting the possibilities for both static and dynamic evaluation, their reliability and reproducibility, and their applicability to routine clinical use.
Abstract:
The question concerning the circumstances under which it is advantageous for a company to outsource certain information systems functions has been a controversial issue for the last decade. While opponents emphasize the risks of outsourcing based on the loss of strategic potential and increased transaction costs, proponents emphasize the strategic benefits of outsourcing and its high potential for cost savings. This paper brings together both views by examining the conditions under which both the strategic potential as well as savings in the production and transaction costs of developing and maintaining software applications can better be achieved in-house as opposed to by an external vendor. We develop a theoretical framework from three complementary theories and test it empirically based on a mail survey of 139 German companies. The results show that insourcing is more cost-efficient and more advantageous in creating strategic benefits through IS if the provision of application services requires a high amount of firm-specific human assets. These relationships, however, are partially moderated by differences in the trustworthiness and intrinsic motivation of internal versus external IS professionals. Moreover, capital shares with an external vendor can lower the risk of high transaction costs as well as the risk of losing the strategic opportunities of an IS.
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers in delivering multimedia data at an acceptable quality level based on user experience. Moreover, human experience and context, as well as network information, play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends for providing video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identify five trends arising from this integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.