43 results for "Curricula representation and visualization"


Relevance: 100.00%

Abstract:

Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
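The accuracy measure described above (the distance from the planned target to the actual needle tip) is a simple Euclidean distance. The Python sketch below shows how such per-lesion errors could be computed and averaged per tissue group; the coordinates and the bone/soft-tissue grouping are invented for illustration and are not data from the study.

import numpy as np

def user_error(planned_targets, needle_tips):
    # Euclidean distance (mm) between each planned target and the
    # corresponding needle-tip position; both arguments are (N, 3) arrays.
    return np.linalg.norm(np.asarray(planned_targets) - np.asarray(needle_tips), axis=1)

# Hypothetical coordinates in millimetres (illustrative only).
planned = np.array([[10.0, 22.0, 5.0], [14.0, 30.0, 8.0], [40.0, 12.0, 20.0]])
tips = np.array([[12.5, 24.0, 6.0], [15.0, 33.0, 9.0], [41.0, 14.5, 21.0]])
tissue = np.array(["bone", "soft", "soft"])

errors = user_error(planned, tips)
for group in ("bone", "soft"):
    print(group, "mean error: %.2f mm" % errors[tissue == group].mean())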

Relevance: 100.00%

Abstract:

The objective of this study was to explore the perception of the legal authorities regarding different report types and visualization techniques for post-mortem radiological findings.

Relevance: 100.00%

Abstract:

PURPOSE: To prospectively evaluate whether intravenous morphine co-medication improves bile duct visualization of dual-energy CT-cholangiography. MATERIALS AND METHODS: Forty potential donors for living-related liver transplantation underwent CT-cholangiography with infusion of a hepatobiliary contrast agent over 40 min. Twenty minutes after the beginning of the contrast agent infusion, either normal saline (n=20 patients; control group [CG]) or morphine sulfate (n=20 patients; morphine group [MG]) was injected. Forty-five minutes after initiation of the contrast agent, a dual-energy CT acquisition of the liver was performed. Applying dual-energy post-processing, pure iodine images were generated. Primary study goals were determination of bile duct diameters and visualization scores (on a scale of 0 to 3: 0 = not visualized; 3 = excellent visualization). RESULTS: Bile duct visualization scores for second-order and third-order branch ducts were significantly higher in the MG than in the CG (2.9 ± 0.1 versus 2.6 ± 0.2 [P<0.001] and 2.7 ± 0.3 versus 2.1 ± 0.6 [P<0.01], respectively). Bile duct diameters for the common duct and main ducts were significantly larger in the MG than in the CG (5.9 ± 1.3 mm versus 4.9 ± 1.3 mm [P<0.05] and 3.7 ± 1.3 mm versus 2.6 ± 0.5 mm [P<0.01], respectively). CONCLUSION: Intravenous morphine co-medication significantly improved biliary visualization on dual-energy CT-cholangiography in potential donors for living-related liver transplantation.
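The abstract states only that pure iodine images were generated by dual-energy post-processing. As a rough illustration of the general principle (not the vendor-specific post-processing used in the study), the Python sketch below performs an image-domain two-material decomposition, solving a small linear system per voxel; the basis-material values in the matrix A are placeholders, since real coefficients depend on the scanner, the kVp pair and the contrast agent.

import numpy as np

# Placeholder basis responses (illustrative only): rows are the low- and
# high-kVp acquisitions, columns the water and iodine basis materials.
A = np.array([[1.00, 30.0],
              [1.00, 15.0]])

def iodine_map(low_kvp_img, high_kvp_img):
    # For every voxel, solve [mu_low, mu_high]^T = A @ [c_water, c_iodine]^T
    # and keep the iodine component as the "pure iodine" image.
    mu = np.stack([low_kvp_img.ravel(), high_kvp_img.ravel()])  # shape (2, N)
    coeffs = np.linalg.solve(A, mu)                             # shape (2, N)
    return coeffs[1].reshape(low_kvp_img.shape)

# Tiny synthetic example: one voxel with iodine, one without.
low = np.array([[80.0, 10.0]])
high = np.array([[40.0, 10.0]])
print(iodine_map(low, high))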

Relevance: 100.00%

Abstract:

The calculation of projection structures (PSs) from Protein Data Bank (PDB)-coordinate files of membrane proteins is not well-established. Reports on such attempts exist but are rare. In addition, the different procedures are barely described and thus difficult if not impossible to reproduce. Here we present a simple, fast and well-documented method for the calculation and visualization of PSs from PDB-coordinate files of membrane proteins: the projection structure visualization (PSV)-method. The PSV-method was successfully validated using the PS of aquaporin-1 (AQP1) from 2D crystals and cryo-transmission electron microscopy, and the PDB-coordinate file of AQP1 determined from 3D crystals and X-ray crystallography. Besides AQP1, which is a relatively rigid protein, we also studied a flexible membrane transport protein, i.e. the L-arginine/agmatine antiporter AdiC. Comparison of PSs calculated from the existing PDB-coordinate files of substrate-free and L-arginine-bound AdiC indicated that conformational changes are detected in projection. Importantly, structural differences were found between the PSV-method calculated PSs of the detergent-solubilized AdiC proteins and the PS from cryo-TEM of membrane-embedded AdiC. These differences are particularly exciting since they may reflect a different conformation of AdiC induced by the lateral pressure in the lipid bilayer.
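The PSV-method itself is not reproduced here, but the underlying idea (projecting the atomic coordinates of a membrane protein onto the membrane plane and gridding them into a 2D map) can be sketched in a few lines of Python. The sketch below is an illustration under simplifying assumptions, not the published implementation: it assumes the membrane normal is aligned with the z-axis of the PDB-coordinate file, weights all atoms equally, and uses arbitrary pixel size and smoothing parameters; the file name is hypothetical.

import numpy as np
from scipy.ndimage import gaussian_filter
from Bio.PDB import PDBParser

def projection_map(pdb_file, pixel_size=1.0, sigma=2.0):
    # Project all atom positions onto the x-y (membrane) plane and bin them
    # into a 2D density map; Gaussian smoothing mimics limited resolution.
    structure = PDBParser(QUIET=True).get_structure("protein", pdb_file)
    xyz = np.array([atom.get_coord() for atom in structure.get_atoms()])
    x, y = xyz[:, 0], xyz[:, 1]
    bins_x = np.arange(x.min(), x.max() + pixel_size, pixel_size)
    bins_y = np.arange(y.min(), y.max() + pixel_size, pixel_size)
    density, _, _ = np.histogram2d(x, y, bins=[bins_x, bins_y])
    return gaussian_filter(density, sigma=sigma)

# Usage with a hypothetical AQP1 coordinate file downloaded from the PDB:
# proj = projection_map("aqp1.pdb")
# import matplotlib.pyplot as plt; plt.imshow(proj.T, origin="lower"); plt.show()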

Relevance: 100.00%

Abstract:

Using postmortem multislice computed tomography (MSCT) and magnetic resonance imaging (MRI), 40 forensic cases were examined and the findings were verified by subsequent autopsy. Results were classified as follows: (I) cause of death, (II) relevant traumatological and pathological findings, (III) vital reactions, (IV) reconstruction of injuries, (V) visualization. In these 40 forensic cases, 47 partly combined causes of death were diagnosed at autopsy; 26 (55%) of these causes of death were found independently using radiological image data alone. Radiology was superior to autopsy in revealing certain cases of cranial, skeletal, or tissue trauma. Some forensic vital reactions were diagnosed equally well or better using MSCT/MRI. Radiological imaging techniques are particularly beneficial for the reconstruction and visualization of forensic cases, including the opportunity to use the data for expert witness reports, teaching, quality control, and telemedical consultation. These preliminary results, based on the concept of "virtopsy," are promising enough to introduce and evaluate these radiological techniques in forensic medicine.

Relevance: 100.00%

Abstract:

For the main part, electronic government (or e-government for short) aims to put digital public services at the disposal of citizens, companies, and organizations. To that end, e-government in particular comprises the application of Information and Communications Technology (ICT) to support government operations and to provide better governmental services than are possible with traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes in which public services are generated and delivered, thereby transforming the entire spectrum of relationships of public bodies with their citizens, businesses and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform the citizen, business, and/or other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by handling the participants' data appropriately, a personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of tailored knowledge structure for the participants, a better-quality governmental service may be provided (i.e., individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the relationship between government and participants. The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, will be briefly introduced and discussed.
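As a toy illustration of the fuzzy knowledge structures mentioned above (and not of the Web KnowARR implementation itself), the Python sketch below builds a participant profile in which each concept carries a membership degree in [0, 1]: crisp facts receive degree 1.0, vague information extracted from Web content a lower degree, and notifications are triggered only for sufficiently strong interests. All concepts, degrees and the threshold are invented.

def merge_profiles(crisp, vague):
    # Combine crisp and vague knowledge; where both mention a concept,
    # keep the larger membership degree (a simple fuzzy union).
    profile = dict(crisp)
    for concept, degree in vague.items():
        profile[concept] = max(profile.get(concept, 0.0), degree)
    return profile

crisp_facts = {"registered_in_municipality_X": 1.0, "owns_business": 1.0}
vague_facts = {"interested_in_building_permits": 0.7, "plans_relocation": 0.4}

profile = merge_profiles(crisp_facts, vague_facts)

# Inform the participant only about changes relevant to strong enough interests.
relevant = [concept for concept, degree in profile.items() if degree >= 0.5]
print(relevant)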

Relevance: 100.00%

Abstract:

Due to the increasing amount of data, knowledge aggregation, representation and reasoning are highly important for companies. In this paper, knowledge aggregation is presented as the first step. Subsequently, successful knowledge representation, for instance through graphs, enables knowledge-based reasoning. There exist various forms of knowledge representation through graphs, some of which allow uncertainty and imprecision to be handled by invoking the technology of fuzzy sets. The paper provides an overview of different types of graphs, stressing their relationships and their essential features.
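One of the graph types alluded to above is the fuzzy graph, in which edges carry membership degrees in [0, 1]. The following Python sketch, with invented example data, shows how such a graph supports a simple form of reasoning: the strength of the connection between two concepts is taken as the best path strength under max-min composition. It is a generic illustration, not code from the paper.

# Directed fuzzy graph: each edge carries a membership degree in [0, 1].
edges = {
    ("customer", "order"): 0.9,
    ("order", "product"): 0.8,
    ("product", "supplier"): 0.6,
    ("customer", "complaint"): 0.4,
    ("complaint", "product"): 0.7,
}

def connection_strength(src, dst, visited=frozenset()):
    # Max-min composition: the strength of a path is its weakest edge, and
    # the connection strength is the maximum over all cycle-free paths.
    best = 0.0
    for (a, b), degree in edges.items():
        if a == src and b not in visited:
            if b == dst:
                best = max(best, degree)
            else:
                best = max(best, min(degree, connection_strength(b, dst, visited | {a})))
    return best

print(connection_strength("customer", "supplier"))  # 0.6, via order -> product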

Relevance: 100.00%

Abstract:

Several studies have shown that metacognitive ability is crucial for children and their success in school. Much less is known about the emergence of that ability and its relationship to other meta-representations such as Theory of Mind competencies. In the past years, a growing literature has suggested that metacognition and Theory of Mind could theoretically be assumed to belong to the same developmental concept. Since then, only a few studies have provided empirical evidence that metacognition and Theory of Mind are related, and these studies focused on declarative metacognitive knowledge rather than on procedural metacognitive monitoring, as in the present study: N = 159 children were first tested shortly before making the transition to school (aged between 5 1/2 and 7 1/2 years) and one year later at the end of their first grade. Analyses suggest that there is in fact a significant relation between early metacognitive monitoring skills (procedural metacognition) and later Theory of Mind competencies. Notably, language seems to play a crucial role in this relationship. Our results thus bring new insights into research on the development of meta-representation and support the view that metacognition and Theory of Mind are indeed interrelated, although the precise mechanisms remain unclear.

Relevance: 100.00%

Abstract:

Indigenous media as a phenomenon cannot be reduced to a reaction to western hegemony and colonial legacies, but is often rooted in the context of resistance, empowerment, self-determination and the reclaiming of symbolic representation. Therefore I would like to reflect on different cases of indigenous film and participatory video work in an attempt to highlight the multiple dynamics that arise due to the desideratum of self-representation and to finally locate us as anthropologists in that context.

Relevance: 100.00%

Abstract:

The rapid further development of computed tomography (CT) and magnetic resonance imaging (MRI) gave rise to the idea of using these techniques for the postmortem documentation of forensic findings. Until now, only a few institutes of forensic medicine have acquired experience in postmortem cross-sectional imaging. Protocols, image interpretation and visualization have to be adapted to postmortem conditions. In particular, postmortem alterations such as putrefaction and livores, the different temperature of the corpse and the loss of circulation pose a challenge for the imaging process and its interpretation. Advantages of postmortem imaging are the higher exposure and resolution available in CT when there is no concern about the biologic effects of ionizing radiation, and the lack of cardiac motion artifacts during scanning. CT and MRI may become useful tools for postmortem documentation in forensic medicine. In Bern, 80 human corpses had undergone postmortem imaging by CT and MRI prior to traditional autopsy by August 2003. Here, we describe the imaging appearance of postmortem alterations (internal livores, putrefaction, postmortem clotting) and distinguish them from forensic findings of the heart, such as calcification, endocarditis, myocardial infarction, myocardial scarring, injury and other morphological alterations.

Relevance: 100.00%

Abstract:

The FANOVA (or "Sobol'-Hoeffding") decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
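As a reminder of what the FANOVA terms quantify (and not as a reproduction of the KANOVA/GRF machinery of the paper), the Python sketch below estimates first-order Sobol indices, i.e. the variance shares Var(E[f | X_i]) / Var(f) of the FANOVA main effects, by plain Monte Carlo with a Saltelli-style pick-and-freeze estimator. The Ishigami-type test function, the sample size and the uniform inputs are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Ishigami-type test function, rescaled to inputs on [0, 1]^3.
    x = 2.0 * np.pi * (x - 0.5)
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

def first_order_sobol(f, d, n=100_000):
    # Pick-and-freeze estimate of S_i = Var(E[f | X_i]) / Var(f).
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(d):
        BAi = B.copy()
        BAi[:, i] = A[:, i]                 # B with coordinate i frozen to A's values
        indices.append(np.mean(fA * (f(BAi) - fB)) / var)
    return np.array(indices)

print(first_order_sobol(f, d=3))  # roughly [0.31, 0.44, 0.00] for this test function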

Relevance: 100.00%

Abstract:

Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations without requiring likelihood computations. Results: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation. It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from sampling parameters from their prior distributions, simulating data, and computing summary statistics, to estimating posterior distributions, performing model choice, validating the estimation procedure, and visualizing the results.
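The simplest of the algorithms listed above is ABC rejection sampling. The Python sketch below illustrates it on a toy problem (inferring the mean of a normal distribution with known standard deviation); the model, prior, summary statistic and tolerance are arbitrary choices for illustration and have nothing to do with ABCtoolbox or the Microtus arvalis analysis.

import numpy as np

rng = np.random.default_rng(1)

# Toy data: 50 observations from a normal with unknown mean and sd = 1.
observed = rng.normal(2.0, 1.0, size=50)
obs_summary = observed.mean()  # one summary statistic: the sample mean

def abc_rejection(n_draws=100_000, epsilon=0.05):
    # Draw parameters from the prior, simulate data, and keep the draws whose
    # simulated summary statistic is within epsilon of the observed one.
    theta = rng.uniform(-5.0, 5.0, size=n_draws)            # uniform prior
    sims = rng.normal(theta[:, None], 1.0, size=(n_draws, 50))
    sim_summary = sims.mean(axis=1)
    return theta[np.abs(sim_summary - obs_summary) < epsilon]

posterior = abc_rejection()
print(len(posterior), posterior.mean(), posterior.std())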

Relevance: 100.00%

Abstract:

The optimal temporal window of intravenous (IV) computed tomography (CT) cholangiography was prospectively determined. Fifteen volunteers (eight women, seven men; mean age, 38 years) underwent dynamic CT cholangiography. Two unenhanced images were acquired at the porta hepatis. Starting 5 min after initiation of the IV contrast infusion (20 ml iodipamide meglumine 52%), 15 pairs of images were obtained at 5-min intervals. Attenuation of the extrahepatic bile duct (EBD) and the liver parenchyma was measured. Two readers graded visualization of the higher-order biliary branches. The first biliary opacification in the EBD occurred between 15 and 25 min (mean, 22.3 ± 3.2 min) after initiation of the contrast agent. Biliary attenuation plateaued between the 35- and 75-min time points. Maximum hepatic parenchymal enhancement was 18.5 ± 2.7 HU. Twelve subjects demonstrated poor or no visualization of higher-order biliary branches; three showed good or excellent visualization. Body weight correlated significantly with both biliary attenuation and visualization of the higher-order biliary branches (P<0.05). For peak enhancement of the biliary tree, CT cholangiography should be performed no earlier than 35 min after initiation of the IV infusion. For a fixed contrast dose, superior visualization of the biliary system is achieved in subjects with lower body weight.