978 results for virtual models
Abstract:
11beta-Hydroxysteroid dehydrogenase (11beta-HSD) enzymes catalyze the conversion of biologically inactive 11-ketosteroids into their active 11beta-hydroxy derivatives and vice versa. Inhibition of 11beta-HSD1 has considerable therapeutic potential for glucocorticoid-associated conditions including obesity, diabetes, impaired wound healing, and muscle atrophy. Because inhibition of related enzymes such as 11beta-HSD2 and 17beta-HSDs causes sodium retention and hypertension or interferes with sex steroid hormone metabolism, respectively, highly selective 11beta-HSD1 inhibitors are required for successful therapy. Here, we employed the software package Catalyst to develop ligand-based multifeature pharmacophore models for 11beta-HSD1 inhibitors. Virtual screening experiments and subsequent in vitro evaluation of promising hits revealed several selective inhibitors. Efficient inhibition of recombinant human 11beta-HSD1 in intact transfected cells, as well as of the endogenous enzyme in mouse 3T3-L1 adipocytes and C2C12 myotubes, was demonstrated for compound 27, which was able to block subsequent cortisol-dependent activation of glucocorticoid receptors with only minor direct effects on the receptor itself. Our results suggest that inhibitor-based pharmacophore models for 11beta-HSD1, in combination with suitable cell-based activity assays, including assays for related enzymes, can be used for the identification of selective and potent inhibitors.
Abstract:
This article presents a feasibility study with the objective of investigating the potential of multi-detector computed tomography (MDCT) to estimate the bone age and sex of deceased persons. To obtain virtual skeletons, the bodies of 22 deceased persons with known age at death were scanned by MDCT using a special protocol that consisted of high-resolution imaging of the skull, the shoulder girdle (including the upper half of the humeri), the symphysis pubis and the upper halves of the femora. Bone and soft-tissue reconstructions were performed in two and three dimensions. The resulting data were investigated by three anthropologists with different professional experience. Sex was determined by investigating three-dimensional models of the skull and pelvis. As a basic orientation for the age estimation, the complex method according to Nemeskéri and co-workers was applied. The final estimation was performed using additional parameters such as the state of dentition, degeneration of the spine, etc., which were chosen individually by the three observers according to their experience. The results of the study show that the estimation of sex and age is possible by the use of MDCT. Virtual skeletons present an ideal collection for anthropological studies, because they are obtained in a non-invasive way and can be investigated ad infinitum.
Abstract:
Cannabinoid receptor 2 (CB(2) receptor) ligands are potential candidates for the therapy of chronic pain, inflammatory disorders, atherosclerosis, and osteoporosis. We describe the development of pharmacophore models for CB(2) receptor ligands, as well as a pharmacophore-based virtual screening workflow, which resulted in 14 hits for experimental follow-up. Seven compounds were identified with K(i) values below 25 microM. The CB(2) receptor-selective pyridine tetrahydrocannabinol analogue 8 (K(i) = 1.78 microM) was identified as a CB(2) partial agonist. Acetamides 12 (K(i) = 1.35 microM) and 18 (K(i) = 2.1 microM) represent new scaffolds for CB(2) receptor-selective antagonists and inverse agonists, respectively. Overall, our pharmacophore-based workflow yielded three novel scaffolds for the chemical development of CB(2) receptor ligands.
Abstract:
Virtual machines emulating hardware devices are generally implemented in low-level languages and in a low-level style for performance reasons. This trend results in systems that are difficult to understand, difficult to extend, and hard to maintain. As new general techniques for virtual machines arise, it becomes harder to incorporate or test them because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, which exclude all low-level implementation details. We use the pluggable translation toolchain PyPy to translate those models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extensibility while preserving performance.
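The flavor of such a high-level machine model can be illustrated with a minimal sketch. The opcodes, class, and byte-code below are purely illustrative assumptions, not taken from the paper; they are written in plain Python in the restricted style that the PyPy translation toolchain can turn into an optimized executable, with no low-level memory or dispatch details.

```python
# Minimal, hypothetical sketch of a compact whole-system VM model:
# a tiny stack-based byte-code machine with a plain Python dispatch loop.
PUSH, ADD, PRINT, HALT = range(4)

class TinyVM(object):
    def __init__(self, code):
        self.code = code      # flat list of opcodes and operands
        self.stack = []       # operand stack; no explicit memory model
        self.pc = 0           # program counter

    def run(self):
        while True:
            op = self.code[self.pc]
            self.pc += 1
            if op == PUSH:
                self.stack.append(self.code[self.pc])
                self.pc += 1
            elif op == ADD:
                b = self.stack.pop()
                a = self.stack.pop()
                self.stack.append(a + b)
            elif op == PRINT:
                print(self.stack.pop())
            elif op == HALT:
                return

if __name__ == "__main__":
    TinyVM([PUSH, 2, PUSH, 3, ADD, PRINT, HALT]).run()   # prints 5
```

The point of the approach is that a model like this stays readable and testable as ordinary Python, while the translation toolchain reintroduces platform-specific optimizations later.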
Abstract:
In this paper we present a model-based approach for real-time camera pose estimation in industrial scenarios. The line model which is used for tracking is generated by rendering a polygonal model and extracting contours out of the rendered scene. By un-projecting a point on the contour with the depth value stored in the z-buffer, the 3D coordinates of the contour can be calculated. For establishing 2D/3D correspondences the 3D control points on the contour are projected into the image and a perpendicular search for gradient maxima for every point on the contour is performed. Multiple hypotheses of 2D image points corresponding to a 3D control point make the pose estimation robust against ambiguous edges in the image.
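The un-projection and re-projection steps described above can be sketched under a standard pinhole camera assumption; the intrinsics, function names, and example values below are hypothetical and not taken from the paper.

```python
import numpy as np

def unproject(u, v, z, K):
    """Lift pixel (u, v) with metric depth z into camera space using intrinsics K.
    Assumes the non-linear z-buffer value has already been converted to metric depth."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def project(X, K):
    """Project a 3D camera-space point back to pixel coordinates."""
    x = K @ X
    return x[:2] / x[2]

# Illustrative intrinsics and one contour pixel with its depth value.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = unproject(400, 260, 1.5, K)     # 3D control point on the contour
print(P, project(P, K))             # re-projection recovers the original pixel
```

In the tracking loop, control points obtained this way are re-projected with the current pose estimate, and the perpendicular gradient search around each projection supplies the 2D/3D correspondences for pose refinement.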
Abstract:
We consider the problem of approximating the 3D scan of a real object through an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on 2-dimensional projections of the target mesh; both present drawbacks. We follow an approach similar to [IF03] by representing the target via an implicit function, whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps, by approximating first a coarse implicit representation of the whole target, and then finer, local ones; the local approximations are then merged together with a Poisson-based method. We report the results of applying our method on a subset of 3D scans from the Face Recognition Grand Challenge v.1.0.
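A hedged sketch of the fitting idea follows: the target scan is represented by an implicit function (for instance a signed distance field), and the cost sums a robust penalty of that function evaluated at the vertices of the affine combination of examples. The function names, the Huber-style penalty, and the optimizer choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_affine_combination(examples, F, huber_delta=1.0):
    """examples: list of (n_vertices, 3) arrays with shared topology;
    F: callable returning implicit-function values for an (m, 3) array of points."""
    k = len(examples)
    E = np.stack(examples)                      # (k, n, 3)

    def cost(w):
        verts = np.tensordot(w, E, axes=1)      # affine blend of example vertices
        d = F(verts)                            # implicit values at the blended vertices
        # Huber-style robust penalty to damp the influence of outliers.
        quad = 0.5 * d**2
        lin = huber_delta * (np.abs(d) - 0.5 * huber_delta)
        return np.sum(np.where(np.abs(d) <= huber_delta, quad, lin))

    # Affine combination: the weights are constrained to sum to one.
    cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    w0 = np.full(k, 1.0 / k)
    return minimize(cost, w0, constraints=cons).x
```

The coarse-to-fine scheme in the abstract would apply this kind of fit first globally and then on local regions, merging the local results afterwards.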
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single agent settings where observed motions are adapted to new environment conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters as well as to the behavior generation for humanoid robots.
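One plausible, simplified reading of this pipeline is sketched below: each partner's captured motion is reduced to a low-dimensional space (here via PCA) and a regression maps one partner's latent motion to the other's, so the agent's pose can be generated in response to the observed partner pose at run time. The specific dimensionality-reduction and mapping models are assumptions, not necessarily the paper's actual method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def learn_interaction_model(poses_a, poses_b, n_components=8):
    """poses_a, poses_b: (n_frames, n_dofs) synchronized motion-capture data
    of the two interaction partners, e.g. flattened joint-angle vectors."""
    pca_a = PCA(n_components).fit(poses_a)      # low-dimensional model, partner A
    pca_b = PCA(n_components).fit(poses_b)      # low-dimensional model, partner B
    za, zb = pca_a.transform(poses_a), pca_b.transform(poses_b)
    mapping = Ridge(alpha=1.0).fit(za, zb)      # map A's latent motion to B's
    return pca_a, pca_b, mapping

def respond(pose_a, pca_a, pca_b, mapping):
    """Generate the agent's pose in response to the observed partner pose."""
    za = pca_a.transform(pose_a.reshape(1, -1))
    zb = mapping.predict(za)
    return pca_b.inverse_transform(zb)[0]
```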
Abstract:
Background: Statistical shape models are widely used in biomedical research. They are routinely used for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in replication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.
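For readers unfamiliar with the models the VSD hosts training data for, the following minimal sketch shows how a statistical shape model is typically built from meshes in dense point-to-point correspondence: each shape is flattened to a vector and a PCA basis captures the population's variability. This is a generic illustration of the technique, not VSD code.

```python
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_samples, n_vertices, 3) meshes in point-to-point correspondence."""
    X = shapes.reshape(len(shapes), -1)          # one flattened vector per shape
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt                                   # principal modes of shape variation
    variances = s**2 / (len(shapes) - 1)         # variance explained by each mode
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Generate a new shape instance from a few mode coefficients."""
    return (mean + coeffs @ modes[:len(coeffs)]).reshape(-1, 3)
```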
Abstract:
The prognosis for lung cancer patients remains poor. Five-year survival rates have been reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, dose to the lung tumor is limited by normal tissue toxicity. The most prevalent thoracic toxicity is radiation pneumonitis. In order to determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models predicting the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into 3 specific aims. The first two specific aims were focused on improving the fit of the predictive model. In Specific Aim 1 we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2 we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients. The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model results in a different prescription than what was achieved with the clinically used plan, thus demonstrating the utility of a normal tissue toxicity model in personalizing the prescription dose.
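As a generic illustration of the kind of dose-volume-histogram-based risk model the abstract refers to (not the model developed in this work), the sketch below fits a logistic model relating a scalar dose metric such as mean lung dose to observed pneumonitis incidence, and reads off a personalized dose limit at a chosen risk threshold. All names, the choice of metric, and the threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_risk_model(mld_gy, pneumonitis_events):
    """mld_gy: (n, 1) mean lung doses in Gy; pneumonitis_events: (n,) 0/1 outcomes."""
    return LogisticRegression().fit(mld_gy, pneumonitis_events)

def dose_limit(model, risk_threshold=0.2, dose_grid=None):
    """Largest mean lung dose whose predicted pneumonitis risk stays below the threshold."""
    if dose_grid is None:
        dose_grid = np.linspace(0.0, 40.0, 401).reshape(-1, 1)
    risk = model.predict_proba(dose_grid)[:, 1]
    below = dose_grid[risk < risk_threshold]
    return below.max() if len(below) else None
```

In a virtual simulation of the kind described, a limit obtained this way could be used to rescale the prescription dose for each patient and compare the result with the clinically used plan.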
Abstract:
Virtual worlds have moved from being a geek topic to one of mainstream academic interest. This transition is contingent not only on the growing economic, societal and cultural value of these virtual realities and their effect upon real life, but also on their convenience as fields for experimentation, for testing models and paradigms. User creation, however, is not something that has been transplanted from the real to the virtual world, but a phenomenon and a dynamic process that happens from within and is defined through complex relationships between the commercial and the non-commercial, the commodified and the non-commodified, the individual and the communal, the amateur and the professional, art and non-art. Accounting for this complex environment, the present paper explores user-created content in virtual worlds, its dimensions and value, and above all its constraints by code and law. It puts forward suggestions for better understanding and harnessing this creativity.
Abstract:
In this review, the neural underpinnings of the experience of presence are outlined. Firstly, it is shown that presence is associated with activation of a distributed network, which includes the dorsal and ventral visual streams, the parietal cortex, the premotor cortex, mesial temporal areas, the brainstem and the thalamus. Secondly, the dorsolateral prefrontal cortex (DLPFC) is identified as a key node of the network, as it modulates the activity of the network and the associated experience of presence. Thirdly, children lack the strong modulatory influence of the DLPFC on the network because of their immature frontal cortex. Fourthly, it is shown that presence-related measures are influenced by manipulating the activation in the DLPFC using transcranial direct current stimulation (tDCS) while participants are exposed to a virtual roller coaster ride. Finally, the findings are discussed in the context of current models explaining the experience of presence, the rubber hand illusion, and out-of-body experiences.
Abstract:
The ability to view and interact with 3D models has existed for a long time. However, vision-based 3D modeling has seen only limited success in applications, as it faces many technical challenges. Hand-held mobile devices have changed the way we interact with virtual reality environments. Their high mobility and technical features, such as inertial sensors, cameras and fast processors, are especially attractive for advancing the state of the art in virtual reality systems. Also, their ubiquity and fast Internet connections open a path to distributed and collaborative development. However, this path has not been fully explored in many domains. VR systems for real-world engineering contexts are still difficult to use, especially when geographically dispersed engineering teams need to collaboratively visualize and review 3D CAD models. Another challenge is the ability to render these environments at the required interactive rates and with high fidelity. This document presents a mobile virtual reality system for visualizing, navigating and reviewing large-scale 3D CAD models, developed within the CEDAR (Collaborative Engineering Design and Review) project. It focuses on interaction using different navigation modes. The system uses the mobile device's inertial sensors and camera to allow users to navigate through large-scale models. IT professionals, architects, civil engineers and oil industry experts were involved in a qualitative assessment of the CEDAR system, in the form of direct user interaction with the prototypes and audio-recorded interviews about the prototypes. The lessons learned are valuable and are presented in this document. Subsequently, a quantitative study of the different navigation modes was conducted to determine the best mode to use in a given situation.
Abstract:
DynaLearn (http://www.DynaLearn.eu) develops a cognitive artefact that engages learners in an active learning-by-modelling process to develop conceptual system knowledge. Learners create external representations using diagrams. The diagrams capture conceptual knowledge using the Garp3 Qualitative Reasoning (QR) formalism [2]. The expressions can be simulated, confronting learners with their logical consequences. To further aid learners, DynaLearn employs a sequence of knowledge representations (Learning Spaces, LS) with increasing complexity in terms of the modelling ingredients a learner can use [1]. An online repository contains QR models created by experts/teachers and learners. The server runs semantic services [4] to generate feedback at the request of learners via the workbench. The feedback is communicated to the learner via a set of virtual characters, each having its own competence [3]. A specific feedback thus incorporates three aspects: content, character appearance, and a didactic setting (e.g. Quiz mode). In the interactive event we will demonstrate the latest achievements of the DynaLearn project. First, the 6 learning spaces for learners to work with. Second, the generation of feedback relevant to the individual needs of a learner using Semantic Web technology. Third, the verbalization of the feedback via different animated virtual characters, notably: Basic help, Critic, Recommender, Quizmaster and Teachable agent.
Abstract:
The educational platform Virtual Science Hub (ViSH) has been developed as part of the GLOBAL excursion European project. ViSH (http://vishub.org/) is a portal where teachers and scientists interact to create virtual excursions to science infrastructures. The main motivation behind the project was to connect teachers - and in consequence their students - to scientific institutions and the wide range of infrastructures and resources they work with. Thus the idea of a hub was born that would allow the two worlds of scientists and teachers to connect and to innovate science teaching. The core of ViSH's concept design is based on virtual excursions, which allow a number of pedagogical models to be applied. According to our internal definition, a virtual excursion is a tour through some digital context by teachers and pupils on a given topic that is attractive and has an educational purpose. Inquiry-based learning, project-based learning and problem-based learning are the most prominent approaches that a virtual excursion may serve. The domain-specific resources and scientific infrastructures currently available on ViSH focus on life sciences, nanotechnology, biotechnology, grid and volunteer computing. The virtual excursion approach allows an easy combination of these resources into interdisciplinary teaching scenarios. In addition, social networking features support the users in collaborating and communicating in relation to these excursions and thus create a community of interest for innovative science teaching. The design and development phases were performed following a participatory design approach. An important aspect of this process was to create design partnerships amongst all actors involved - researchers, developers, infrastructure providers, teachers, social scientists, and pedagogical experts - early in the project. A joint sense of ownership was created, and important changes during the conceptual phase were implemented in ViSH owing to early user feedback. Technology-wise, ViSH is based on the latest web technologies in order to make it cross-platform compatible, so that it works on several operating systems such as Windows, Mac or Linux, and multi-device accessible, e.g. on desktop, tablet and mobile devices. The platform has been developed in HTML5, the latest standard for web development, assuring that it can run on any modern browser. In addition to the social networking features, a core element of ViSH is the virtual excursions editor. It is a web tool that allows teachers and scientists to create rich mash-ups of learning resources provided by the e-Infrastructures (i.e. remote laboratories and live webcams). These rich mash-ups can be presented in either slides or flashcards format. Taking advantage of the supported web architecture, additional powerful components have been integrated, such as a recommendation engine to provide personalized suggestions about educational content or interesting users, and a videoconference tool, MashMeTV (http://www.mashme.tv/), to enhance real-time collaboration.
Abstract:
There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since the VH cannot handle concavities. On the other hand, structured light projectors make it possible to capture very accurate depth data, and thus to reconstruct realistic faces, but they are too expensive to deploy in numbers. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.
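One plausible ingredient of such a merging step, sketched below under the assumption that the structured-light face points must first be rigidly aligned to the visual-hull mesh before the face region is replaced, is a small point-to-point ICP loop. This is a generic illustration; the paper's actual pipeline may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iterations=30):
    """Rigidly align source (m, 3) face depth points to target (n, 3) VH mesh vertices.
    Returns the accumulated rotation R and translation t."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = source @ R.T + t
        _, idx = tree.query(moved)               # closest VH vertices as correspondences
        matched = target[idx]
        mu_s, mu_m = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_m)  # cross-covariance (Kabsch step)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the incremental transform
    return R, t
```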