815 results for Haptic rendering
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the human thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by image processing of Computed Tomographic (CT) images. The CT images were made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta model compatible with the numerical-method-based computer code. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity boundary conditions were applied at the inlet. The blood flow was assumed homogeneous and incompressible, and the blood was assumed to be a Newtonian fluid. The simulations with idealized models of the thoracic aorta were carried out with a Finite Element Method based computer code, while the simulations with realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure, and Wall Shear Stress (WSS) observed during the fourth cardiac cycle were extensively analyzed. The aim of carrying out the simulations with idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model. The motive behind the choice of three aorta models with distinct features was to understand the dependence of flow dynamics on aorta anatomy. A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. Moreover, the WSS profiles at the roots of the branches showed significant differences with variation of the geometry of the aorta and its branches. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension: one of the idealized aorta models was modified, along with its boundary conditions, to mimic the thoracic aorta under these conditions. The simulations with realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the aortic arch. Unlike in the idealized models, the distribution of flow was nonplanar and heavily guided by the artery anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; it could not be properly observed in the model whose imaging captured only a shorter length of the aortic branches.
Flow recirculation was also observed along the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and the flow was weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS occurred at the junctions of the branches and the aortic arch, low WSS at the proximal part of each junction, and intermediate WSS at the distal part. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and along the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry regions and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aims of this part of the study were, first, to study the effect of stenosis on the flow and WSS distributions, then to understand the effect of the shape of the atherosclerotic plaque on those distributions, and finally to investigate the effect of the severity of lumen blockage. The results revealed that the distribution of WSS is significantly affected by plaque with a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within thoracic aorta models have thus been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. With the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific, image-based models can be used to identify the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
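The following minimal Python sketch illustrates two ingredients described above: a pulsatile inlet velocity boundary condition and the wall shear stress of a Newtonian fluid. The waveform coefficients and near-wall values are hypothetical placeholders; the actual simulations used measured waveforms and FEM/FVM solvers.

```python
import numpy as np

# Assumed blood properties (typical literature values, not from the thesis)
MU = 3.5e-3      # dynamic viscosity [Pa.s], Newtonian assumption
HEART_RATE = 75  # beats per minute

def pulsatile_inlet_velocity(t, v_peak=1.0, v_mean=0.2):
    """Single-harmonic approximation of a pulsatile inlet velocity [m/s].

    The real simulations used measured pressure/velocity waveforms; this
    sinusoidal form is only a placeholder.
    """
    period = 60.0 / HEART_RATE
    phase = 2.0 * np.pi * (t % period) / period
    return max(v_mean + (v_peak - v_mean) * np.sin(phase), 0.0)

def wall_shear_stress(u_tangential, wall_distance):
    """WSS for a Newtonian fluid: tau_w = mu * du/dy evaluated at the wall.

    `u_tangential` is the tangential velocity at the first near-wall cell,
    `wall_distance` its distance to the wall (first-order approximation).
    """
    return MU * u_tangential / wall_distance

# Example: sample the fourth cardiac cycle, as analyzed in the study.
period = 60.0 / HEART_RATE
times = np.linspace(3 * period, 4 * period, 50)
inlet = [pulsatile_inlet_velocity(t) for t in times]
print("peak inlet velocity in cycle 4: %.2f m/s" % max(inlet))
print("WSS example: %.3f Pa" % wall_shear_stress(u_tangential=0.05,
                                                 wall_distance=1e-4))
```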
Abstract:
The role of genetic factors in the pathogenesis of Alzheimer's disease (AD) is not completely understood. In order to improve this understanding, the cerebral glucose metabolism of seven monozygotic and nine dizygotic twin pairs discordant for AD was compared to that of 13 unrelated controls using positron emission tomography (PET). Traditional region-of-interest analysis revealed no differences between the non-demented dizygotic co-twins and controls. In contrast, in voxel-level and automated region-of-interest analyses, the non-demented monozygotic co-twins displayed a lower metabolic rate in temporal and parietal cortices as well as in subcortical grey matter structures when compared to controls. Again, no reductions were seen in the non-demented dizygotic co-twins. The reductions seen in the non-demented monozygotic co-twins may indicate a higher genetically mediated risk of AD, or genetically mediated hypometabolism possibly rendering them more vulnerable to AD pathogenesis. With no disease-modifying treatment available for AD, prevention of dementia is of the utmost importance. A total of 2,165 twins of the Finnish Twin Cohort, at least 65 years old and with questionnaire data from 1981, participated in a validated telephone interview assessing cognitive function between 1999 and 2007. Subjects reporting heavy alcohol drinking in 1981 had an elevated risk of cognitive impairment over 20 years later compared to light drinkers. In addition, binge drinking was associated with an increased risk even when total alcohol consumption was controlled for, suggesting that binge drinking is an independent risk factor for cognitive impairment. Compared to light drinkers, non-drinkers also had an increased risk of cognitive impairment. Midlife hypertension, obesity, and low leisure-time physical activity, but not hypercholesterolemia, were significant risk factors for cognitive impairment, and the accumulation of risk factors increased the risk in an additive manner. A previously postulated dementia risk score based on midlife demographic and cardiovascular factors was validated; it predicted cognitive impairment well, and the risk increased significantly with higher scores. However, the risk score is not accurate enough for clinical use without further testing.
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central requirements the architecture was designed for are hardware independence and the use of sensors during grasping; these properties enable multiple different robotic platforms to be used within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand instead of analytically solving for stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, enabling the method to be used with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties such as pose can be inferred together with their uncertainty. This uncertainty is then utilized by a grasp planning process that plans stable grasps under the inferred uncertainty.
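The following is a minimal sketch of the data-driven grasp stability idea: a classifier is trained directly on tactile/haptic feature vectors instead of an analytical contact model. The features, labels, and choice of a random forest are illustrative assumptions, not the pipeline actually used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical tactile/haptic feature vectors for grasp attempts,
# e.g. per-finger mean pressure, contact area, and joint torques.
n_grasps, n_features = 200, 12
X = rng.normal(size=(n_grasps, n_features))
# Synthetic labels: 1 = grasp held when lifted, 0 = object slipped.
# Here stability is (arbitrarily) tied to overall contact pressure.
y = (X[:, :4].sum(axis=1) + 0.5 * rng.normal(size=n_grasps) > 0).astype(int)

# Learn stability directly from the sensor data: no hand kinematics
# or analytical contact model is required.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], y[:150])

print("held-out accuracy:", clf.score(X[150:], y[150:]))
print("P(stable) for a new grasp:", clf.predict_proba(X[150:151])[0, 1])
```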
Abstract:
PURPOSE: To evaluate changes to the pelvic floor of primiparous women with different delivery modes, using three-dimensional ultrasound. METHODS: A prospective cross-sectional study on 35 primiparae divided into groups according to delivery mode: elective cesarean delivery (n=10), vaginal delivery (n=16), and forceps delivery (n=9). Three-dimensional ultrasound of the pelvic floor was performed on the second postpartum day with the patient at rest. A convex volumetric transducer (RAB4-8L) was used in contact with the labia majora, with the patient in the gynecological position. Biometric measurements of the urogenital hiatus were taken in the axial plane on images in rendering mode, in order to assess the area, anteroposterior and transverse diameters, average thickness, and avulsion of the levator ani muscle. Differences between groups were evaluated by determining the mean differences and their respective 95% confidence intervals. The proportions of levator ani muscle avulsion were compared between elective cesarean section and vaginal birth using Fisher's exact test. RESULTS: The mean areas of the urogenital hiatus in the vaginal and forceps deliveries were 17.0 and 20.1 cm², respectively, versus 12.4 cm² in the control group (elective cesarean). Avulsion of the levator ani muscle was observed only in women who underwent vaginal delivery (3/25); however, the difference between the cesarean section and vaginal delivery groups was not statistically significant (p=0.5). CONCLUSION: Transperineal three-dimensional ultrasound was useful for assessing the pelvic floor of primiparous women, allowing pelvic morphological changes to be differentiated according to delivery mode.
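A minimal sketch of the avulsion-proportion comparison with Fisher's exact test is shown below; the 2x2 table is only loosely reconstructed from the counts quoted in the abstract and may not match the study's exact table.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (avulsion yes/no), loosely based on
# the counts quoted in the abstract (3/25 vaginal-type deliveries vs 0/10
# elective cesareans); the table actually analyzed in the study may differ.
table = [[3, 22],   # vaginal/forceps deliveries: avulsion, no avulsion
         [0, 10]]   # elective cesarean:          avulsion, no avulsion

odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test p-value: {p_value:.2f}")
```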
Abstract:
The problem of hell is a specific form of the problem of evil that can be expressed in terms of a set of putatively incompatible statements: 1. An omnipotent God could create a world in which all moral agents freely choose life with God. 2. An omnibenevolent God would not create a world with the foreknowledge that some (perhaps a significant proportion) of God's creatures would end up in hell. 3. An omniscient God would know which people will end up in hell. 4. Some people will end up forever in hell. Since the late twentieth century, a number of British and North American philosophical theologians, inspired by C.S. Lewis, have developed a new approach to answering the problem of hell. Very little work has been done to systematize this category of perspectives on the duration, quality, purpose, and finality of hell; indeed, there is no consensus among scholars as to what such an approach should be called. In this work I call this perspective issuantism. Starting from the works of a wide range of issuantist scholars, I distill what I believe to be the essence of issuantist perspectives on hell: hell is a state that does not result in universal salvation, characterized by the insistence that both heaven and hell must issue from the love of God, an affirmation of libertarian human freedom, and a rejection of retributive interpretations of hell. These sine qua non characteristics form what I have labeled basic issuantism. I proceed to show that basic issuantism by itself does not provide an adequate answer to the problem of hell. The issuantist scholars themselves, however, recognize this weakness and add a wide range of possible supplements to their basic issuantism, and some of these supplemented versions succeed in presenting reasonable answers to the problem of hell. One of the key reasons for the development of issuantist views of hell is a perceived failure on the part of conditionalists, universalists, and defenders of hell as eternal conscious torment to give adequate answers to the problem of hell. It is my conclusion, however, that with the addition of some of the same supplements, versions of conditionalism and of hell as eternal conscious torment can be advanced that succeed just as well as those of the issuantists, thus rendering some of the issuantist critique of non-issuantist perspectives on hell unfounded.
Abstract:
This work introduced Android as a hardware and application platform and described how the user interface of an Android game application can be kept consistent across different display devices using scaling factors and anchoring. The second part of the work dealt with simple ways to improve the performance of game applications. Of these, a low-resolution draw buffer and the culling of objects outside the view were selected for more detailed measurements. In the measurements, the selected methods affected the performance of the demo application considerably. The work was limited to Android programming in Java without external libraries, so that its results can easily be applied in as many different use cases as possible.
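A minimal sketch of the scaling-factor and anchoring idea is given below. The reference resolution, anchor names, and placement function are hypothetical illustrations written in Python; the thesis itself targeted Android in Java without external libraries.

```python
# Hypothetical illustration of keeping a game UI consistent across screens:
# element positions are defined against a reference resolution and an anchor,
# then mapped to the actual display with per-axis scale factors.

REF_W, REF_H = 1280, 720  # assumed design-time reference resolution

ANCHORS = {
    "top_left":     (0.0, 0.0),
    "top_right":    (1.0, 0.0),
    "bottom_left":  (0.0, 1.0),
    "bottom_right": (1.0, 1.0),
    "center":       (0.5, 0.5),
}

def place(offset_x, offset_y, anchor, screen_w, screen_h):
    """Map a design-time offset (relative to an anchor) to screen pixels."""
    sx, sy = screen_w / REF_W, screen_h / REF_H   # per-axis scale factors
    ax, ay = ANCHORS[anchor]
    return (ax * screen_w + offset_x * sx,
            ay * screen_h + offset_y * sy)

# The same button definition lands in the same visual spot on both screens.
print(place(-200, 50, "top_right", 1280, 720))   # reference device
print(place(-200, 50, "top_right", 1920, 1080))  # larger display
```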
Abstract:
NifA is the transcriptional activator of the nif genes in Proteobacteria. It is usually regulated by nitrogen and oxygen, allowing biological nitrogen fixation to occur under appropriate conditions. NifA proteins have a typical three-domain structure: a regulatory N-terminal GAF domain, which is involved in control by fixed nitrogen and is not strictly required for activity; a catalytic central AAA+ domain, which catalyzes open complex formation; and a C-terminal DNA-binding domain. In Herbaspirillum seropedicae, a β-proteobacterium capable of colonizing Gramineae of agricultural importance, NifA regulation by ammonium involves its N-terminal GAF domain and the signal transduction protein GlnK. When the GAF domain is removed, the protein can still activate transcription of the nif genes; however, ammonium regulation is lost. In this work, we generated eight constructs carrying point mutations in H. seropedicae NifA and analyzed their effect on nifH transcription in Escherichia coli and H. seropedicae. Mutations K22V, T160E, M161V, L172R, and A215D resulted in inactive proteins. Mutations Q216I and S220I produced partially active proteins whose regulation was similar to that of wild-type NifA. However, mutation G25E, located in the GAF domain, resulted in an active protein that did not require GlnK for activity and was partially sensitive to ammonium. This suggests that G25E may affect the negative interaction between the N-terminal GAF domain and the catalytic central domain under high ammonium concentrations, thus rendering the protein constitutively active, or that G25E could lead to a conformational change comparable to that induced when GlnK interacts with the GAF domain.
Abstract:
The objective of this study was to obtain babassu coconut milk powder microencapsulated by spray drying, using gum Arabic as the wall material. Coconut milk was extracted by peeling and grinding the babassu (with two parts of water), followed by vacuum filtration. The milk was pasteurized at 85 °C for 15 minutes and homogenized to break up the fat globules, rendering the milk a uniform consistency. A central composite rotatable design was used with the independent variables inlet air temperature of the dryer (170-220 °C) and gum Arabic concentration (10-20%, w/w), and the responses moisture content (0.52-2.39%), hygroscopicity (6.98-9.86 g adsorbed water/100 g solids), water activity (0.14-0.58), lipid oxidation (0.012-0.064 meq peroxide/kg oil), and process yield (20.33-30.19%). All variables significantly influenced the responses evaluated. The microencapsulation was optimized for maximum process yield and minimal lipid oxidation. The coconut milk powder obtained at the optimum conditions was characterized in terms of morphology, particle size distribution, bulk and absolute density, porosity, and wettability.
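As an illustration of how responses from a central composite rotatable design are typically modeled, the sketch below fits a second-order response surface to invented (temperature, gum concentration) points; the data are placeholders and do not reproduce the study's measurements.

```python
import numpy as np

# Hypothetical design points: inlet air temperature [°C], gum Arabic [% w/w]
X = np.array([[170, 10], [170, 20], [220, 10], [220, 20],
              [195, 15], [195, 15], [160, 15], [230, 15],
              [195, 8],  [195, 22]], dtype=float)
# Invented process-yield responses [%] just to make the fit runnable.
y = np.array([21.0, 24.5, 26.0, 30.0, 27.5, 27.8, 20.5, 28.0, 22.0, 29.0])

def quadratic_terms(x):
    """Second-order model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Least-squares fit of the response surface y = b0 + b1*x1 + ... + b5*x1*x2
coeffs, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
print("fitted coefficients:", np.round(coeffs, 4))

# Predicted yield at a candidate condition (e.g. 210 °C, 18 % gum Arabic)
candidate = np.array([[210.0, 18.0]])
print("predicted yield:", quadratic_terms(candidate) @ coeffs)
```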
Abstract:
This thesis describes the process of building software for transport accessibility analysis. The goal was to create software that is easy to distribute and simple to use for users without a particular background in geographical data analysis. It was shown that existing tools are not suited to this task because of their complex interfaces or significant rendering times. The goal was accomplished by applying modern approaches to building web applications, such as maps based on vector tiles, the FLUX architectural design pattern, and module bundling. It was found that vector tiles have considerable advantages over image-based tiles, such as faster rendering and real-time styling.
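A minimal, framework-free sketch of the FLUX-style unidirectional data flow (dispatcher, store, view) is shown below; the store contents and action names are hypothetical, and it is written in Python rather than the application's actual JavaScript modules.

```python
# Sketch of the FLUX pattern: actions flow through a single dispatcher into
# stores, and views re-render from store state (one-way data flow).

class Dispatcher:
    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        self._callbacks.append(callback)

    def dispatch(self, action):
        for callback in self._callbacks:
            callback(action)

class AccessibilityStore:
    """Holds, e.g., the travel-time layer currently shown on the map."""
    def __init__(self, dispatcher):
        self.state = {"origin": None, "travel_times": {}}
        self.listeners = []
        dispatcher.register(self.handle)

    def handle(self, action):
        if action["type"] == "SET_ORIGIN":
            self.state["origin"] = action["payload"]
            for listener in self.listeners:
                listener(self.state)

dispatcher = Dispatcher()
store = AccessibilityStore(dispatcher)
store.listeners.append(lambda state: print("re-render map for", state["origin"]))

# A UI control emits an action; the store updates; the view re-renders.
dispatcher.dispatch({"type": "SET_ORIGIN", "payload": (60.17, 24.94)})
```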
Abstract:
Architectural rendering for Moulton Hall, Chapman College, Orange, California. Completed in 1975 (2 floors, 44,592 sq.ft.), this building is named in memory of an artist and patroness of the arts, Nellie Gail Moulton. Within this structure are the departments of Art, Communications, and Theatre/Dance as well as the Guggenheim Gallery and Waltmar Theatre.
Abstract:
Infrared thermography is a non-invasive technique that measures the mid- to long-wave infrared radiation emanating from all objects and converts it to temperature. As an imaging technique, the value of modern infrared thermography is its ability to produce a digitized image or high-speed video rendering a thermal map of the scene in false colour. Since temperature is an important environmental parameter influencing animal physiology, and metabolic heat production is an energetically expensive process, measuring temperature and energy exchange in animals is critical to understanding physiology, especially under field conditions. As a non-contact approach, infrared thermography provides a non-invasive complement to physiological data gathering. One caveat, however, is that only surface temperatures are measured, which directs much research toward thermal events occurring at the skin and insulating regions of the body. As an imaging technique, infrared thermal imaging is also subject to certain uncertainties that require physical modeling, which is typically handled by built-in software. Infrared thermal imaging has enabled insights into the comparative physiology of phenomena ranging from thermogenesis, peripheral blood flow adjustments, and evaporative cooling to respiratory physiology. In this review, I provide background and guidelines for the use of thermal imaging, aimed primarily at field physiologists and biologists interested in thermal biology. I also discuss some of the better-known approaches and discoveries revealed by thermal imaging, with the objective of encouraging more quantitative assessment.
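As a rough sketch of the physics behind converting measured infrared radiation to a surface temperature, the example below applies the Stefan-Boltzmann relation with an emissivity correction; real thermal cameras use band-limited Planck models with atmospheric and reflected-temperature compensation in their built-in software, so this is only a conceptual illustration.

```python
# Simplified radiance-to-temperature conversion using the Stefan-Boltzmann
# law with an emissivity correction.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def surface_temperature(total_radiosity, emissivity, t_reflected_k):
    """Estimate surface temperature [K] from total emitted+reflected power.

    total_radiosity : measured power per unit area [W/m^2]
    emissivity      : surface emissivity (e.g. ~0.95-0.98 for skin or fur)
    t_reflected_k   : apparent reflected (background) temperature [K]
    """
    reflected = (1.0 - emissivity) * SIGMA * t_reflected_k ** 4
    emitted = total_radiosity - reflected
    return (emitted / (emissivity * SIGMA)) ** 0.25

# Example: radiosity consistent with ~33 °C skin in a 20 °C environment.
t_skin = 306.15
radiosity = 0.98 * SIGMA * t_skin ** 4 + 0.02 * SIGMA * 293.15 ** 4
print("%.2f K" % surface_temperature(radiosity, 0.98, 293.15))  # ~306 K
```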
Abstract:
"Thèse présentée à la Faculté des études supérieures de l'Université de Montréal en vue de l'obtention du grade de Docteur en droit (LL.D.) et à l'Université Panthéon-Assas (Paris II) Droit-économie-Sciences Sociales en vue de l'obtention du grade de Docteur en droit (Arrêté du 30 mars 1992 modifié par l'arrêté du 25 avril 2002)"
Abstract:
The search for reliable energy sources with a low environmental cost is booming. Hydrogen, being a clean and simple energy carrier, could serve as the energy vector of the future. An ideal solution for energy needs involves renewable production of hydrogen. Among the possibilities for such a process, biological hydrogen production, also called biohydrogen, is an excellent alternative. Hydrogen is the product of several bacterial metabolic pathways, but the yield of substrate conversion to hydrogen is generally low, preventing the development of a practical hydrogen production process. For example, when hydrogen is produced by nitrogenase under photofermentation conditions, each molecule of hydrogen formed requires 4 ATP, which makes the process inefficient. Non-sulfur photosynthetic bacteria can grow under a variety of conditions. According to genomic studies, Rhodospirillum rubrum and Rhodopseudomonas palustris possess an FeFe hydrogenase that could allow them to produce hydrogen very efficiently by anaerobic fermentation. There is, however, very little information on the regulation of the synthesis of this hydrogenase or on the fermentation pathways in which it participates. Overexpression of this enzyme could potentially improve the hydrogen production yield. This study aims to learn more about this enzyme by attempting to overexpress it under conditions favoring hydrogen production. The use of organic residues as a substrate for hydrogen production will also be studied.
Abstract:
Classical MHC class II molecules present antigenic peptides to CD4+ T lymphocytes. This presentation is regulated by two non-classical molecules: HLA-DM catalyzes the release of CLIP and the loading of peptides, and HLA-DO modulates the activity of DM. Insufficient expression in insect cells prevents crystallization experiments on DO, probably because of its conformation, which renders DO unstable and unable to exit the endoplasmic reticulum (ER). DM corrects the conformation of DO and allows its exit from the ER. Also, through its unique disulfide bonds, DM adopts a stable conformation and can exit the ER without binding another molecule. We attempted to correct the conformation of DO by introducing cysteines to establish bonds homologous to those of DM; the conformation of DO was not corrected. Furthermore, we increased the expression of DO by introducing a partial Kozak sequence. We also studied the effect of DM on DO expression: DM promoted the expression of DO, probably by decreasing its degradation. Each chain of the DMαβ dimer is involved in the oxidation of its partner chain. The non-optimal conformation of DO could reflect an inability of the α or β chain to promote the oxidation of its partner, a problem that DM would correct. Our Western blot analysis showed, however, that DM does not modify the oxidation state of DOα and DOβ. Finally, we studied the DO-DM interaction. The amino acid DOαE41 is involved in this binding, and some of the amino acids between α80 and α84 could also be involved. We mutated amino acids in this region of DOα; the residues tested do not appear to be involved in DO-DM binding. Obtaining the three-dimensional structure of DO and characterizing its oxidative state and its binding to DM will lead to a better understanding of its role.
Abstract:
Lighting design is a task that is normally done manually, where artists must manipulate the parameters of several light sources to obtain the desired result. This task is difficult because it is not intuitive. Several systems already exist that allow drawing directly on objects in order to position or modify light sources. Unfortunately, these systems have several limitations, such as considering only local illumination or requiring a fixed camera; in both cases, this limits the accuracy or versatility of these systems. Global illumination is important because it adds enormously to the realism of a scene by capturing all the interreflections of light on surfaces, which implies that light sources can influence surfaces that are not directly exposed to them. In this thesis, we focus on a subproblem of lighting design: the selection and manipulation of light source intensities. We present two systems that allow the user to paint incident light intentions on objects in a 3D scene in order to modify the surface illumination. From these brush strokes, the system automatically finds the light sources that should be modified and changes their intensities to produce the desired result. The novelty lies in the handling of global illumination, transparent surfaces, and participating media, and in the fact that the camera is not fixed. We also present different strategies for selecting which light sources to modify. The first system uses an environment map as an intermediate representation of the environment around the objects; the second system stores the environment information for each vertex of each object.
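One common way to realize the "paint the desired illumination, then solve for the lights" idea is to exploit the fact that, for fixed light positions, outgoing radiance is linear in the source intensities: each painted point gives one linear equation, and a non-negative least-squares solve yields new intensities. The sketch below only illustrates that linearity with invented per-light contributions; the thesis systems additionally handle global illumination, transparent surfaces, participating media, and a free camera via environment maps or per-vertex storage.

```python
import numpy as np
from scipy.optimize import nnls

# Per-light contribution of unit-intensity sources at the painted points.
# In a full system each column would come from a (global-illumination)
# rendering with only that light on; here the numbers are invented.
# rows = painted surface points, columns = light sources
A = np.array([
    [0.80, 0.05, 0.10],
    [0.30, 0.40, 0.05],
    [0.05, 0.70, 0.20],
    [0.10, 0.10, 0.60],
])

# Target brightness painted by the artist at those points.
b = np.array([1.0, 0.6, 0.9, 0.3])

# Non-negative least squares keeps light intensities physically meaningful.
intensities, residual = nnls(A, b)

print("new light intensities:", np.round(intensities, 3))
print("achieved illumination:", np.round(A @ intensities, 3))
```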