944 results for 3D model acquisition
Abstract:
9-hydroxystearic acid (9-HSA) is an endogenous lipoperoxidation product whose administration to HT29, a colon adenocarcinoma cell line, induces a proliferative arrest in the G0/G1 phase mediated by direct activation of the p21WAF1 gene, bypassing p53. We have previously shown that 9-HSA controls cell growth and differentiation by inhibiting histone deacetylase 1 (HDAC1) activity, showing interesting features as a new anticancer drug. The interaction of 9-HSA with the catalytic site of the 3D model was tested with a docking procedure: notably, when interacting with the site, the (R)-9-enantiomer is more stable than the (S) one. Thus, in this study, (R)- and (S)-9-HSA were synthesized and their biological activity was tested in HT29 cells. At a concentration of 50 µM, (R)-9-HSA showed a stronger antiproliferative effect than the (S) isomer, as indicated by the growth arrest in G0/G1. The inhibitory effect of (S)-9-HSA on HDAC1, HDAC2 and HDAC3 activity was weaker than that of (R)-9-HSA in vitro, and the inhibitory activity of both the (R)- and the (S)-isomer was higher on HDAC1 than on HDAC2 and HDAC3, demonstrating the stereospecific and selective interaction of 9-HSA with HDAC1. In addition, the histone hyperacetylation caused by 9-HSA treatment was examined by an innovative HPLC/ESI/MS method. Analysis of histones isolated from control and treated HT29 cells confirmed the higher potency of (R)-9-HSA compared to (S)-9-HSA, which severely affected H2A-2 and H4 acetylation. It was also of interest to determine whether the G0/G1 arrest of HT29 cell proliferation could be bypassed by stimulation with the growth factor EGF. Our results showed that 9-HSA-treated cells not only were prevented from proliferating, but also showed decreased [3H]thymidine incorporation after EGF stimulation. Under these conditions, HT29 cells expressed very low levels of cyclin D1, which did not colocalize with HDAC1.
These results suggest that the cyclin D1/HDAC1 complex is required for proliferation. Furthermore, in an effort to understand the possible mechanisms of this effect, we analyzed the degree of internalization of the EGF/EGFR complex and its interactions with HDAC1. The amount of EGF/EGFR/HDAC1 complex increased in 9-HSA-treated cells, but not in serum-starved cells, after EGF stimulation. Our data suggest that the interaction of 9-HSA with the catalytic site of HDAC1 disrupts the HDAC1/cyclin D1 complex and favors EGF/EGFR recruitment by HDAC1, thus enhancing the antiproliferative effects of 9-HSA. In conclusion, 9-HSA is a promising HDAC inhibitor with high selectivity and specificity, capable of inducing cell cycle arrest and histone hyperacetylation, but also able to modulate HDAC1 protein interactions. All these aspects may contribute to the potency of this new antitumor agent.
Abstract:
The aim of this Doctoral Thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts from the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables can be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are introduced as well.
In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct preview of the final product while still in the preliminary design phase. To demonstrate the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the Fiat Avio 1900 JTD, a four-cylinder, four-stroke Diesel engine. Many checks are performed on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations. The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod have been built automatically for each simulation.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison with the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness with long-standing traditional design experience.
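The optimization loop described above can be sketched as a minimal mono-objective genetic algorithm with constraint rejection and elitism. Everything concrete below (the three-gene design vector, the toy mass formula, the single stroke/bore feasibility window) is an illustrative placeholder, not the thesis's actual formulation:

```python
import random

random.seed(0)

# Hypothetical, heavily simplified design vector: (bore [m], stroke [m], n_cylinders).
# The real algorithm handles many more variables and full structural checks.
BOUNDS = [(0.06, 0.12), (0.06, 0.14)]          # bore, stroke
N_CYL_CHOICES = [4, 6, 8]

def mass(ind):
    """Toy objective: total mass grows with displacement and cylinder count (kg)."""
    bore, stroke, n = ind
    return n * (50.0 * bore * bore * stroke * 1e3 + 2.0)

def feasible(ind):
    """Stand-in for the geometric/structural/performance inequality system."""
    bore, stroke, n = ind
    return 0.7 <= stroke / bore <= 1.5          # plausible stroke/bore window

def random_ind():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS] + [random.choice(N_CYL_CHOICES)]

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    out = ind[:]
    if random.random() < rate:
        i = random.randrange(2)
        lo, hi = BOUNDS[i]
        out[i] = min(hi, max(lo, out[i] + random.gauss(0, 0.01)))
    return out

def evolve(pop_size=40, generations=60, elite=2):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        # Infeasible individuals are given a huge penalty (rejected in practice).
        scored = sorted(pop, key=lambda i: mass(i) if feasible(i) else 1e9)
        nxt = scored[:elite]                    # elitism: best survive unchanged
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:pop_size // 2], 2)   # select from fitter half
            nxt.append(mutate(crossover(a, b)))
        pop = nxt
    return min(pop, key=lambda i: mass(i) if feasible(i) else 1e9)

best = evolve()
```

The elitist copy of the top individuals is what the abstract refers to as protecting the fittest from "mutation and recombination disruption".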
Abstract:
The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides the guidelines for the measurement of a well-established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues the same path. To understand how the acoustical qualification should be carried out, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time, energy, psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were performed, and a 3D model of the theatre was realised and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the “average” acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution. Different results were found, according to the varying assumptions underlying each test. An attempt was then made to relate the numerical results emerging from the statistical analysis to the perceptual sphere. Looking for “acoustically equivalent areas”, relative difference limens were considered as threshold values. No rule of thumb emerged. Finally, the significance of the usual representation through mean value and standard deviation, which is meaningful for normally distributed data, was investigated.
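The point that different normality tests can disagree, because each rests on different assumptions and sensitivities, is easy to reproduce in a few lines. The EDT sample below is synthetic, not the Bonci Theatre data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Illustrative stand-in for EDT values measured at each seat (seconds);
# the study uses the actual per-seat measurements of the hall.
edt = rng.normal(loc=1.4, scale=0.1, size=80)

# Two common normality tests: Shapiro-Wilk (sensitive to general
# non-normality) and D'Agostino-Pearson (based on skewness/kurtosis).
shapiro_stat, shapiro_p = stats.shapiro(edt)
dagostino_stat, dagostino_p = stats.normaltest(edt)

normal_by_shapiro = shapiro_p > 0.05
normal_by_dagostino = dagostino_p > 0.05
```

With real room-acoustic data the two booleans need not agree, which is the behaviour reported in the abstract.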
Abstract:
Deep convection over wildfires is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water vapour supersaturations (up to 1%) and the fire-induced high number concentrations of aerosol particles (up to 100,000 cm^-3), create a special setting for aerosol-cloud interactions. A decisive step in the microphysical development of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base, as well as the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out with a cloud parcel model featuring a detailed spectral description of cloud microphysics. The results can be divided into three regimes depending on the ratio of vertical wind speed to aerosol number concentration (w/N_CN): (1) an aerosol-limited regime (high w/N_CN), (2) an updraft-limited regime (low w/N_CN) and (3) a transitional regime (intermediate w/N_CN). The results show that the variability of the initial cloud droplet number concentration in (pyro-)convective clouds is mainly determined by the variability of the vertical wind speed and of the aerosol concentration.
To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialized along a trajectory within the updraft region. This trajectory was calculated by three-dimensional simulations of a pyro-convective event with the model ATHAM. It is found that the cloud droplet number concentration increases with increasing aerosol concentration, while the size of the cloud droplets decreases. The reduced broadening of the droplet spectrum agrees with measurement results and supports the concept of precipitation suppression in heavily polluted clouds. Using the model ATHAM, the dynamical and microphysical processes of pyro-convective clouds were then investigated with two- and three-dimensional simulations, building on a realistic parameterization of aerosol particle activation derived from the results of the activation study. A state-of-the-art two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealized pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol particle number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation occurs mainly through warm microphysical processes; for higher aerosol concentrations, the ice phase becomes more important for rain formation, leading to a delayed onset of precipitation in more polluted atmospheres. Furthermore, it is shown that the composition of the ice nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds.
With very efficient IN, rain forms earlier. The investigation of the influence of the atmospheric background profile shows only a minor effect of the meteorology on the sensitivity of pyro-convective clouds to the aerosol concentration. Finally, it is shown that the heat emitted by the fire has a marked influence on the development and the cloud top height of pyro-convective clouds. In summary, this dissertation investigates in detail the microphysics of pyro-convective clouds using idealized simulations with a cloud parcel model with detailed spectral microphysics and a 3D model with a two-moment scheme. It is shown that the extreme conditions with respect to vertical wind speed and aerosol concentration have a marked influence on the development of pyro-convective clouds.
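The three w/N_CN regimes above can be expressed as a simple classifier. The threshold values below are hypothetical placeholders, since the regimes are defined only qualitatively here:

```python
def activation_regime(w_ms, n_cn_cm3, low=1e-4, high=1e-2):
    """Classify the droplet-activation regime by the ratio w/N_CN.

    w_ms     : updraft speed at cloud base [m s^-1]
    n_cn_cm3 : aerosol (CN) number concentration [cm^-3]
    The 'low' and 'high' thresholds are illustrative, not from the study.
    """
    ratio = w_ms / n_cn_cm3
    if ratio >= high:
        return "aerosol-limited"    # high w/N_CN: nearly all particles activate
    if ratio <= low:
        return "updraft-limited"    # low w/N_CN: supersaturation limits activation
    return "transitional"

# Pyro-convective example: strong updraft but extremely high aerosol load.
regime = activation_regime(20.0, 1e5)   # transitional under these thresholds
```

Even with the 20 m/s updrafts of pyro-convection, the very high aerosol load keeps the ratio small, which is why droplet number in such clouds responds to both quantities.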
Abstract:
Throughout this research, the whole life cycle of a building will be analyzed, with a special focus on the most pressing issues that affect the construction sector nowadays, such as safety. The goal is to enhance the management of the entire construction process in order to reduce the risk of accidents. The contemporary trend is to research new tools capable of reducing, or even eliminating, the most common mistakes that lead to safety risks. That is one of the main reasons why new technologies and tools have been introduced in the field. The one we will focus on is the so-called BIM: Building Information Modeling. The term BIM refers to a wider and more complex analysis tool than a simple 3D modeling package. Through BIM technologies we are able to generate a multi-dimensional 3D model which contains all the information about the project. This innovative approach aims at a better understanding and control of the project by taking the entire life cycle into consideration, resulting in a faster and more sustainable way of management. Furthermore, BIM software allows all the information to be shared among the different aspects of the project and the different participants involved, thus improving cooperation and communication. In addition, BIM software provides smart tools that simulate and visualize the process in advance, thus preventing issues that might not have been taken into consideration during the design process. This leads to higher chances of avoiding risks, delays and cost increases. Using a hospital case study, we will apply this approach to the completion of a safety plan, with a special focus on the construction phase.
Abstract:
GARP (Glycoprotein A Repetitions Predominant) is a surface receptor on regulatory T cells (Tregs) that binds latent TGF-β (Transforming Growth Factor-β). A loss of Treg function results in severe autoimmune diseases such as Immunodysregulation Polyendocrinopathy Enteropathy X-linked syndrome (IPEX), multiple sclerosis (MS) or rheumatoid arthritis (RA). By increasing the activatability of TGF-β, GARP ensures the regulatory phenotype of Tregs and inhibits the expansion of autoreactive TH17 cells. This work focused on the regulation of GARP itself. It could be shown that GARP is a strictly conserved protein within the jawed vertebrates. Database analyses revealed that it first appears in basal bony fishes, together with other components of the adaptive immune response. A 3D model created by homology modelling provided insight into the structure of the receptor and into possible intramolecular disulfide bridges. For in vitro experiments, a soluble variant of GARP was constructed by replacing the transmembrane domain with C-terminal meprin α domains. This variant was reliably secreted into the supernatant in eukaryotic cell culture and could be purified chromatographically. With this recombinant GARP, processing experiments were carried out using proteases associated with autoimmune pathogenesis. It was found that the serine proteases trypsin, neutrophil elastase and plasmin, as well as the metalloprotease MMP2, are able to degrade GARP completely. TGF-β-sensitive proliferation assays showed that the resulting fragments were still able to increase the activatability of TGF-β. In addition to degradation by the proteases mentioned above, it was also observed that MMP9 and ovastacin are able to cleave GARP specifically.
In this work, ovastacin mRNA was described for the first time outside the oocyte, namely in T cells, and GARP was identified as its second protein substrate besides zona pellucida protein 2. The N-terminal fragment generated by MMP9 retains the ability to bind TGF-β, but can no longer facilitate the activatability of TGF-β as intact GARP does. This work showed that GARP is regulated by proteolysis, with the resulting fragments having different influences on the activatability of TGF-β. This knowledge forms the basis for further studies in translational research, with the aim of applying the insights gained on immunomodulation to the therapy of various diseases.
Abstract:
Interest in automatic volume meshing for finite element analysis (FEA) has grown since the appearance of microfocus CT (μCT), whose high resolution allows the assessment of mechanical behaviour at high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications is still an open question. There is a trade-off between smoothing and the quality of the elements in the mesh: excessive smoothing may produce distorted elements and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (with resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, as compared to the reference model, were evaluated. The apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant at 64 μm, due to loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also extend voxel-based meshing to other biomechanical domains where it was not previously used due to lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing oncologists with patient-specific simulations of tumour development in the brain and lungs.
For this type of clinical application, such a fast, automatic, and accurate generation of the mesh is of great benefit.
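The basic "one hexahedron per voxel" approach that the smoothing algorithms start from can be sketched as follows. This is a minimal version with node sharing between adjacent voxels; production meshers add smoothing and element-quality checks on top of this:

```python
import numpy as np

def voxels_to_hex_mesh(mask, voxel_size=1.0):
    """Build the basic 'one hexahedron per voxel' mesh from a binary μCT mask.

    Returns (nodes, elements): nodes as an (N, 3) float array and elements
    as an (E, 8) array of node indices. A minimal sketch only.
    """
    node_id = {}
    nodes, elements = [], []
    offsets = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]  # hex corner ordering
    for ijk in zip(*np.nonzero(mask)):          # every solid voxel becomes one hex
        elem = []
        for off in offsets:
            corner = tuple(a + b for a, b in zip(ijk, off))
            if corner not in node_id:           # share nodes between adjacent voxels
                node_id[corner] = len(nodes)
                nodes.append([c * voxel_size for c in corner])
            elem.append(node_id[corner])
        elements.append(elem)
    return np.array(nodes), np.array(elements)

# 2x1x1 block of solid voxels -> 2 hexahedra sharing one face (12 nodes total).
mask = np.zeros((2, 1, 1), dtype=bool)
mask[:, 0, 0] = True
nodes, elems = voxels_to_hex_mesh(mask, voxel_size=16e-6)   # 16 μm voxels
```

Because every element face is axis-aligned, any inclined or curved trabecular surface comes out stair-stepped, which is exactly the jagged-edge artifact that smoothing is meant to remove.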
Abstract:
Repetitive proteins (RPs) of Trypanosoma cruzi are highly abundant in the parasite and are strongly recognized by sera from Chagas' disease patients. The Flagellar Repetitive Antigen (FRA), which is expressed at all stages of the parasite life cycle, is the RP with the greatest number of amino acids per repeat and has been indicated as one of the most suitable candidates for diagnostic tests because of its high performance in immunoassays. Here we analyzed the influence of the number of repeats on the immunogenic and antigenic properties of the antigen. Recombinant proteins containing one, two and four tandem repeats of FRA (FRA1, FRA2 and FRA4, respectively) were obtained, and the immune response induced by an equal amount of repeats was evaluated in a mouse model. The reactivity of specific antibodies present in sera from patients naturally infected with T. cruzi was also assessed against the FRA1, FRA2 and FRA4 proteins, and the relative avidity was analyzed. We determined that the number of repeats did not increase the humoral response against the antigen, and this result was reproduced whether the repeated motifs were alone or fused to a non-repetitive protein. By contrast, the binding affinity of specific human antibodies increased with the number of repeated motifs in the FRA antigen. We therefore concluded that the high ability of FRA to be recognized by specific antibodies from infected individuals is mainly due to a favorable polyvalent interaction between the antigen and the antibodies. In accordance with the experimental results, a 3D model was proposed and B-cell epitopes in FRA1, FRA2 and FRA4 were predicted.
Abstract:
Information management is a key aspect of successful construction projects. Inaccurate measurements and conflicting data can lead to costly mistakes, and vague quantities can ruin estimates and schedules. Building information modeling (BIM) augments a 3D model with a wide variety of information, which reduces many sources of error and can detect conflicts before they occur. Because new technology is often more complex, it can be difficult to integrate it effectively with existing business practices. In this paper we answer two questions: how can BIM add value to construction projects, and what lessons can be learned from other companies that use BIM or similar technology? Previous research focused on the technology as if it were simply a tool, observing problems that occurred while integrating new technology into existing practices. Our research instead looks at the flow of information through a company and its network, seeing all the actors as part of an ecosystem. Building upon this idea, we propose the metaphor of an information supply chain to illustrate how BIM can add value to a construction project. The paper concludes with two case studies. The first illustrates a failure in the flow of information that could have been prevented by using BIM. The second profiles a leading design firm that has used BIM products for many years and shows the real benefits of doing so.
Abstract:
The scaphoid is one of the eight carpal bones; it lies adjacent to the thumb and is supported proximally by the radius. During a free fall onto an outstretched hand, the impact load is transferred to the scaphoid at its free anterior end. The unique arrangement of the other carpal bones in the palm is also one of the reasons why the load is transferred to the scaphoid: about half of the total load acting upon the carpal bones is transferred to the scaphoid at its distal pole. There are about 10 to 12 clinically observed fracture patterns in the scaphoid due to free falls. The aim of the study is to determine the orientation of the load, the magnitude of the load, and the corresponding fracture pattern. The study includes both static and dynamic finite element models validated by experiments. The scaphoid model was prepared from CT scans of a 27-year-old person; the 2D slices of the CT scans were converted to a 3D model using the MIMICS software. Four loading cases, considered to occur most frequently in clinical practice, were studied. In case (i) the load is applied at the posterior end of the distal pole, whereas in cases (ii), (iii) and (iv) the load is applied at the anterior end in different directions. The model is given a fixed boundary condition in the region supported by the radius during the impact. The same loading and boundary conditions were used in both the static and the dynamic explicit finite element analyses. The site of fracture initiation and the path of fracture propagation were identified using the maximum principal stress/gradient and maximum principal strain/gradient criteria, respectively, in the static and dynamic explicit finite element analyses. Static and dynamic impact experiments were performed on polyurethane foam specimens to validate the finite element results. Experimental results such as the load at fracture, the site of fracture initiation and the path of fracture propagation were compared with the results of the finite element analysis.
Four different types of fracture patterns observed in clinical studies have been identified in this study.
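The maximum-principal-stress initiation criterion used above can be illustrated on a single stress state: the largest eigenvalue of the symmetric Cauchy stress tensor is compared against a strength threshold. The tensor and the threshold below are hypothetical numbers for illustration, not measured scaphoid properties:

```python
import numpy as np

def max_principal_stress(sigma):
    """Largest principal stress of a symmetric 3x3 Cauchy stress tensor.

    The principal stresses are the eigenvalues of the tensor; fracture
    initiation is flagged where the largest one exceeds the strength.
    """
    return np.linalg.eigvalsh(np.asarray(sigma, dtype=float)).max()

# Hypothetical stress state at a candidate initiation site (MPa).
sigma = [[30.0, 10.0, 0.0],
         [10.0, 20.0, 0.0],
         [0.0,  0.0,  5.0]]

STRENGTH_MPA = 35.0                      # hypothetical tensile strength
s1 = max_principal_stress(sigma)         # ~36.18 MPa for this tensor
initiates = s1 > STRENGTH_MPA
```

In the finite element analyses this check is evaluated element by element, and the gradient of the criterion is then followed to trace the fracture propagation path.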
Abstract:
Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended, so it is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions, and the accuracy of the results largely relies on uncertain factors such as the user's memory, food knowledge and portion estimation; as a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking, and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) on smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal; the smartphone recognizes the food items, calculates the volume of the food consumed and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods, in order to review the literature, find its drawbacks and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve the recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods.
An image feature detector and descriptor were developed and a nearest-neighbour classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize food items of regular shape. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce the noise from these sensors.
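The marker-based metric scaling mentioned above (recovering absolute size from an object of known dimensions, such as a standard ID-1 card) reduces to two small conversions. The marker measurement and the volume figure below are made-up numbers for illustration:

```python
# A reconstruction from images is only defined up to scale; an object of
# known physical size in the scene fixes that scale.
CARD_WIDTH_MM = 85.6            # ISO/IEC 7810 ID-1 card width

def metric_scale(marker_width_units, card_width_mm=CARD_WIDTH_MM):
    """Millimetres per reconstruction unit, from the marker's apparent width."""
    return card_width_mm / marker_width_units

def scaled_volume_ml(volume_units3, scale_mm_per_unit):
    """Convert a volume from reconstruction units^3 to millilitres."""
    volume_mm3 = volume_units3 * scale_mm_per_unit ** 3   # volume scales cubically
    return volume_mm3 / 1000.0                            # 1 mL = 1000 mm^3

# Hypothetical example: the card measures 42.8 units wide in the model,
# so the scale is 2 mm per unit, and 31,250 units^3 of food is 250 mL.
s = metric_scale(42.8)
vol_ml = scaled_volume_ml(31_250.0, s)
```

Replacing the marker with IMU-derived camera motion, as the thesis does, changes how the scale is inferred but not this cubic conversion step.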
Abstract:
The aim of this study was to evaluate whether measurements performed on conventional frontal radiographs are comparable to measurements performed on three-dimensional (3D) models of human skulls derived from cone beam computed tomography (CBCT) scans, and whether the latter can be used in longitudinal studies. CBCT scans and conventional frontal cephalometric radiographs were made of 40 dry human skulls, and a 3D model was constructed from each CBCT scan. Standard cephalometric software was used to identify landmarks and to calculate ratios and angles. The same operator identified 10 landmarks on both types of image, five times each, with a time interval of 1 week. Intra-observer reliability was acceptable for all measurements. There was a statistically significant and clinically relevant difference between measurements performed on conventional frontal radiographs and on 3D CBCT-derived models of the same skull; in particular, the angular measurements differed to a clinically relevant degree. We therefore recommend that 3D models not be used for longitudinal research in cases where only two-dimensional (2D) records are available from the past.
Abstract:
We present a new approach to diffuse reflectance estimation for dynamic scenes. Non-parametric image statistics are used to transfer reflectance properties from a static example set to a dynamic image sequence. The approach allows diffuse reflectance estimation for surface materials with inhomogeneous appearance, such as those which commonly occur with patterned or textured clothing. Material editing is also possible by transferring edited reflectance properties. Material reflectance properties are initially estimated from static images of the subject under multiple directional illuminations using photometric stereo. The estimated reflectance together with the corresponding image under uniform ambient illumination form a prior set of reference material observations. Material reflectance properties are then estimated for video sequences of a moving person captured under uniform ambient illumination by matching the observed local image statistics to the reference observations. Results demonstrate that the transfer of reflectance properties enables estimation of the dynamic surface normals and subsequent relighting combined with material editing. This approach overcomes limitations of previous work on material transfer and relighting of dynamic scenes which was limited to surfaces with regions of homogeneous reflectance. We evaluate our approach for relighting 3D model sequences reconstructed from multiple view video. Comparison to previous model relighting demonstrates improved reproduction of detailed texture and shape dynamics.
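The photometric-stereo step that produces the prior reflectance set can be sketched for a single Lambertian pixel: with at least three known, non-coplanar light directions, the product of albedo and surface normal is recovered by least squares. The light setup and pixel values below are synthetic, not the paper's capture rig:

```python
import numpy as np

# Lambertian model: I = L @ (albedo * n), with L the matrix of unit light
# directions (one per image) and n the unit surface normal.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)   # normalize light directions

# Synthetic pixel: true normal pointing at the camera, diffuse albedo 0.8.
n_true = np.array([0.0, 0.0, 1.0])
albedo_true = 0.8
I = L @ (albedo_true * n_true)                     # simulated observed intensities

# Per-pixel least-squares solve: g = albedo * normal.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)                         # reflectance magnitude
normal = g / albedo                                # unit surface normal
```

Run over every pixel of the static multi-illumination images, this yields the diffuse reflectance map whose local statistics are then matched against the ambient-lit video frames.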
Abstract:
BACKGROUND: Excessive and abnormal accumulation of alpha-synuclein (α-synuclein) is a factor contributing to pathogenic cell death in Parkinson's disease. The purpose of this study, based on earlier observations of cell death initiated by Parkinson's disease cerebrospinal fluid (PD-CSF), was to determine the effects of CSF from PD patients on functionally different microglia and astrocyte glial cell lines. Microglia from a human glioblastoma line and astrocytes from fetal brain tissue were cultured, grown to confluence, treated with fixed concentrations of PD-CSF, non-PD disease-control CSF, or control no-CSF medium, then photographed and fluorescently probed for α-synuclein content by deconvolution fluorescence microscopy. Outcome measures included manually counted cell growth patterns from days 1-8, and α-synuclein density and distribution by antibody-tagged, 3D-stacked deconvolution fluorescence imaging. RESULTS: After PD-CSF treatment, microglial growth was extensively reduced, and a non-confluent pattern with morphological changes developed that was not evident in cultures treated with disease-control CSF or no CSF. Astrocyte growth rates were similarly reduced by exposure to PD-CSF, but morphological changes were not consistently noted. PD-CSF-treated microglia showed a significant increase in α-synuclein content by day 4 compared to other treatments (p ≤ 0.02). In microglia only, α-synuclein aggregated and redistributed to perinuclear locations. CONCLUSIONS: Cultured microglia and astrocytes are differentially affected by PD-CSF exposure compared to non-PD-CSF controls. PD-CSF dramatically impacts microglial cell growth, morphology, and α-synuclein deposition compared to astrocytes, supporting the hypothesis of cell-specific susceptibility to PD-CSF toxicity.
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications, yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images and the volume is computed from the resulting shape. The algorithm was tested on food models (dummy foods) of known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
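One simple, computationally affordable way to compute a volume from a reconstructed 3D shape is the convex hull of the point cloud. The paper does not state that this is its exact method, so the sketch below is only illustrative, using a synthetic point cloud of known volume:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(points):
    """Volume of the convex hull of a reconstructed 3D point cloud.

    A common simplification for food volume estimation; real systems
    typically segment the meal from the plate first and may handle
    concave shapes with finer surface meshes.
    """
    return ConvexHull(points).volume

# Synthetic cloud: the 8 corners of a 10 cm cube -> 1000 cm^3 (1 litre).
cube = np.array([[x, y, z] for x in (0.0, 10.0)
                           for y in (0.0, 10.0)
                           for z in (0.0, 10.0)])
vol_cm3 = hull_volume(cube)
```

For concave foods the hull overestimates the true volume, which is one reason reported accuracies sit around 90% rather than higher.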