914 results for 3D model
Abstract:
Budgeting a building quickly and accurately is a challenge faced by companies in the construction sector. Cost estimation is based on the quantity takeoff, a quantification process that has historically been performed through the analysis of the project, the scope of work, and project information contained in 2D drawings, text files, and spreadsheets. This method often proves flawed, affecting management decisions, since it is closely coupled to time and cost management. In this scenario, this work presents a critical analysis of the conventional quantity takeoff process, based on quantification from 2D drawings, compared against the software Autodesk Revit 2016, which uses building information modeling concepts for automated quantity takeoff from a 3D model of the building. It is noted that the 3D modeling process should be aligned with the goals of budgeting. BIM-based programs provide several benefits over the traditional quantity takeoff process, representing gains in productivity, transparency, and accuracy.
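To make the automated takeoff concrete, here is a minimal, hypothetical sketch of the aggregation step a BIM tool performs behind the scenes: grouping model elements by category and summing their quantities. The element records and field names are illustrative assumptions, not the Revit API.

```python
from collections import defaultdict

# Hypothetical BIM elements extracted from a 3D model (illustrative data,
# not the Revit API): each carries a category, a unit, and a quantity.
elements = [
    {"category": "Walls",   "unit": "m3", "quantity": 12.4},
    {"category": "Walls",   "unit": "m3", "quantity": 8.1},
    {"category": "Columns", "unit": "m3", "quantity": 2.7},
    {"category": "Slabs",   "unit": "m2", "quantity": 140.0},
]

def quantity_takeoff(elements):
    """Aggregate quantities per (category, unit) pair."""
    totals = defaultdict(float)
    for e in elements:
        totals[(e["category"], e["unit"])] += e["quantity"]
    return dict(totals)

for (category, unit), total in quantity_takeoff(elements).items():
    print(f"{category}: {total:.2f} {unit}")
```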
Abstract:
When there is a failure of the external sheath of a flexible pipe, a high hydrostatic pressure is transferred to its internal plastic layer and consequently to its interlocked carcass, leading to the possibility of collapse. The design of a flexible pipe must predict the maximum external pressure the carcass layer can be subjected to without collapse. This value depends on the initial ovalization due to manufacturing tolerances. To study this problem, two numerical finite element models were developed to simulate the behavior of the carcass subjected to external pressure, including the plastic behavior of the materials. The first one is a full 3D model and the second one is a 3D ring model, both composed of solid elements. An interesting conclusion is that both models provide the same results. An analytical model using an equivalent thickness approach for the carcass layer was also constructed. A good correlation between the analytical and numerical models was achieved for pre-collapse behavior, but the collapse pressure value and post-collapse behavior were not well predicted by the analytical model. [DOI: 10.1115/1.4005185]
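As a hedged illustration of the equivalent-thickness idea mentioned in the abstract (a common formulation in flexible-pipe design, not necessarily the exact one used in the paper), the interlocked carcass profile is replaced by a homogeneous layer whose bending inertia per unit length matches that of the profile, after which a classical elastic collapse estimate for a long, initially round tube can serve as a pre-collapse reference:

```latex
% Equivalent thickness of the carcass layer: match bending inertia per unit length.
% I_min : minimum moment of inertia of the carcass profile cross-section
% L_p   : pitch length of the profile along the pipe axis
t_{eq} = \left( \frac{12\, I_{\min}}{L_p} \right)^{1/3}

% Classical elastic collapse pressure of a long, initially round tube of mean
% radius R (Timoshenko-type estimate), usable only as a pre-collapse reference:
p_{cr} = \frac{E\, t_{eq}^{3}}{4\,\left(1-\nu^{2}\right) R^{3}}
```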
Abstract:
Induction of apoptotic cell death in response to chemotherapy and other external stimuli has proved extremely difficult to achieve in melanoma, leading to tumor progression, metastasis formation and resistance to therapy. A promising approach for cancer chemotherapy is the inhibition of proteasomal activity, as the half-life of the majority of cellular proteins is under proteasomal control and inhibitors have been shown to induce cell death programs in a wide variety of tumor cell types. 4-Nerolidylcatechol (4-NC) is a potent antioxidant whose cytotoxic potential has already been demonstrated in melanoma tumor cell lines. Furthermore, 4-NC was able to induce the accumulation of ubiquitinated proteins, including classic targets of this process such as Mcl-1. As shown for other proteasomal inhibitors in melanoma, the cytotoxic action of 4-NC is dependent, in a time-related manner, on the pro-apoptotic protein Noxa, which is able to bind and neutralize Mcl-1. We demonstrate the role of 4-NC as a potent inducer of ROS and p53. The use of an artificial skin model containing melanoma also provided evidence that 4-NC prevented melanoma proliferation in a 3D model that more closely resembles normal human skin.
Abstract:
9-Hydroxystearic acid (9-HSA) is an endogenous lipoperoxidation product; its administration to HT29, a colon adenocarcinoma cell line, induced a proliferative arrest in the G0/G1 phase mediated by a direct activation of the p21WAF1 gene, bypassing p53. We have previously shown that 9-HSA controls cell growth and differentiation by inhibiting histone deacetylase 1 (HDAC1) activity, showing interesting features as a new anticancer drug. The interaction of 9-HSA with the catalytic site of a 3D model of HDAC1 was tested with a docking procedure: notably, when interacting with the site, the (R)-9-enantiomer is more stable than the (S) one. Thus, in this study, (R)- and (S)-9-HSA were synthesized and their biological activity tested in HT29 cells. At a concentration of 50 µM, (R)-9-HSA showed a stronger antiproliferative effect than the (S) isomer, as indicated by the growth arrest in G0/G1. The inhibitory effect of (S)-9-HSA on HDAC1, HDAC2 and HDAC3 activity was weaker than that of (R)-9-HSA in vitro, and the inhibitory activity of both the (R)- and the (S)-9-HSA isomers was higher on HDAC1 than on HDAC2 and HDAC3, thus demonstrating the stereospecific and selective interaction of 9-HSA with HDAC1. In addition, the histone hyperacetylation caused by 9-HSA treatment was examined by an innovative HPLC/ESI/MS method. Analysis of histones isolated from control and treated HT29 cells confirmed the higher potency of (R)-9-HSA compared to (S)-9-HSA, strongly affecting H2A-2 and H4 acetylation. It was also of interest to determine whether the G0/G1 arrest of HT29 cell proliferation could be bypassed by stimulation with the growth factor EGF. Our results showed that 9-HSA-treated cells were not only prevented from proliferating, but also showed decreased [3H]thymidine incorporation after EGF stimulation. In this condition, HT29 cells expressed very low levels of cyclin D1, which did not colocalize with HDAC1. These results suggested that the cyclin D1/HDAC1 complex is required for proliferation. Furthermore, in an effort to understand the possible mechanisms of this effect, we analyzed the degree of internalization of the EGF/EGFR complex and its interactions with HDAC1. The EGF/EGFR/HDAC1 complex quantitatively increases in 9-HSA-treated cells, but not in serum-starved cells, after EGF stimulation. Our data suggest that the interaction of 9-HSA with the catalytic site of HDAC1 disrupts the HDAC1/cyclin D1 complex and favors EGF/EGFR recruitment by HDAC1, thus enhancing the antiproliferative effects of 9-HSA. In conclusion, 9-HSA is a promising HDAC inhibitor with high selectivity and specificity, capable of inducing cell cycle arrest and histone hyperacetylation, but also able to modulate HDAC1 protein interactions. All these aspects may contribute to the potency of this new antitumor agent.
Abstract:
The aim of this Doctoral Thesis is to develop a genetic algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum values of crankshaft and propeller shaft speed and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are introduced as well. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist method is applied, in order to protect the fittest individuals from possible disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel engine. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations.
The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod have been built automatically for each simulation. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison with the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
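A minimal, self-contained sketch of the kind of single-objective genetic algorithm described above follows: mass minimization with constraints handled through a penalty, plus elitism, tournament selection, blend crossover and mutation. The design variables, bounds, and the toy mass/constraint functions are illustrative assumptions, not the thesis implementation (which runs in the Matlab® environment).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative design vector: [bore, stroke, crank web thickness] in mm.
LOWER = np.array([60.0, 50.0, 15.0])
UPPER = np.array([110.0, 100.0, 40.0])

def mass(x):
    """Toy objective: surrogate for the total mass of the propulsive components."""
    return 0.02 * x[0] ** 2 + 0.03 * x[1] ** 2 + 0.5 * x[2]

def constraint_violation(x):
    """Toy structural inequality (hypothetical); returns 0 when feasible."""
    return max(0.0, 180.0 - (x[1] + 2.0 * x[2]))

def fitness(x):
    return mass(x) + 1e3 * constraint_violation(x)  # penalized objective

def tournament(pop, fit, k=3):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmin(fit[idx])]]

def evolve(pop_size=50, generations=200, p_mut=0.1, elite=2):
    pop = rng.uniform(LOWER, UPPER, size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([fitness(x) for x in pop])
        order = np.argsort(fit)
        new_pop = [pop[i].copy() for i in order[:elite]]   # elitism
        while len(new_pop) < pop_size:
            a, b = tournament(pop, fit), tournament(pop, fit)
            w = rng.random(3)
            child = w * a + (1.0 - w) * b                  # blend crossover
            mutate = rng.random(3) < p_mut                 # per-gene mutation
            child[mutate] += (rng.normal(0, 0.05, 3) * (UPPER - LOWER))[mutate]
            new_pop.append(np.clip(child, LOWER, UPPER))
        pop = np.array(new_pop)
    fit = np.array([fitness(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()

best, best_fit = evolve()
print("best design:", best, "penalized mass:", best_fit)
```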
Abstract:
“Cartographic heritage” is different from “cartographic history”. The second term refers to the study of the development of surveying and drawing techniques related to maps through time, i.e. through the different types of cultural environment which formed the background for the creation of maps. The first term concerns the whole body of ancient maps, together with these different types of cultural environment, which history has handed down to us and which we perceive as cultural values to be preserved and made available to many users (public, institutions, experts). Unfortunately, ancient maps often suffer preservation problems of their analog support, mostly due to aging. Today, metric recovery in digital form and digital processing of historical cartography allow map heritage to be preserved. Moreover, modern geomatic techniques give us new chances of using historical information which would be unachievable on analog supports. In this PhD thesis, the whole digital workflow for the recovery and elaboration of ancient cartography is reported, with special emphasis on the use of digital tools in the preservation and elaboration of cartographic heritage. The workflow can be divided into three main steps, which reflect the chapter structure of the thesis itself:
• map acquisition: conversion of the ancient map support from analog to digital, by means of high-resolution scanning or 3D surveying (digital photogrammetry or laser scanning techniques); this process must be performed carefully, with special instruments, in order to reduce deformation as much as possible;
• map georeferencing: reproducing in the digital image the native metric content of the map, or even improving it, by selecting a large number of still-existing ground control points; in this way it is possible to understand the projection features of the historical map, as well as to evaluate and represent the degree of deformation induced by the old type of cartographic transformation (which may be unknown to us), by surveying errors, or by support deformation, all errors that are usually too large by modern standards (a small numerical sketch of this step is given after the list of case-study maps below);
• data elaboration and management in a digital environment, by means of modern software tools: vectorization, giving the map a new and more attractive graphic view (for instance, by creating a 3D model), superimposing it on current base maps, comparing it to other maps, and finally inserting it in a GIS or WebGIS environment as a specific layer.
The study is supported by some case histories, each of them interesting from the point of view of at least one step of the digital cartographic elaboration.
The ancient maps taken into account are the following:
• three maps of the Po river delta, made at the end of the XVI century by a famous land surveyor, Ottavio Fabri (sole author of the first map, co-author with Gerolamo Pontara of the second, and co-author with Bonajuto Lorini and others of the third), who wrote a methodological textbook explaining a new topographical instrument, the squadra mobile (mobile square), invented and used by himself; today all three maps are preserved in the State Archive of Venice;
• the Ichnoscenografia of Bologna by Filippo de’ Gnudi, made in 1702 and today preserved in the Archiginnasio Library of Bologna; it is a scenographic view of the city, captured in a bird’s-eye flight, but it also has an ichnographic value, as the author himself declares;
• the map of Bologna by the periti Gregorio Monari and Antonio Laghi, the first map of the city derived from a systematic survey, even though it was made only ten years later (1711–1712) than the map by de’ Gnudi; in this map the scenographic view was abandoned in favor of a more correct representation by means of orthogonal projection; today the map is preserved in the State Archive of Bologna;
• the Gregorian Cadastre of Bologna, made in 1831 and updated until 1927, now preserved in the State Archive of Bologna; it is composed of 140 maps and 12 brogliardi (register volumes).
In particular, the three maps of the Po river delta and the Cadastre were studied with respect to their acquisition procedure. Moreover, the first maps were analyzed from the georeferencing point of view, and the Cadastre was analyzed with respect to a possible GIS insertion. Finally, the Ichnoscenografia was used to illustrate a possible application of digital elaboration, namely 3D modeling. Last but not least, we must not forget that the study of an ancient map should start, whenever possible, from the consultation of the precious original analog document; analysis by means of current digital techniques allows us new research opportunities in a rich and modern multidisciplinary context.
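As a hedged numerical sketch of the georeferencing step described above (a standard least-squares approach, not the thesis code), an affine transformation can be fitted from ground control points linking pixel coordinates in the scanned map to coordinates on a modern reference map; the residuals then quantify the deformation of the old map. All coordinates below are made-up numbers.

```python
import numpy as np

# Hypothetical ground control points: (col, row) pixels in the scanned map
# and (E, N) coordinates of the same features on a modern reference map.
pixels = np.array([[120.0, 340.0], [980.0, 310.0], [150.0, 1020.0], [1010.0, 990.0]])
world  = np.array([[683200.0, 4929500.0], [684900.0, 4929600.0],
                   [683250.0, 4928100.0], [684950.0, 4928200.0]])

# Affine model: [E, N] = A @ [col, row, 1]; solve two least-squares problems.
G = np.column_stack([pixels, np.ones(len(pixels))])
coef_E, *_ = np.linalg.lstsq(G, world[:, 0], rcond=None)
coef_N, *_ = np.linalg.lstsq(G, world[:, 1], rcond=None)

# Residuals at the control points measure the map's residual deformation.
predicted = np.column_stack([G @ coef_E, G @ coef_N])
residuals = np.linalg.norm(predicted - world, axis=1)
print("RMS residual (map units):", np.sqrt((residuals ** 2).mean()))
```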
Abstract:
The present study concerns the acoustical characterisation of Italian historical theatres. It starts from ISO 3382, which provides guidelines for the measurement of a well-established set of room acoustic parameters inside performance spaces. Nevertheless, the peculiarity of Italian historical theatres calls for a more specific approach. The Charter of Ferrara goes in this direction, aiming at qualifying the sound field in this kind of hall, and the present work pursues that path. To understand how the acoustical qualification should be done, the Bonci Theatre in Cesena was taken as a case study. In September 2012 acoustical measurements were carried out in the theatre, recording monaural and binaural impulse responses at each seat in the hall. The values of the time criteria, energy criteria, and psycho-acoustical and spatial criteria were extracted according to ISO 3382. Statistics were performed and a 3D model of the theatre was realised and tuned. Statistical investigations were carried out on the whole set of measurement positions and on carefully chosen reduced subsets; it turned out that these subsets are representative only of the “average” acoustics of the hall. Normality tests were carried out to verify whether EDT, T30 and C80 could be described with some degree of reliability by a theoretical distribution; different results were found, according to the varying assumptions underlying each test. An attempt was then made to relate the numerical results emerging from the statistical analysis to the perceptual sphere. Looking for “acoustically equivalent areas”, relative difference limens were considered as threshold values; no rule of thumb emerged. Finally, the significance of the usual representation through mean values and standard deviations, which may be meaningful for normally distributed data, was investigated.
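As a hedged sketch of how parameters such as EDT and T30 are extracted from a measured impulse response (the standard Schroeder backward-integration method underlying ISO 3382, not the thesis code), the decay curve is obtained by reverse-integrating the squared impulse response and fitting a line over the prescribed dB range; the impulse response below is synthetic.

```python
import numpy as np

fs = 48000
t = np.arange(0, 2 * fs) / fs
rng = np.random.default_rng(1)
# Synthetic impulse response: exponentially decaying noise (T60 of about 1.2 s).
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / 1.2)

# Schroeder backward integration -> energy decay curve in dB.
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

def decay_time(edc_db, fs, lo, hi):
    """Fit a line between the -lo dB and -hi dB points; extrapolate to -60 dB."""
    i0 = np.argmax(edc_db <= -lo)
    i1 = np.argmax(edc_db <= -hi)
    x = np.arange(i0, i1) / fs
    slope, _ = np.polyfit(x, edc_db[i0:i1], 1)
    return -60.0 / slope  # seconds

print("EDT:", decay_time(edc_db, fs, 0.01, 10.0))  # fit from 0 to -10 dB
print("T30:", decay_time(edc_db, fs, 5.0, 35.0))   # fit from -5 to -35 dB
```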
Abstract:
Deep convection over forest fires is one of the most intense forms of atmospheric convection. The extreme cloud dynamics, with high vertical wind speeds (up to 20 m/s) already at cloud base, high water vapour supersaturations (up to 1%), and the fire-induced high number concentrations of aerosol particles (up to 100,000 cm^-3), provide a special setting for aerosol-cloud interactions. A decisive step in the microphysical evolution of a convective cloud is the activation of aerosol particles into cloud droplets. This activation process determines the initial number and size of the cloud droplets and can therefore influence the development of a convective cloud and its precipitation formation. The most important factors determining the initial number and size of the cloud droplets are the size and hygroscopicity of the aerosol particles available at cloud base, as well as the vertical wind speed. To investigate the influence of these factors under pyro-convective conditions, numerical simulations were carried out with a cloud parcel model featuring a detailed spectral description of cloud microphysics. The results can be divided into three regimes depending on the ratio of vertical wind speed to aerosol number concentration (w/N_CN): (1) an aerosol-limited regime (high w/N_CN), (2) an updraft-limited regime (low w/N_CN), and (3) a transitional regime (intermediate w/N_CN). The results show that the variability of the initial cloud droplet number concentration in (pyro-)convective clouds is mainly determined by the variability of the vertical wind speed and of the aerosol concentration. To investigate the microphysical processes within the smoky updraft region of a pyro-convective cloud with detailed spectral microphysics, the parcel model was initialized along a trajectory within the updraft region. This trajectory was computed from three-dimensional simulations of a pyro-convective event with the model ATHAM. It turns out that the cloud droplet number concentration increases with increasing aerosol concentration, while the size of the cloud droplets decreases with increasing aerosol concentration. The reduced broadening of the droplet spectrum agrees with results from measurements and supports the concept of precipitation suppression in heavily polluted clouds. Using the model ATHAM, the dynamical and microphysical processes of pyro-convective clouds were then investigated with two- and three-dimensional simulations, building on a realistic parameterization of aerosol particle activation derived from the results of the activation study. A modern two-moment microphysical scheme was implemented in ATHAM to investigate the influence of the aerosol particle number concentration on the development of idealized pyro-convective clouds in US standard atmospheres for the mid-latitudes and the tropics. The results show that the aerosol particle number concentration influences the formation of rain. For low aerosol concentrations, rapid rain formation occurs mainly through warm microphysical processes; for higher aerosol concentrations, the ice phase is more important for rain formation.
This leads to a delayed onset of precipitation in more polluted atmospheres. Furthermore, it is shown that the composition of the ice-nucleating particles (IN) has a strong influence on the dynamical and microphysical structure of such clouds: with very efficient IN, rain forms earlier. The investigation of the influence of the atmospheric background profile shows only a minor effect of the meteorology on the sensitivity of pyro-convective clouds to the aerosol concentration. Finally, it is shown that the heat emitted by the fire has a pronounced influence on the development and the cloud-top height of pyro-convective clouds. In summary, this dissertation investigated in detail the microphysics of pyro-convective clouds by means of idealized simulations with a cloud parcel model with detailed spectral microphysics and a 3D model with a two-moment scheme. It is shown that the extreme conditions with respect to vertical wind speeds and aerosol concentrations have a pronounced influence on the development of pyro-convective clouds.
Abstract:
Throughout this research, the whole life cycle of a building will be analyzed, with a special focus on the most common issues affecting the construction sector nowadays, such as safety. In fact, the goal is to enhance the management of the entire construction process in order to reduce the risk of accidents. The contemporary trend is to research new tools capable of reducing, or even eliminating, the most common mistakes that usually lead to safety risks. That is one of the main reasons why new technologies and tools have been introduced in the field. The one we will focus on is the so-called BIM: Building Information Modeling. The term BIM refers to a wider and more complex analysis tool than simple 3D modeling software. Through BIM technologies we are able to generate a multi-dimensional 3D model which contains all the information about the project. This innovative approach aims at a better understanding and control of the project by taking the entire life cycle into consideration, resulting in a faster and more sustainable way of managing it. Furthermore, BIM software allows all the information to be shared among the different aspects of the project and among the different participants involved, thus improving cooperation and communication. In addition, BIM software provides smart tools that simulate and visualize the process in advance, thus preventing issues that might not have been taken into consideration during the design process. This leads to higher chances of avoiding risks, delays and cost increases. Using a hospital case study, we will apply this approach to the completion of a safety plan, with a special focus on the construction phase.
Abstract:
GARP (Glycoprotein A Repetitions Predominant) is a surface receptor on regulatory T cells (TRegs) that binds latent TGF-β (Transforming Growth Factor-β). A loss of TReg function results in severe autoimmune diseases such as Immunodysregulation Polyendocrinopathy Enteropathy X-linked syndrome (IPEX), multiple sclerosis (MS), or rheumatoid arthritis (RA). By increasing the activatability of TGF-β, GARP secures the regulatory phenotype of TRegs and inhibits the expansion of autoreactive TH17 cells. This work focused on the regulation of GARP itself. It was shown that GARP is a strictly conserved protein within the jawed vertebrates. Database analyses made clear that it first appears in basal bony fish, together with other components of the adaptive immune response. A 3D model, generated by homology modeling, provided insight into the structure of the receptor and possible intramolecular disulfide bridges. For in vitro experiments, a soluble variant of GARP was constructed by replacing the transmembrane domain with C-terminal meprin α domains. This variant was reliably secreted into the supernatant in eukaryotic cell culture and could be purified chromatographically. With this recombinant GARP, processing experiments were performed using proteases associated with autoimmune pathogenesis. It was found that the serine proteases trypsin, neutrophil elastase, and plasmin, as well as the metalloprotease MMP2, are able to degrade GARP completely. TGF-β-sensitive proliferation assays revealed that the resulting fragments were still able to increase the activatability of TGF-β. Besides degradation by the aforementioned proteases, it was also observed that MMP9 and ovastacin are able to cleave GARP specifically. In this work, ovastacin mRNA was described for the first time outside the oocyte, namely in T cells, and GARP was identified as its second protein substrate besides the zona pellucida protein 2. The N-terminal fragment generated by MMP9 retains the ability to bind TGF-β, but can no longer facilitate the activatability of TGF-β as intact GARP does. This work showed that GARP is regulated by proteolysis, with the resulting fragments having different influences on the activatability of TGF-β. This knowledge forms the basis for further investigations in translational research, with the aim of applying the insights gained into immunomodulation to the therapy of various diseases.
Abstract:
Interest in automatic volume meshing for finite element analysis (FEA) has grown since the appearance of microfocus CT (μCT), whose high resolution allows mechanical behaviour to be assessed with high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications is still an open question. There is a trade-off between smoothing and the quality of the elements in the mesh: excessive smoothing may produce distorted elements and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (with resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, as compared to the reference model, were evaluated. The apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant at 64 μm, due to loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also extend voxel-based meshing to other biomechanical domains where it was not used previously due to lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing oncologists with patient-specific simulations of tumour development in the brain and lungs. For this type of clinical application, such fast, automatic and accurate mesh generation is of great benefit.
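To illustrate the kind of smoothing under discussion, below is a minimal sketch of simple Laplacian smoothing of mesh node positions, one common choice (the paper does not specify its algorithm). Each node is moved toward the centroid of its neighbors; the relaxation factor and iteration count trade smoothness against the element distortion mentioned above.

```python
import numpy as np

def laplacian_smooth(nodes, neighbors, relax=0.5, iterations=10):
    """nodes: (N, 3) array; neighbors: list of index lists, one per node.

    Moves each node a fraction `relax` toward the centroid of its
    neighbors. Over-smoothing (relax or iterations too high) distorts
    elements, which is the accuracy trade-off discussed above.
    """
    nodes = nodes.copy()
    for _ in range(iterations):
        new = nodes.copy()
        for i, nb in enumerate(neighbors):
            if nb:  # skip isolated (or fixed) nodes
                centroid = nodes[nb].mean(axis=0)
                new[i] = (1 - relax) * nodes[i] + relax * centroid
        nodes = new
    return nodes

# Toy example: a jagged polyline of nodes, each linked to its neighbors,
# standing in for the jagged surface of a one-hexahedron-per-voxel mesh.
pts = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0], [3, 1, 0], [4, 0, 0]], float)
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(laplacian_smooth(pts, nbrs, relax=0.5, iterations=5))
```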
Abstract:
Repetitive proteins (RPs) of Trypanosoma cruzi are highly abundant in the parasite and are strongly recognized by sera from Chagas disease patients. The Flagellar Repetitive Antigen (FRA), which is expressed in all stages of the parasite life cycle, is the RP with the greatest number of amino acids per repeat and has been indicated as one of the most suitable candidates for diagnostic tests because of its high performance in immunoassays. Here we analyzed the influence of the number of repeats on the immunogenic and antigenic properties of the antigen. Recombinant proteins containing one, two, and four tandem repeats of FRA (FRA1, FRA2, and FRA4, respectively) were obtained, and the immune response induced by an equal amount of repeats was evaluated in a mouse model. The reactivity of specific antibodies present in sera from patients naturally infected with T. cruzi was also assessed against the FRA1, FRA2, and FRA4 proteins, and the relative avidity was analyzed. We determined that the number of repeats did not increase the humoral response against the antigen, and this result was reproduced whether the repeated motifs stood alone or were fused to a non-repetitive protein. By contrast, the binding affinity of specific human antibodies increases with the number of repeated motifs in the FRA antigen. We therefore concluded that the high ability of FRA to be recognized by specific antibodies from infected individuals is mainly due to a favorable polyvalent interaction between the antigen and the antibodies. In accordance with the experimental results, a 3D model was proposed and B-cell epitopes in FRA1, FRA2, and FRA4 were predicted.
Abstract:
Information management is a key aspect of successful construction projects. Inaccurate measurements and conflicting data can lead to costly mistakes, and vague quantities can ruin estimates and schedules. Building information modeling (BIM) augments a 3D model with a wide variety of information, which reduces many sources of error and can detect conflicts before they occur. Because new technology is often more complex, it can be difficult to integrate it effectively with existing business practices. In this paper, we answer two questions: how can BIM add value to construction projects, and what lessons can be learned from other companies that use BIM or similar technology? Previous research focused on the technology as if it were simply a tool, observing problems that occurred while integrating new technology into existing practices. Our research instead looks at the flow of information through a company and its network, seeing all the actors as part of an ecosystem. Building upon this idea, we propose the metaphor of an information supply chain to illustrate how BIM can add value to a construction project. The paper concludes with two case studies. The first illustrates a failure in the flow of information that could have been prevented by using BIM. The second profiles a leading design firm that has used BIM products for many years and shows the real benefits of using this technology.
Abstract:
The scaphoid is one of the eight carpal bones; it lies adjacent to the thumb and is supported proximally by the radius. During a fall onto an outstretched hand, the impact load is transferred to the scaphoid at its free anterior end. The unique arrangement of the other carpal bones in the palm is also one of the reasons why the load is transferred to the scaphoid: about half of the total load acting upon the carpal bones is transferred to the scaphoid at its distal pole. There are about 10 to 12 clinically observed fracture patterns in the scaphoid due to free fall. The aim of this study is to determine the orientation of the load, the magnitude of the load, and the corresponding fracture pattern. The study includes both static and dynamic finite element models validated by experiments. The scaphoid model was prepared from CT scans of a 27-year-old subject; the 2D slices of the CT scans were converted to a 3D model using the MIMICS software. Four loading cases, considered to occur most frequently in clinical practice, were studied. In case (i) the load is applied at the posterior end at the distal pole, whereas in cases (ii), (iii) and (iv) the load is applied at the anterior end in different directions. The model is given a fixed boundary condition at the region supported by the radius during the impact. The same loading and boundary conditions were used in both the static and the dynamic explicit finite element analyses. The site of fracture initiation and the path of fracture propagation were identified using the maximum principal stress/gradient criterion and the maximum principal strain/gradient criterion in the static and dynamic explicit finite element analyses, respectively. Static and dynamic impact experiments were performed on polyurethane foam specimens to validate the finite element results. Experimental results such as the load at fracture, the site of fracture initiation and the path of fracture propagation were compared with the results of the finite element analysis. Four different types of fracture patterns observed in clinical studies were identified in this study.
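As a hedged illustration of the maximum-principal-stress criterion mentioned above (a standard post-processing step, not the authors' code), the principal stresses at a material point are the eigenvalues of the symmetric Cauchy stress tensor, and fracture initiation can be flagged where the largest eigenvalue exceeds the material strength. The numbers below are illustrative.

```python
import numpy as np

# Illustrative Cauchy stress tensor at an element integration point (MPa).
sigma = np.array([[40.0, 12.0,  0.0],
                  [12.0, 25.0,  5.0],
                  [ 0.0,  5.0, 10.0]])

# Principal stresses are the eigenvalues of the symmetric stress tensor.
principal = np.linalg.eigvalsh(sigma)   # returned in ascending order
sigma_1 = principal[-1]                 # maximum principal stress

TENSILE_STRENGTH = 45.0                 # hypothetical bone strength (MPa)
print(f"sigma_1 = {sigma_1:.1f} MPa; fracture initiation flagged:",
      sigma_1 > TENSILE_STRENGTH)
```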
Abstract:
Obesity is becoming an epidemic phenomenon in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions. The accuracy of the results largely relies on many uncertain factors such as the user's memory, food knowledge, and portion estimation, and is therefore often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed by both the general population and the research community. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals: (1) to develop a prototype system with existing methods in order to review the methods in the literature, find their drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve the recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases, to facilitate the development of new food image recognition and retrieval algorithms; (4) to develop novel, convenient and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase the accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from the cameras and IMUs were explored to infer the metric scale of the 3D model as well as to reduce noise from these sensors.
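As a hedged sketch of the credit card marker idea described above (the geometry is standard; the thesis implementation details are not given), an object of known physical size fixes the metric scale of the scene: millimetres per pixel at the marker's depth follow from the card's standard 85.60 mm width, and lengths at a similar depth can then be converted from pixels to millimetres. All measured values are illustrative.

```python
# Metric scale from a reference object of known size (illustrative numbers).
CARD_WIDTH_MM = 85.60          # ISO/IEC 7810 ID-1 credit card width

card_width_px = 412.0          # measured width of the card in the image
mm_per_px = CARD_WIDTH_MM / card_width_px

# Any length measured at (approximately) the same depth as the card:
food_diameter_px = 960.0
food_diameter_mm = food_diameter_px * mm_per_px
print(f"scale: {mm_per_px:.3f} mm/px; food diameter ~ {food_diameter_mm:.1f} mm")
```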