Abstract:
Pesticide application has been described by many researchers as a very inefficient process. In some cases, it has been reported that only 0.02% of the applied product contributes to effective control of the target problem. The main factor influencing pesticide application is the droplet size formed at the spray nozzle. Many parameters affect droplet dynamics, such as wind, temperature, and relative humidity. Small droplets are biologically more active, but they are affected by evaporation and drift; large droplets, on the other hand, do not promote a good distribution of the product over the target. Given the risk of contaminating non-target areas and the high costs involved in application, knowledge of the droplet size is of fundamental importance in application technology. When sophisticated droplet-analysis technology is unavailable, it is common to sample droplets with artificial targets such as water-sensitive paper. In field sampling, water-sensitive papers are placed on the plots where the product will be applied. When droplets impinge on the paper, its yellow surface is stained dark blue, making the droplets easy to recognize. The droplets collected on these papers span a range of sizes, so determining the droplet size distribution yields the mass distribution of the material and, hence, the efficiency of the application. The stains produced by the droplets exhibit a spread factor proportional to their respective initial sizes. One methodology for analysing the droplets is to count and measure them under a microscope, with the Porton N-G12 graticule, whose equally spaced class intervals follow a geometric progression of ratio square root of 2, coupled to the microscope lens. The droplet size parameters most frequently used are the Volume Median Diameter (VMD) and the Number Median Diameter (NMD). The VMD divides a representative droplet sample into two equal parts by volume, such that one part contains droplets smaller than the VMD and the other contains droplets larger than the VMD. The same process is used to obtain the NMD, which divides the sample into two equal parts with respect to the number of droplets. The ratio between VMD and NMD allows the uniformity of the droplets to be evaluated. Graphs of the accumulated probability of droplet volume and number are then plotted on log-scale paper (accumulated probability versus the median diameter of each size class); the graph yields the NMD as the x-axis value corresponding to 50% on the y-axis, and likewise for the VMD. This whole process is very slow and subject to operator error. Therefore, to reduce the difficulty involved in measuring droplets, a numerical model was developed, implemented in a simple and accessible computational language, which yields approximate VMD and NMD values with good precision. The inputs to the model are the frequencies of the droplet sizes collected on the water-sensitive paper, observed through the Porton N-G12 graticule fitted to the microscope. From these data, the accumulated distributions of droplet volume and size are evaluated. The graphs obtained by plotting these distributions allow the VMD and NMD to be obtained by linear interpolation, since the curves are approximately linear in the middle of the distributions. These values are essential to evaluate droplet uniformity and to estimate the volume deposited on the observed paper from the droplet density (droplets/cm²).
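The abstract does not give the implementation, but the interpolation step it describes can be sketched as follows: a minimal Python example, assuming size-class midpoint diameters and per-class droplet counts as inputs, that builds the cumulative number and volume distributions and reads off the 50% crossing by linear interpolation. All numbers are hypothetical.

```python
import numpy as np

def median_diameter(class_mid_um, counts, weight="volume"):
    """Estimate VMD or NMD by linear interpolation at the 50% point
    of the cumulative distribution, as in the manual graphical method."""
    d = np.asarray(class_mid_um, dtype=float)   # class midpoint diameters (micrometres)
    n = np.asarray(counts, dtype=float)         # droplet counts per class
    w = n * d**3 if weight == "volume" else n   # per-droplet volume scales with d^3
    cum = np.cumsum(w) / w.sum() * 100.0        # cumulative percentage
    return float(np.interp(50.0, cum, d))       # diameter at the 50% crossing

# Hypothetical class data in a root-2 progression (Porton-style classes)
mids = [20, 28, 40, 57, 80, 113, 160, 226, 320]
counts = [5, 18, 42, 77, 96, 64, 31, 9, 2]
vmd = median_diameter(mids, counts, "volume")
nmd = median_diameter(mids, counts, "number")
print(f"VMD ≈ {vmd:.1f} µm, NMD ≈ {nmd:.1f} µm, VMD/NMD ≈ {vmd/nmd:.2f}")
```

Weighting each class by the cube of its diameter reflects that a droplet's volume grows with d³, which is why VMD and NMD differ for non-uniform sprays and why their ratio serves as a uniformity index.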
This methodology for estimating droplet volume was developed within Project 11.0.94.224 of CNPMA/EMBRAPA. Observed data from aerial herbicide spraying samples, collected by the Project in the county of Pelotas/RS, were used to compare the values obtained with the manual graphical method against those obtained with the model. The model reproduced, with great precision, the VMD and NMD values for each sampled collector, allowing the quantity of deposited product, and consequently the quantity lost to drift, to be estimated. The variability graphs of VMD and NMD showed that the number of droplets reaching the collectors had a small dispersion, while the deposited volume varied over a wide interval, probably because of the strong action of air turbulence on the droplet distribution, emphasizing the need for a deeper study to verify this influence on drift.
Abstract:
Atrial fibrillation is associated with a five-fold increase in the risk of cerebrovascular events, being responsible for 15-18% of all strokes. The morphological and functional remodelling of the left atrium caused by atrial fibrillation favours blood stasis and, consequently, stroke risk. In this context, several clinical studies suggest that stroke risk stratification could be improved by using haemodynamic information on the left atrium (LA) and the left atrial appendage (LAA). The goal of this study was to develop a personalized computational fluid-dynamics (CFD) model of the left atrium which could clarify the haemodynamic implications of atrial fibrillation on a patient-specific basis. The developed CFD model was first applied to better understand the role of the LAA in stroke risk. In fact, the interplay of LAA geometric parameters such as LAA length, tortuosity, surface area, and volume with the fluid-dynamics parameters, and the effects of LAA closure, had not been investigated. Results demonstrated the capability of the CFD model to reproduce the real physiological behaviour of blood flow dynamics inside the LA and the LAA. Finally, we determined that the fluid-dynamics parameters introduced in this research project could be used as new quantitative indexes to describe the different types of AF, opening new scenarios for patient-specific stroke risk stratification.
Abstract:
This project aims at deepening the understanding of the molecular basis of the phenotypic heterogeneity of prion diseases. Prion diseases represent the first and clearest example of "protein misfolding diseases", that is, the neurodegenerative diseases caused by the accumulation of misfolded proteins in the central nervous system. In the field of protein misfolding diseases, the term "strain" describes the heterogeneity observed within the same disease in clinical and pathological progression, biochemical features of the aggregated protein, conformational memory, and pattern of lesions. In this work, the two most common strains of Creutzfeldt-Jakob Disease (CJD), named MM1 and VV2, were analyzed. This thesis investigates the strain paradigm through the production of new multi-omic data, on which appropriate computational analyses combining bioinformatics, data science, and statistical approaches were performed. Genomic and transcriptomic profiling allowed an improved characterization of the molecular features of the two most common strains of CJD, identifying multiple possible genetic contributors to the disease and finding several impaired pathways shared between the VV2 strain and Parkinson's disease. On the epigenomic level, the three-dimensional chromatin folding in peripheral immune cells of CJD patients at onset and of healthy controls was investigated with Hi-C. As the first application of this advanced technology to prion diseases, and one of the first in neurobiology in general, this work found a significant and diffuse loss of genomic interactions in immune cells of CJD patients at disease onset, particularly at the PRNP locus, suggesting a possible impairment of chromatin conformation in the disease. The results of this project represent a novelty in the state of the art of this field, from both a biomedical and a technological point of view.
Abstract:
The weight-transfer effect, consisting of the change in dynamic load distribution between the front and the rear tractor axles, is one of the phenomena that most impairs the performance, comfort, and safety of agricultural operations. Excessive weight transfer from the front to the rear tractor axle can occur during operation or maneuvering of implements connected to the tractor through the three-point hitch (TPH). In this respect, an optimal design of the TPH can ensure better dynamic load distribution and ultimately improve operational performance, comfort, and safety. In this study, a computational design tool (the Optimizer) for determining a TPH geometry that minimizes the weight-transfer effect is developed. The Optimizer is based on a constrained minimization algorithm. The objective function to be minimized is related to the tractor front-to-rear axle load transfer during a simulated reference maneuver performed with a reference implement on a reference soil. Simulations are based on a 3-degree-of-freedom (3-DOF) dynamic model of the tractor-TPH-implement aggregate. The inertial, elastic, and viscous parameters of the dynamic model were successfully determined through a parameter identification algorithm. The geometry determined by the Optimizer complies with the ISO 730 standard functional requirements and other design requirements. The interaction between the soil and the implement during the simulated reference maneuver was successfully validated against experimental data. Simulation results show that the adopted reference maneuver is effective in triggering the weight-transfer effect, with the front axle load exhibiting a peak-to-peak value of 27.1 kN during the maneuver. A benchmark test was conducted starting from four geometries of a commercially available TPH. As a result, all configurations were improved by more than 10%; after 36 iterations, the Optimizer found an optimized TPH geometry that reduces the weight-transfer effect by 14.9%.
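As a sketch of how such a constrained minimization might be wired up (the thesis' actual objective function, decision variables, and 3-DOF model are not given in the abstract), the following Python fragment minimizes a placeholder peak-to-peak front-axle load over two hypothetical hitch-point coordinates using SciPy; simulate_front_axle_load and the bounds are invented stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_front_axle_load(x):
    # Placeholder for the 3-DOF simulation of the reference maneuver:
    # returns a front-axle load history (kN) as a function of geometry x.
    t = np.linspace(0.0, 5.0, 500)
    return 30.0 + x[0] * np.sin(2 * np.pi * t) + x[1] * np.cos(2 * np.pi * t)

def weight_transfer(x):
    load = simulate_front_axle_load(x)
    return load.max() - load.min()  # peak-to-peak front-axle load (kN)

# Geometric bounds emulating ISO 730-type functional constraints (illustrative).
bounds = [(-2.0, 2.0), (-1.5, 1.5)]
res = minimize(weight_transfer, x0=[1.0, -0.5], bounds=bounds, method="L-BFGS-B")
print(f"optimized geometry: {res.x}, peak-to-peak load: {res.fun:.2f} kN")
```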
Abstract:
The study of ancient, undeciphered scripts presents unique challenges that depend both on the nature of the problem and on the peculiarities of each writing system. In this thesis, I present two computational approaches that are tailored to two different tasks and writing systems. The first method is aimed at the decipherment of the Linear A fraction signs, in order to discover their numerical values. This is achieved with a combination of constraint programming, ad-hoc metrics, and paleographic considerations. The second main contribution of this thesis regards the creation of an unsupervised deep learning model which uses drawings of signs from an ancient writing system to learn to distinguish different graphemes in a vector space. This system, based on techniques from the field of computer vision, is adapted to the study of ancient writing systems by incorporating information about sign sequences into the model, mirroring what is often done in natural language processing. To develop this model, the Cypriot Greek Syllabary is used as a target, since it is a deciphered writing system. Finally, the unsupervised model is adapted to the undeciphered Cypro-Minoan and used to answer open questions about this script. In particular, by reconstructing multiple allographs whose identity is not agreed upon by paleographers, it supports the idea that Cypro-Minoan is a single script and not a collection of three scripts, as had been proposed in the literature. These results on two different tasks show that computational methods can be applied to undeciphered scripts despite the relatively small amount of available data, paving the way for further advances in paleography using these methods.
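The constraint-programming component can be illustrated with a toy search: assign unit-fraction values to fraction signs so that every attested sign combination sums to one whole unit. The signs, candidate values, and constraints below are invented for illustration; the thesis combines real attestations with ad-hoc metrics and paleographic considerations.

```python
from itertools import permutations
from fractions import Fraction

signs = ["A", "B", "C"]
candidates = [Fraction(1, d) for d in (2, 3, 4, 6, 8, 12)]
# Each constraint: a combination of signs whose values should sum to 1 unit.
constraints = [("A", "A"), ("B", "B", "B"), ("A", "B", "C", "C")]

solutions = []
for values in permutations(candidates, len(signs)):
    assignment = dict(zip(signs, values))
    if all(sum(assignment[s] for s in combo) == 1 for combo in constraints):
        solutions.append(assignment)

for sol in solutions:
    print({s: str(v) for s, v in sol.items()})  # e.g. {'A': '1/2', 'B': '1/3', 'C': '1/12'}
```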
Abstract:
Using Computational Wind Engineering (CWE) to solve wind-related problems is still a challenging task today, mainly due to the high computational cost required to obtain trustworthy simulations. In particular, Large Eddy Simulation (LES) has been widely used for evaluating wind loads on buildings. The present thesis assesses the capability of LES as a design tool for wind-loading predictions through three cases. The first case uses LES to simulate the wind field around a ground-mounted rectangular prism in Atmospheric Boundary Layer (ABL) flow. The numerical results are validated against experimental results for seven wind attack angles, giving a global understanding of the model performance. The case with the worst model behaviour is investigated further, including the spatial distribution of the pressure coefficients and their discrepancies with respect to the experimental results. The effects of some numerical parameters are investigated for this case to understand their effectiveness in modifying the numerical results. The second case uses LES to investigate the wind effects on a real high-rise building, aiming at validating the performance of LES as a design tool in practical applications. The numerical results are validated against the experimental results in terms of the distribution of the pressure statistics and the global forces. The mesh sensitivity and the computational cost are discussed. The third case uses LES to study the wind effects on the new large-span roof over the Bologna stadium. The dynamic responses are analyzed and design envelopes for the structure are obtained. Although this simulation was performed before the traditional wind tunnel tests, i.e. the numerical results are not validated, the preliminary evaluations can effectively inform later investigations and give the final design process deeper confidence regarding the absence of potentially unexpected behaviours.
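For reference, the pressure coefficients validated in the first two cases are conventionally defined as below (a standard definition, not stated in the abstract), where p is the local static pressure, p_infty the reference static pressure, rho the air density, and U_ref a reference wind speed, e.g. at building height:

```latex
C_p = \frac{p - p_\infty}{\tfrac{1}{2}\,\rho\,U_{\mathrm{ref}}^{2}}
```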
Abstract:
Cardiomyocytes are very complex, consisting of many interlinked non-linear regulatory mechanisms between electrical excitation and mechanical contraction. Given such an integrated electromechanically coupled system, it becomes hard to isolate the individual contributions of cardiac electrics and mechanics under both physiological and pathological conditions. Hence, to identify causal relationships or to predict the responses of the integrated system, computational modeling can be beneficial. Computational modeling is a powerful tool that provides complete control of parameters along with visibility of all the individual components of the integrated system. The advancement of computational power has made it possible to simulate such models in a short timeframe, increasing the predictive power of the integrated system. My doctoral thesis is focused on the development of an electromechanically integrated human atrial cardiomyocyte model with proper consideration of the feedforward and feedback pathways.
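As a cartoon of the feedforward and feedback pathways mentioned above (not the thesis model, whose equations are not given in the abstract), the following toy system couples a FitzHugh-Nagumo-style excitation to a contraction variable, with a small mechano-electric feedback current; every parameter is illustrative.

```python
from scipy.integrate import solve_ivp

def rhs(t, y):
    v, w, c = y
    stim = 1.0 if (t % 300.0) < 5.0 else 0.0      # periodic stimulus current
    dv = v - v**3 / 3 - w + stim - 0.05 * c        # feedback: contraction -> voltage
    dw = 0.08 * (v + 0.7 - 0.8 * w)                # recovery variable
    dc = 0.02 * (max(v, 0.0) - c)                  # feedforward: voltage -> contraction
    return [dv, dw, dc]

sol = solve_ivp(rhs, (0.0, 1000.0), [-1.2, -0.6, 0.0], max_step=0.5)
print("peak contraction:", sol.y[2].max())
```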
Abstract:
Ground deformation provides valuable insights into subsurface processes, with patterns reflecting the characteristics of the source at depth. At active volcanic sites, displacements can be observed during unrest phases; therefore, a correct interpretation is essential to assess the hazard potential. Inverse modeling is employed to obtain quantitative estimates of the parameters describing the source. However, despite the robustness of the available approaches, a realistic imaging of these reservoirs is still challenging. While analytical models return quick but simplistic results, assuming an isotropic and elastic crust, more sophisticated numerical models, accounting for the effects of topographic loads, crust inelasticity, and structural discontinuities, require much higher computational effort, and information about the crust rheology may be challenging to infer. All these approaches are based on a-priori source shape constraints, which influence the reliability of the solution. In this thesis, we present a new approach aimed at overcoming the aforementioned limitations, modeling sources free of a-priori shape constraints with the advantages of FEM simulations, but with a cost-efficient procedure. The source is represented as an assembly of elementary units, consisting of cubic elements of a regular FE mesh loaded with a unitary stress tensor. The surface response due to each of the six stress tensor components is computed and linearly combined to obtain the total displacement field. In this way, the source can assume potentially any shape. Our tests prove the equivalence of the deformation field due to our assembly and that of corresponding cavities with uniform boundary pressure. The ability to simulate pressurized cavities in a continuum domain permits pre-computing the surface responses, avoiding remeshing. A Bayesian trans-dimensional inversion algorithm implementing this strategy is developed. 3D Voronoi cells are used to sample the model domain, selecting the elementary units contributing to the source solution and those remaining inactive as part of the crust.
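The superposition step lends itself to a compact sketch: assuming the six per-component surface responses of each elementary unit have been pre-computed and stored (the array contents below are random stand-ins for FEM results), the total displacement field is a linear combination over the active units.

```python
import numpy as np

# G[i, c, :] = displacement at the observation points due to elementary
# unit i loaded with a unit value of stress component c (six components).
n_units, n_components, n_obs = 4, 6, 100
rng = np.random.default_rng(0)
G = rng.normal(size=(n_units, n_components, n_obs))  # precomputed unit responses

active = np.array([0, 2])               # units selected (e.g. by Voronoi sampling)
sigma = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # isotropic, pressure-like loading

# Total surface displacement: linear combination over active units and components.
u_total = np.einsum("c,ico->o", sigma, G[active])
print(u_total.shape)  # (100,)
```

Because the unit responses never change, the inversion only re-evaluates this linear combination when the set of active units changes, which is what makes the trans-dimensional sampling affordable without remeshing.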
Abstract:
Osteoporosis is one of the major causes of mortality among the elderly. Nowadays, areal bone mineral density (aBMD) is used as the diagnostic criterion for osteoporosis; however, it is only a moderate predictor of femur fracture risk and does not capture the effect of some anatomical and physiological properties on the bone strength estimation. Data from past research suggest that most fragility femur fractures occur in patients with aBMD values outside the pathological range. Subject-specific finite element (FE) models derived from computed tomography data are considered better tools to non-invasively assess hip fracture risk. In particular, the Bologna Biomechanical Computed Tomography (BBCT) is an in silico methodology that uses a subject-specific FE model to predict bone strength. Different studies demonstrated that the modeling pipeline can increase the predictive accuracy of osteoporosis detection and assess the efficacy of new antiresorptive drugs. However, one critical aspect that must be properly addressed before using the technology in clinical practice is the assessment of the model credibility. The aim of this study was to define and perform verification and uncertainty quantification analyses on the BBCT methodology following the risk-based credibility assessment framework recently proposed in the ASME V&V 40 standard. The analyses focused on the main verification tests used in computational solid mechanics: force and moment equilibrium checks, mesh convergence analyses, mesh quality metrics, and evaluation of the uncertainties associated with the definition of the boundary conditions and the material properties mapping. The results of these analyses showed that the FE model is correctly implemented and solved. The step that most affects the model results is the material properties mapping. This work represents an important step that, together with the ongoing clinical validation activities, will contribute to demonstrating the credibility of the BBCT methodology.
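As an illustration of one of the verification tests listed above, a mesh convergence check can be reduced to comparing the predicted strength across successive refinements against a tolerance; the mesh sizes and strength values below are invented.

```python
# Hypothetical FE-predicted bone strength on successively refined meshes.
mesh_sizes_mm = [3.0, 2.0, 1.0, 0.5]   # characteristic element size
strength_kN = [9.1, 9.6, 9.85, 9.9]    # predicted strength per mesh

tol = 0.02  # accept < 2% change between successive refinements
for h_coarse, h_fine, s_coarse, s_fine in zip(
        mesh_sizes_mm, mesh_sizes_mm[1:], strength_kN, strength_kN[1:]):
    rel = abs(s_fine - s_coarse) / abs(s_fine)
    status = "converged" if rel < tol else "refine further"
    print(f"h {h_coarse} -> {h_fine} mm: change {rel:.1%} ({status})")
```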
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner functioning is still lacking. In this work, we try to understand the behavior of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training occupies a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited in a training loop which eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be lightened and, more importantly, this would be done by following a physical model. In any case, besides the important practical applications, our analysis shows that a new and improved modeling of Deep Learning systems can pave the way to new and more efficient algorithms.
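The proposed pruning loop could look roughly like the following PyTorch sketch; note that the work defines temperature from the training dynamics, whereas here the per-filter weight variance is used purely as a stand-in score.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

with torch.no_grad():
    # One "temperature" score per output filter (stand-in proxy, not the
    # thesis' definition): variance of the weights within each filter.
    temperature = conv.weight.var(dim=(1, 2, 3))
    n_prune = 4
    hottest = torch.topk(temperature, n_prune).indices  # highest-scoring filters
    conv.weight[hottest] = 0.0                          # prune by zeroing
    if conv.bias is not None:
        conv.bias[hottest] = 0.0

print("pruned filters:", hottest.tolist())
```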
Abstract:
When it comes to designing a structure, architects and engineers want to join forces in order to create and build the most beautiful and efficient building. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between the two professions. In architecture, there has always been a particular interest in creating new shapes and types of structure, inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking about and designing structures. Through many studies, this mindset led to the current best practices in construction. However, at a certain point both disciplines were limited by traditional manufacturing constraints. Over the last decades, much progress has been made from a technological point of view, allowing designers to go beyond today's manufacturing constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Together, these technologies allow unusual and complex structural shapes to be explored and built, in addition to reducing costs and environmental impacts. In this study, the author makes use of the aforementioned technologies and assesses their potential, first to design an aesthetically pleasing tree-like column, and second to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalog of the latter and methods to establish it are then proposed and discussed. Finally, the buckling behavior of this cross-section is assessed considering standard steel and WAAM material properties.
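The buckling assessment mentioned last rests on the classical Euler critical load; a back-of-the-envelope check with invented section properties (the thesis evaluates the actual sandwich cross-section) might look like this:

```python
import math

E = 210e9   # Young's modulus of standard steel (Pa)
I = 8.0e-6  # second moment of area of a hypothetical section (m^4)
L = 4.0     # column length (m)
K = 1.0     # effective-length factor (pinned-pinned)

P_cr = math.pi**2 * E * I / (K * L) ** 2   # Euler critical load (N)
print(f"Euler critical load: {P_cr / 1e3:.0f} kN")

# WAAM material typically shows reduced, more scattered stiffness; a simple
# sensitivity check scales E (the 15% factor is an assumption, not a datum).
print(f"with E reduced 15%: {0.85 * P_cr / 1e3:.0f} kN")
```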
Abstract:
Historic vaulted masonry structures often need strengthening interventions that can effectively improve their structural performance, especially during seismic events, while at the same time respecting the existing setting and modern conservation requirements. In this context, the use of innovative materials such as fiber-reinforced composites has been shown to be an effective solution that can satisfy both aspects. This work aims to provide insight into the computational modeling of a full-scale masonry vault strengthened with fiber-reinforced composite materials and to analyze the influence of the arrangement of the reinforcement on the efficiency of the intervention. First, a parametric model of a cross vault focusing on a realistic representation of its micro-geometry is proposed. Then, numerical models simulating pushover analyses of several barrel vaults reinforced with different configurations are run. Finally, the results are collected and discussed in terms of the force-displacement curves obtained for each proposed configuration.
Abstract:
Understanding the molecular mechanisms of oral carcinogenesis will yield important advances in the diagnostics, prognostics, effective treatment, and outcome of oral cancer. Hence, in this study we investigated the proteomic and peptidomic profiles by combining an orthotopic murine model of oral squamous cell carcinoma (OSCC), mass spectrometry-based proteomics, and biological network analysis. Our results indicated the up-regulation of proteins involved in actin cytoskeleton organization and cell-cell junction assembly, and their expression was validated in human OSCC tissues. In addition, the functional relevance of talin-1 in OSCC adhesion, migration, and invasion was demonstrated. Taken together, this study identified specific processes deregulated in oral cancer and provided novel refined OSCC-targeting molecules.
Abstract:
Two single-crystalline surfaces of Au vicinal to the (111) plane were modified with Pt and studied using scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) in an ultra-high vacuum environment. The vicinal surfaces studied are Au(332) and Au(887), and different Pt coverages (θPt) were deposited on each surface. From the STM images we determine that Pt deposits on both surfaces as nanoislands with heights ranging from 1 ML to 3 ML depending on θPt. On both surfaces, the early growth of Pt ad-islands occurs at the lower part of the step edge, with Pt ad-atoms being incorporated into the steps in some cases. XPS results indicate that partial alloying of Pt occurs at the interface at room temperature and at all coverages, as suggested by the negative chemical shift of the Pt 4f core line, indicating an upward shift of the d-band center of the alloyed Pt. The existence of a segregated Pt phase, especially at higher coverages, is also detected by XPS. Sample annealing indicates that the temperature rise promotes further incorporation of Pt atoms into the Au substrate, as supported by the STM and XPS results. Additionally, the catalytic activity of different PtAu systems reported in the literature for some electrochemical reactions is discussed in light of our findings.
Abstract:
Congenital diaphragmatic hernia (CDH) is associated with pulmonary hypertension, which is often difficult to manage and a significant cause of morbidity and mortality. In this study, we used a rabbit model of CDH to evaluate the effects of BAY 60-2770 on the in vitro reactivity of the left pulmonary artery. CDH was induced in New Zealand rabbit fetuses (n = 10 per group) and compared to controls. Measurements of body, total lung, and left lung weights (BW, TLW, LLW) were taken. Pulmonary artery rings were pre-contracted with phenylephrine (10 μM), after which cumulative concentration-response curves to glyceryl trinitrate (GTN; an NO donor), tadalafil (a PDE5 inhibitor), and BAY 60-2770 (an sGC activator) were obtained, as were the levels of NO (NO3/NO2). LLW, TLW, and LBR were decreased in CDH (p < 0.05). In the left pulmonary artery, the potency (pEC50) of GTN was markedly lower in CDH (8.25 ± 0.02 versus 9.27 ± 0.03; p < 0.01). In contrast, the potency of BAY 60-2770 was markedly greater in CDH (11.7 ± 0.03 versus 10.5 ± 0.06; p < 0.01). The NO2/NO3 levels were 62% higher in CDH (p < 0.05). BAY 60-2770 exhibits a greater potency to relax the pulmonary artery in CDH, indicating its potential use for pulmonary hypertension in this disease.
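A pEC50 such as those reported above is typically estimated by fitting a four-parameter logistic (Hill) curve to the cumulative concentration-response data; the sketch below, with invented data points, shows the standard procedure (the study's exact fitting routine is not stated).

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, bottom, top, pec50, slope):
    # Four-parameter logistic; log10(EC50) = -pEC50.
    return bottom + (top - bottom) / (1 + 10 ** ((-pec50 - log_conc) * slope))

log_conc = np.arange(-11.0, -5.0, 1.0)          # log10 molar concentrations
relaxation = np.array([2, 8, 30, 68, 92, 98.0]) # % relaxation (illustrative)

popt, _ = curve_fit(hill, log_conc, relaxation, p0=[0, 100, 8, 1])
print(f"fitted pEC50 ≈ {popt[2]:.2f}")
```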