27 results for Dimensional analysis.

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 60.00%

Abstract:

This doctoral thesis studies blood flow by means of a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (concentric with, or offset from, the axis of symmetry) or a stenosis of varying shape and extent. The arteries are rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, in both steady and transient regimes, and blood is modelled as a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains; in the latter case the fluid-structure interaction is analysed. In the three-dimensional cases, the arteries (fluid-dynamic simulations) are infinitely rigid: once the pressure field has been obtained, a structural analysis is performed to determine the changes in cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries a pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle in order to capture the recirculation zones. Two appendices are attached to the thesis; they use CFD codes to study heat transfer in microchannels and the evaporation of water droplets in unconfined systems. Fluid dynamics in microchannels is analogous to haemodynamics in capillaries, and the Eulerian-Lagrangian method (evaporation simulations) represents the mixed nature of blood. The part concerning microchannels analyses the transient following the application of a time-varying heat flux, varying the inlet velocity and the microchannel dimensions.
The investigation of droplet evaporation is a 3D parametric analysis that examines the weight of each individual parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) to identify the one that most influences the phenomenon.
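The Casson constitutive law used for blood in this thesis can be sketched numerically. The minimal Python function below (parameter values are illustrative textbook numbers for blood, not the thesis' Gonzales & Moraga coefficients) returns the apparent viscosity of a Casson fluid as a function of shear rate:

```python
import math

def casson_viscosity(shear_rate, tau_y=0.005, mu_p=0.0035):
    """Apparent viscosity (Pa*s) of a Casson fluid.

    The Casson law sqrt(tau) = sqrt(tau_y) + sqrt(mu_p * gamma_dot) gives
    mu_eff = tau / gamma_dot = (sqrt(tau_y / gamma_dot) + sqrt(mu_p))**2.
    tau_y: yield stress (Pa); mu_p: plastic viscosity (Pa*s).
    Values are illustrative, not the thesis' modified formulation.
    """
    if shear_rate <= 0:
        raise ValueError("shear rate must be positive")
    return (math.sqrt(tau_y / shear_rate) + math.sqrt(mu_p)) ** 2

# Shear-thinning behaviour: viscosity drops as the shear rate grows,
# approaching the Newtonian plastic viscosity mu_p at high shear rates.
mu_low = casson_viscosity(1.0)
mu_high = casson_viscosity(1000.0)
```

At high shear rates the apparent viscosity tends to the plastic viscosity, which is why Casson blood behaves almost Newtonian in large vessels while showing a strong non-Newtonian character at low shear.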

Relevance: 60.00%

Abstract:

Fracture mechanics plays an important role in materials science, structural design and industrial production, because the failure of materials and structures receives great attention in human activities. This dissertation concentrates on some fracture aspects of shafts and composites, which are increasingly used in modern structures, and consists of four chapters within two parts; Chapters 1 to 4 are included in Part 1. In the first chapter, the basic knowledge about the stress and displacement fields in the vicinity of a crack tip is introduced, and a review of the general methods for calculating stress intensity factors is presented. In Chapter 2, two simple engineering methods for a fast and close approximation of the stress intensity factors of cracked or notched beams under tension, bending moment, shear force and torque are presented, and new formulae for calculating the stress intensity factors are proposed. One of the methods, named the Section Method, is improved and applied to the three-dimensional analysis of a cracked circular section for calculating stress intensity factors. Comparisons between the present results and the solutions calculated with ABAQUS for single-mode and mixed-mode loading are studied. In Chapter 3, fracture criteria for a crack subjected to mixed-mode loading in two and three dimensions are reviewed. The crack extension angles for single-mode and mixed-mode loading, and the critical loading domains obtained by the SEDF and MTS criteria, are compared. The effects of the crack depth and the applied force ratio on the crack propagation angle and the critical loading are investigated. Three different methods of calculating the crack initiation angle in three-dimensional analyses of various crack depths and crack positions are compared. It should be noted that the stress intensity factors used in the criteria are calculated in Section 2.1.
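As a concrete instance of the kind of closed-form stress intensity factor estimate discussed above, the sketch below evaluates the mode-I factor for a single edge crack in a strip under remote tension, using the classical Brown-Srawley polynomial correction. This is a standard textbook formula, not the Section Method of Chapter 2:

```python
import math

def sif_edge_crack_tension(sigma, a, W):
    """Mode-I stress intensity factor K_I = F(a/W) * sigma * sqrt(pi * a)
    for a single edge crack of depth a in a strip of width W under remote
    tension sigma (Brown-Srawley polynomial, valid for a/W <= 0.6).
    With sigma in MPa and a in m, K_I is in MPa*sqrt(m)."""
    alpha = a / W
    if alpha > 0.6:
        raise ValueError("polynomial correction only valid for a/W <= 0.6")
    F = 1.122 - 0.231 * alpha + 10.55 * alpha**2 - 21.71 * alpha**3 + 30.382 * alpha**4
    return F * sigma * math.sqrt(math.pi * a)

# K_I grows with crack depth at fixed stress.
k_shallow = sif_edge_crack_tension(100.0, 0.001, 10.0)
k_deep = sif_edge_crack_tension(100.0, 0.002, 10.0)
```

For very shallow cracks the geometry factor approaches the classical edge-crack value 1.122, recovering the semi-infinite plate solution.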

Relevance: 60.00%

Abstract:

The gleno-humeral joint (GHJ) is the most mobile joint of the human body. This is related to the incongruence between the large humeral head and the much smaller glenoid with which it articulates (ratio 3:1). GHJ laxity is the ability of the humeral head to be passively translated on the glenoid fossa; when physiological, it guarantees the normal range of motion of the joint. Three-dimensional GHJ linear displacements have been measured, both in vivo and in vitro, by means of different instrumental techniques. In vivo gleno-humeral displacements have been assessed by means of stereophotogrammetry, electromagnetic tracking sensors, and bio-imaging techniques. Both stereophotogrammetric systems and electromagnetic tracking devices, owing to the deformation of the soft tissues surrounding the bones, are not capable of accurately assessing small displacements such as gleno-humeral joint translations. The bio-imaging techniques can ensure an accurate description of joint kinematics (linear and angular displacements), but, owing to the radiation exposure, most of them, such as computed tomography or fluoroscopy, are invasive for patients. Among the bio-imaging techniques, an alternative that can provide an acceptable level of accuracy and is innocuous for patients is magnetic resonance imaging (MRI). Unfortunately, only a few studies have been conducted on three-dimensional analysis, and very limited data are available for situations in which preset loads are applied. The general aim of this doctoral thesis is to develop a non-invasive methodology based on open MRI for the in vivo evaluation of the gleno-humeral translation components in healthy subjects under the application of external loads.

Relevance: 40.00%

Abstract:

Traditionally, poverty has been measured by a single indicator, income, on the assumption that this was the most relevant dimension of poverty. Sen's approach has dramatically changed this idea, shedding light on the existence of many more dimensions and on the multifaceted nature of poverty: poverty cannot be represented by a single indicator that can evaluate only one specific aspect of it. This thesis tracks an ideal path along the evolution of poverty analysis. Starting from the unidimensional analysis based on income and consumption, the research enters the world of multidimensional analysis. After a review of the principal approaches, the Alkire-Foster method is critically analysed and applied to data from Kenya. A step further is taken in the third part of the thesis, which introduces a new approach to multidimensional poverty assessment: resilience analysis.
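The dual-cutoff counting structure of the Alkire-Foster method can be sketched in a few lines. The adjusted headcount ratio M0 is the product of the incidence H (share of people whose weighted deprivation score reaches the poverty cutoff k) and the intensity A (average score among the poor). The data below are invented for illustration:

```python
def alkire_foster_m0(deprivations, weights, k):
    """Adjusted headcount ratio M0 = H * A of the Alkire-Foster
    dual-cutoff method. `deprivations` holds one 0/1 deprivation vector
    per person, `weights` sum to 1 across dimensions, and `k` is the
    poverty cutoff on the weighted deprivation score.
    Illustrative sketch, not the thesis implementation."""
    n = len(deprivations)
    scores = [sum(w * d for w, d in zip(weights, person)) for person in deprivations]
    poor = [s for s in scores if s >= k]          # censoring at the cutoff
    H = len(poor) / n                             # incidence: share of poor
    A = sum(poor) / len(poor) if poor else 0.0    # intensity among the poor
    return H * A

# Three equally weighted dimensions, cutoff k = 1/3:
data = [[1, 1, 0], [0, 0, 1], [1, 1, 1]]
m0 = alkire_foster_m0(data, [1/3, 1/3, 1/3], 1/3)
```

Raising k censors the marginally deprived, which is exactly the identification step that distinguishes this method from a simple headcount.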

Relevance: 30.00%

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of the available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour channels and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of the satellite data for these two events is evaluated in terms of the increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature.
To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing the physical parameterisation schemes inside the model. The improved set-up, applied to the case of 8 July 2004, shows a substantially neutral impact.
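The core update of a linear 1D-Var scheme of the kind described above can be written compactly with the standard optimal-estimation analysis equation. The Python sketch below uses a toy two-level state with one observation; all matrices are invented, not the operational SEVIRI configuration:

```python
import numpy as np

def one_dvar_analysis(xb, B, y, H, R):
    """One linear 1D-Var analysis step:
    xa = xb + B H^T (H B H^T + R)^{-1} (y - H xb),
    where xb is the background state, B the background-error covariance,
    y the observation vector, H the (linearised) observation operator and
    R the observation-error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
    return xb + K @ (y - H @ xb)

# Two-level "temperature profile"; one observation of the first level.
xb = np.array([280.0, 260.0])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
xa = one_dvar_analysis(xb, B, np.array([282.0]), H, R)
```

Note how the off-diagonal element of B spreads the single-level innovation to the unobserved level, which is why flow-dependent covariances (discussed above) matter so much.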

Relevance: 30.00%

Abstract:

This doctoral thesis focuses on the study of individual behaviours as a result of organizational affiliation. The objective is to assess the Entrepreneurial Orientation of individuals, proving the existence of a set of antecedents of that measure and returning a structural model of its micro-foundation. Relying on the developed measurement model, I address the issue of whether some entrepreneurs exhibit different behaviours as a result of their academic affiliation, comparing a sample of 'Academic Entrepreneurs' with a control sample of 'Private Entrepreneurs' affiliated with a matched sample of academic spin-offs and private start-ups. Building on the Theory of Planned Behaviour proposed by Ajzen (1991), I present a model of causal antecedents of Entrepreneurial Orientation based on constructs extensively used and validated, from both a theoretical and an empirical perspective, in sociological and psychological studies. I focus my investigation on five major domains: (a) Situationally Specific Motivation, (b) Personal Traits and Characteristics, (c) Individual Skills, (d) Perception of the Business Environment, and (e) Entrepreneurial Orientation Related Dimensions. I rely on a sample of 200 entrepreneurs, affiliated with a matched sample of 72 academic spin-offs and private start-ups. Firms are matched by industry, year of establishment and localization, and they are all located in the Emilia-Romagna region in northern Italy. I gathered data through face-to-face interviews and used a structural equation modelling technique (LISREL 8.80; Joreskog, K., & Sorbom, D., 2006) to perform the empirical analysis. The results show that Entrepreneurial Orientation is a multi-dimensional, micro-founded construct that is better represented by a second-order model. The t-tests on the latent means reveal that the Academic Entrepreneurs differ in terms of risk taking, passion, procedural and organizational skills, and perception of government, context and university support.
The structural models also reveal that the main differences between the two groups lie in the predictive power of Technical Skills, Perceived Context Support and Perceived University Support in explaining the Entrepreneurial Orientation Related Dimensions.

Relevance: 30.00%

Abstract:

In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models in which the functional parameter of interest describes the economic agent's behaviour. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function, and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution on functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, so that I end up with a new object, which I call the regularized posterior distribution and which I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which the prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set out, I apply the Bayesian nonparametric methodology to different estimation problems. First, I apply the estimator to deconvolution and to hazard rate, density and regression estimation.
Then I consider the estimation of an instrumental regression, which is useful in micro-econometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
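The stabilising role of Tikhonov regularization invoked in the first method can be illustrated on the classical deterministic version of a linear inverse problem. The thesis works with posterior distributions on function spaces; this finite-dimensional Python sketch only shows why the regularization parameter controls ill-posedness:

```python
import numpy as np

def tikhonov_solve(K, y, alpha):
    """Tikhonov-regularised solution of the (possibly ill-posed) linear
    problem K x = y:
        x_alpha = (K^T K + alpha I)^{-1} K^T y.
    alpha > 0 trades fidelity to the data for stability when K is
    ill-conditioned; alpha -> 0 recovers the unstable least-squares fit."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

# A nearly singular operator: the naive inverse would blow up small
# perturbations, while the regularised solution stays bounded.
K = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
x = tikhonov_solve(K, np.array([2.0, 2.0]), alpha=1e-3)
```

In the thesis' Bayesian setting the same penalty enters the construction of the regularized posterior distribution rather than a least-squares functional, but the mechanism, damping the unstable directions of the operator, is the same.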

Relevance: 30.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
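The two network statistics named above are easy to state in code. The sketch below (pure Python, hypothetical coordinates, an 8 Å distance cutoff chosen for illustration) builds a binary contact map from residue coordinates and computes the clustering coefficient of one residue:

```python
import math

def contact_map(coords, cutoff=8.0):
    """Binary inter-residue contact map: residues i and j (|i - j| > 1)
    are in contact if their (e.g. C-alpha) distance is below `cutoff`
    angstroms. Cutoff and coordinates are illustrative choices."""
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 2, n):          # skip trivial sequence neighbours
            if math.dist(coords[i], coords[j]) < cutoff:
                cmap[i][j] = cmap[j][i] = 1
    return cmap

def clustering_coefficient(cmap, i):
    """Fraction of pairs of i's contact neighbours that are themselves in
    contact, one of the small-world statistics discussed in the text."""
    nbrs = [j for j, c in enumerate(cmap[i]) if c]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(cmap[a][b] for ai, a in enumerate(nbrs) for b in nbrs[ai + 1:])
    return 2 * links / (k * (k - 1))

# Toy 4-residue chain: residue pairs (0, 2) and (1, 3) end up in contact.
cmap = contact_map([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (6.0, 0.0, 0.0), (9.0, 0.0, 0.0)])
```

The characteristic path length is the complementary statistic: the average shortest-path distance over all residue pairs in the same contact network.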
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary-structure motifs. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences, previously grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated procedure for the transfer through inheritance of molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.

Relevance: 30.00%

Abstract:

This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most important vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches, Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's Selection Model (Heckman, 1979), being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and by preserving the multivariate nature of the data.
Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution set. The method is non-parametric: it does not call for modelling the data on the basis of some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on Rubin's Potential Outcome Approach, matching methods, and, briefly, Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to correctly test for balance. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and explain our future perspectives.
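The dependence-based bias measure can be illustrated with its simplest ingredient: the total inertia of a contingency table crossing covariate categories with the treatment indicator T, i.e. the chi-square statistic divided by n. This Python sketch is a toy illustration of the quantity, not the thesis' partial dependence analysis:

```python
def inertia(table):
    """Total inertia (chi-square / n) of a contingency table whose rows
    are pre-treatment covariate categories and whose columns are the
    levels of the treatment indicator T. Zero inertia means the treatment
    assignment is independent of the covariate, i.e. no selection-bias
    signal from this covariate."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2 / n

# Perfectly balanced assignment -> zero inertia; strong dependence -> large inertia.
balanced = inertia([[20, 20], [30, 30]])
imbalanced = inertia([[40, 10], [10, 40]])
```

The multivariate test in the thesis works on the inertia due to T across the whole conditional X-space rather than on a single two-way table, but the independence-versus-dependence logic is the same.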

Relevance: 30.00%

Abstract:

The research reported in this manuscript concerns the structural characterization of graphene membranes and single-walled carbon nanotubes (SWCNTs). The experimental investigation was performed using a wide range of transmission electron microscopy (TEM) techniques, from conventional imaging and diffraction to advanced interferometric methods, such as electron holography and Geometric Phase Analysis (GPA), using a low-voltage optical set-up to reduce radiation damage in the samples. Electron holography was used to successfully measure the mean electrostatic potential of an isolated SWCNT and that of a mono-atomically thin graphene crystal. The high accuracy achieved in the phase determination made it possible to measure, for the first time, the valence-charge redistribution induced by the lattice curvature in an individual SWCNT. A novel methodology for the 3D reconstruction of the waviness of a 2D crystal membrane has been developed. Unlike other available TEM reconstruction techniques, such as tomography, this one requires the processing of just a single HREM micrograph. The modulations of the inter-planar distances in the HREM image are measured using Geometric Phase Analysis and used to recover the waviness of the crystal. The method was applied to the case of a folded FGC, and a height variation of 0.8 nm of the surface was successfully determined with nanometric lateral resolution. The adhesion of SWCNTs to the surface of graphene was studied by mixing shortened SWCNTs of different chiralities with FGC membranes. The spontaneous atomic match of the two lattices was directly imaged using HREM, and we found that graphene membranes act as tangential nano-sieves, preferentially grafting achiral tubes to their surface.

Relevance: 30.00%

Abstract:

Natural hazards related to volcanic activity represent a potential risk factor, particularly in the vicinity of human settlements. Besides the risk related to explosive and effusive activity, the instability of volcanic edifices may develop into large landslides, often catastrophically destructive, as shown by the collapse of the northern flank of Mount St. Helens in 1980. A combined approach was applied to analyse the slope failures that occurred at Stromboli volcano. The stability of the SdF slope was evaluated by using high-resolution multi-temporal DTMs and performing limit equilibrium stability analyses. High-resolution topographical data collected with remote sensing techniques and three-dimensional slope stability analyses play a key role in understanding the instability mechanism and the related risks. Analyses carried out on the 2002-2003 and 2007 Stromboli eruptions, starting from high-resolution data acquired through airborne remote sensing surveys, permitted the estimation of the lava volumes emplaced on the SdF slope and contributed to the investigation of the link between magma emission and slope instabilities. Limit equilibrium analyses were performed on the 2001 and 2007 3D models in order to simulate the slope behaviour before the 2002-2003 landslide event and after the 2007 eruption. Stability analyses were conducted to understand the mechanisms that controlled the slope deformations which occurred shortly after the onset of the 2007 eruption, involving the upper part of the slope. Limit equilibrium analyses applied to both cases yielded results congruent with observations and monitoring data.
The results presented in this work clearly indicate that hazard assessment for the island of Stromboli should take into account the fact that a new magma intrusion could lead to further destabilisation of the slope, which may be more significant than the one recently observed, because it will affect an already disarranged deposit and a fractured and loosened crater area. The two-pronged approach, based on the analysis of 3D multi-temporal mapping datasets and on the application of limit equilibrium methods, contributed to a better understanding of volcano flank behaviour and to preparedness for actions aimed at risk mitigation.
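As a minimal illustration of the limit equilibrium concept used throughout this work, the one-dimensional infinite-slope factor of safety can be coded in a few lines. The thesis performs full three-dimensional analyses on multi-temporal terrain models; the parameter values here are invented:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Infinite-slope limit-equilibrium factor of safety:
    FS = [c + (gamma*z*cos^2(beta) - u) * tan(phi)]
         / [gamma*z*sin(beta)*cos(beta)].
    c: cohesion (kPa); phi: friction angle (deg); gamma: unit weight
    (kN/m^3); z: slip-surface depth (m); beta: slope angle (deg);
    u: pore-water pressure (kPa). FS < 1 indicates failure."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Dry cohesionless slope: FS reduces to tan(phi) / tan(beta).
fs = factor_of_safety(c=0.0, phi_deg=35.0, gamma=18.0, z=5.0, beta_deg=30.0)
```

Raising the pore pressure u (e.g. by magma intrusion or heavy rain) lowers the resisting term, which is the mechanism behind the destabilisation scenario discussed above.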

Relevance: 30.00%

Abstract:

The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects such as M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of the Faraday rotation measure (RM) across radio galaxies and (b) to interpret these and existing RM images using sophisticated two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential to obtain a high signal-to-noise ratio in polarization over large enough areas to allow the computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) Analytical predictions of the RM structure functions to quantify the RM statistics and to constrain the power spectra of the RM and magnetic field. 2) Two-dimensional Monte Carlo simulations to address the effect of incomplete sampling of the RM distribution and so to determine errors for the power spectra.
3) Methods to combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales. 4) Three-dimensional models of the group/cluster environments, including different magnetic field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies appears more complicated than was apparent from earlier work. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with the intergalactic medium not affected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
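The spatial statistic at the heart of the analysis, the RM structure function S(r) = <[RM(x) - RM(x + r)]^2>, is straightforward to compute. This Python sketch works on a one-dimensional profile (real RM images are two-dimensional, and the data here are synthetic):

```python
def structure_function(rm, max_lag):
    """Second-order structure function of a 1D rotation-measure profile:
    S(r) = <[RM(x) - RM(x + r)]^2>, averaged over all positions x for
    each integer lag r = 1 .. max_lag. Its slope versus lag constrains
    the power spectrum of the RM (and hence magnetic-field) fluctuations."""
    return [
        sum((rm[i] - rm[i + r]) ** 2 for i in range(len(rm) - r)) / (len(rm) - r)
        for r in range(1, max_lag + 1)
    ]

# A linear RM gradient with slope 2 gives S(r) = (2r)^2 = 4, 16, 36, ...
s = structure_function([2.0 * x for x in range(10)], 3)
```

On a real RM image the average runs over all pixel pairs at a given separation, which is why high signal-to-noise polarization over large areas (stressed above) is essential: each lag bin needs many independent measurements.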

Relevance: 30.00%

Abstract:

This thesis deals with the study of optimal control problems for the incompressible magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems in which the flow is controlled through the boundary conditions of the magnetic field. Owing to their complexity, these problems present various challenges in the definition of an adequate solution approach, from both a theoretical and a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach.
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
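The optimality-system/gradient-algorithm loop described above has the following generic shape, shown here for a linear toy state equation with a quadratic cost. Nothing is MHD-specific; the matrix, target and parameters are invented:

```python
import numpy as np

def solve_control_problem(A, yd, alpha, iters=500, step=0.5):
    """Gradient-type iteration for the reduced quadratic control problem
        min_u J(u) = 1/2 ||y - yd||^2 + alpha/2 ||u||^2,  s.t.  A y = u.
    The gradient is assembled from the adjoint equation A^T p = y - yd as
        grad J(u) = p + alpha * u,
    mirroring the state/adjoint/control structure of the optimality
    system derived via Lagrange multipliers in the text."""
    u = np.zeros(A.shape[0])
    for _ in range(iters):
        y = np.linalg.solve(A, u)           # state equation
        p = np.linalg.solve(A.T, y - yd)    # adjoint equation
        u = u - step * (p + alpha * u)      # gradient step on the control
    return u, np.linalg.solve(A, u)

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
u_opt, y_opt = solve_control_problem(A, yd=np.array([1.0, 1.0]), alpha=1e-3)
```

For small alpha the optimal state tracks the target closely; in the thesis the state equation is the discretized MHD system and the control acts through (lifted) boundary conditions rather than a distributed right-hand side.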

Relevance: 30.00%

Abstract:

A 2D Unconstrained Third-Order Shear Deformation Theory (UTSDT) is presented for the evaluation of tangential and normal stresses in moderately thick functionally graded conical and cylindrical shells subjected to mechanical loadings. Several types of graded materials are investigated. The functionally graded material consists of ceramic and metallic constituents, and a four-parameter power-law function is used. The UTSDT allows the presence of a finite transverse shear stress at the top and bottom surfaces of the graded shell. In addition, the initial curvature effect included in the formulation leads to the generalization of the present theory (GUTSDT). The Generalized Differential Quadrature (GDQ) method is used to discretize the derivatives in the governing equations, the external boundary conditions and the compatibility conditions. Transverse and normal stresses are also calculated by integrating the three-dimensional equations of equilibrium in the thickness direction. In this way, the six components of the stress tensor at a point of the conical or cylindrical shell or panel can be given. The initial curvature effect and the role of the power-law functions are shown for a wide range of functionally graded conical and cylindrical shells under various loading and boundary conditions. Finally, numerical examples from the available literature are worked out.
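A four-parameter power-law distribution of the kind mentioned above can be written, in one common convention for functionally graded shells (the thesis' exact definition may differ in the choice and naming of parameters), as a ceramic volume fraction varying through the thickness:

```python
def ceramic_volume_fraction(z, h, a=1.0, b=0.0, c=1.0, p=1.0):
    """Four-parameter power-law through-thickness distribution of the
    ceramic constituent, for z in [-h/2, h/2]:
        Vc(z) = [1 - a*(1/2 + z/h) + b*(1/2 + z/h)**c]**p,
    with the metal fraction Vm = 1 - Vc. With the defaults (a=1, b=0,
    p=1) the shell grades linearly from fully ceramic at the bottom
    surface to fully metallic at the top. Parameter names follow a common
    FGM convention and are an assumption, not the thesis' notation."""
    zeta = 0.5 + z / h
    return (1.0 - a * zeta + b * zeta ** c) ** p
```

Any effective material property (Young's modulus, density, etc.) is then obtained by the rule of mixtures, e.g. E(z) = Ec * Vc(z) + Em * (1 - Vc(z)), which is what feeds the shell's governing equations.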

Relevance: 30.00%

Abstract:

This study focuses on the radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized. Then, results for the characterization of the plasma source and for the investigation of the nanoparticle synthesis phenomenon are presented, aiming to highlight the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, and results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model, which is used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation is one of the phases with the highest impact on the industrial feasibility of the process; by employing models that describe particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated.
Finally, the role of the thermo-fluid dynamic fields in nanoparticle formation is discussed, together with a study of the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on the process yield.
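The calorimetric energy balance mentioned above rests on a simple relation: the power removed by each cooling circuit equals the water mass flow rate times its specific heat times the temperature rise. The Python sketch below uses invented numbers, not SIMBA project measurements:

```python
def cooling_power_kw(m_dot, t_in, t_out, cp=4.186):
    """Power removed by a cooling-water circuit,
        Q = m_dot * cp * (T_out - T_in),
    the basic relation behind a calorimetric energy balance of the torch
    and reaction chamber. m_dot in kg/s, temperatures in deg C,
    cp in kJ/(kg K), so Q is in kW."""
    return m_dot * cp * (t_out - t_in)

def torch_efficiency(p_plate_kw, q_losses_kw):
    """Fraction of the input (plate) power transferred to the plasma,
    estimated as 1 - (measured cooling losses) / (plate power).
    A hypothetical summary figure, not the project's exact definition."""
    return 1.0 - q_losses_kw / p_plate_kw

# 0.1 kg/s of cooling water heated by 15 K removes about 6.3 kW.
q = cooling_power_kw(0.1, 20.0, 35.0)
```

Summing such terms over all cooling circuits, and comparing with the electrical input power, yields the coupling efficiency used to characterize the plasma source.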