945 results for "Finite analysis analysis"


Abstract:

The present work consists of a detailed numerical analysis of a 4-way joint made of a precast column and two partially precast beams. The structure has been previously built and experimentally analyzed through a series of cyclic loads at the Laboratory of Tests on Structures (Laboratorio di Prove su Strutture, La. P. S.) of the University of Bologna. The aim of this work is to design a 3D model of the joint and then apply the techniques of nonlinear finite element analysis (FEA) to computationally reproduce the behavior of the structure under cyclic loads. Once the model has been calibrated to correctly emulate the joint, it is possible to obtain new insights useful to understand and explain the physical phenomena observed in the laboratory and to describe the properties of the structure, such as the cracking patterns, the force-displacement and the moment-curvature relations, as well as the deformations and displacements of the various elements composing the joint.
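The cyclic behaviour such a calibrated model must reproduce can be pictured with a toy hysteresis rule. The sketch below is purely illustrative and is not the joint's actual constitutive law: it traces an elastic-perfectly-plastic force-displacement loop, with made-up stiffness `k` and yield force `fy`.

```python
# Illustrative elastic-perfectly-plastic hysteresis: the kind of
# force-displacement loop a calibrated nonlinear FE model of the joint
# should reproduce under cyclic loading.  Parameter values are
# hypothetical, chosen only for the sketch.

def cyclic_response(displacements, k=10.0, fy=5.0):
    """Return forces for a displacement history (elastic-perfectly-plastic)."""
    forces = []
    d_plastic = 0.0                       # accumulated plastic displacement
    for d in displacements:
        f = k * (d - d_plastic)           # trial elastic force
        if f > fy:                        # yielding in "tension"
            d_plastic = d - fy / k
            f = fy
        elif f < -fy:                     # yielding in "compression"
            d_plastic = d + fy / k
            f = -fy
        forces.append(f)
    return forces

# One loading cycle: push to +1.0, reverse to -1.0, return to 0.
history = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
forces = cyclic_response(history)
```

The nonzero force left at zero displacement after the cycle is the hysteretic residual; a real calibration compares loops like this against the La. P. S. test data.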

Abstract:

In recent papers, Wied and his coauthors have introduced change-point procedures to detect and estimate structural breaks in the correlation between time series. To prove the asymptotic distribution of the test statistic and stopping time as well as the change-point estimation rate, they use an extended functional Delta method and assume nearly constant expectations and variances of the time series. In this thesis, we allow asymptotically infinitely many structural breaks in the means and variances of the time series. For this setting, we present test statistics and stopping times which are used to determine, respectively, whether the correlation between two time series is constant and whether it stays constant. Additionally, we consider estimates for change-points in the correlations. The employed nonparametric statistics depend on the means and variances. These (nuisance) parameters are replaced by estimates in the course of this thesis. We avoid assuming a fixed form of these estimates; rather, we use "black-box" estimates, i.e. we derive results under assumptions that these estimates fulfill. These results are supplemented with examples. This thesis is organized in seven sections. In Section 1, we motivate the issue and present the mathematical model. In Section 2, we consider a posteriori and sequential testing procedures, and investigate convergence rates for change-point estimation, always assuming that the means and the variances of the time series are known. In the following sections, the assumptions of known means and variances are relaxed. In Section 3, we present the assumptions for the mean and variance estimates that we will use for the mean in Section 4, for the variance in Section 5, and for both parameters in Section 6. Finally, in Section 7, a simulation study illustrates the finite sample behavior of some testing procedures and estimates.
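A drastically simplified version of such a test can be sketched in a few lines. Assuming i.i.d. data with empirically estimated means and variances (far weaker machinery than the thesis uses), the CUSUM-type quantity below compares prefix correlation estimates with the full-sample estimate; all names and constants here are our own, not those of the papers.

```python
import math
import random

def corr(x, y):
    """Empirical Pearson correlation (means and variances estimated)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

def cusum_corr_stat(x, y, min_frac=0.2):
    """max_k sqrt(n) * (k/n) * |rho_hat(1..k) - rho_hat(1..n)|  (simplified)."""
    n = len(x)
    rho_full = corr(x, y)
    stat = 0.0
    for k in range(int(min_frac * n), n + 1):
        stat = max(stat, math.sqrt(n) * (k / n) * abs(corr(x[:k], y[:k]) - rho_full))
    return stat

random.seed(0)
n = 400
x = [random.gauss(0, 1) for _ in range(n)]
# First half: y independent of x; second half: y strongly correlated with x.
y = [random.gauss(0, 1) for _ in range(n // 2)] + \
    [x[i] + 0.3 * random.gauss(0, 1) for i in range(n // 2, n)]
stat_break = cusum_corr_stat(x, y)     # should be large: correlation breaks
y_null = [random.gauss(0, 1) for _ in range(n)]
stat_null = cusum_corr_stat(x, y_null)  # should be small: constant correlation
```

Under the null of constant correlation the statistic stays moderate, while a mid-sample break inflates it; the thesis's contribution is making such statements rigorous when the means and variances themselves break.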

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. An intruder re-crafts a web form's input and the query strings used in web requests with the malicious intent of compromising the security of the organisation's confidential data stored in the back-end database. The database is the most valuable data source, and thus intruders are unrelenting in evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for the pre-processing of SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known issue in applying artificial intelligence to effectively address ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical-attribute encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA). This is combined with a proxy and a SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be excluded from reaching the target back-end database.
This paper evaluates the performance metrics of a dataset obtained by numerical encoding of the features ontology in Microsoft Azure Machine Learning (MAML) Studio using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
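The numerical encoding step can be illustrated with a toy feature extractor. Python's `re` engine stands in here for the paper's NFA-based matcher, and the indicator patterns and feature names are hypothetical stand-ins, not the paper's ontology.

```python
import re

# Hypothetical SQLIA indicator patterns (a stand-in for an NFA matcher):
# each regex flags one injection symptom, and the feature vector counts
# matches so that a raw web request becomes a row of numeric attributes.
PATTERNS = [
    ("quote",     re.compile(r"['\"]")),
    ("comment",   re.compile(r"(--|#|/\*)")),
    ("union",     re.compile(r"\bunion\b", re.I)),
    ("tautology", re.compile(r"\b(or|and)\b\s+\S+\s*=\s*\S+", re.I)),
    ("semicolon", re.compile(r";")),
]

def encode(request: str):
    """Encode a raw request string as a numeric feature vector."""
    return [len(p.findall(request)) for _, p in PATTERNS]

legit = "GET /items?id=42&sort=name"
attack = "GET /items?id=42' OR 1=1 -- "
v_legit, v_attack = encode(legit), encode(attack)
```

Rows like `v_attack` (labelled malicious) and `v_legit` (labelled benign) are exactly the kind of numeric dataset items a TCSVM classifier can consume.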

Abstract:

The objective of the study is to identify the 3D behaviour of an adhesive in an assembly, and to take into account the effect of ageing in a marine environment. To that end, three different tests were employed. Gravimetric analyses were used to determine the water diffusion kinetics in the adhesive. Bulk tensile tests were performed to highlight the effects of humid ageing on the adhesive behaviour. Modified Arcan tests were performed for several ageing times to obtain the experimental database which was necessary to identify constitutive models. A Mahnken-Schlimmer type model was determined for the unaged state according to a procedure developed in a previous study. This identification used inverse techniques. It was based on the unaged modified Arcan results and on a coupling between an optimisation routine and finite-element analysis. Then, a global inverse identification procedure was developed. Its aim was to relate the unaged parameters to the moisture concentration and overcome the difficulties usually associated with ageing of bonded assemblies in a humid environment: a non-uniformity of the stress state and a gradient of mechanical properties in the adhesive. This procedure was similar to the one used in the first part but needed modified Arcan results for several ageing times. It also required an initial assumption for the evolution of the Mahnken-Schlimmer parameters with the moisture concentration.
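The gravimetric step admits a compact sketch: the classical series solution for 1D Fickian mass uptake through a plate, plus a crude fit of the diffusion coefficient to weight-gain data. The data below are synthetic and every numeric value is an illustrative assumption, not a measurement from the study.

```python
import math

def fickian_uptake(t, D, h, terms=50):
    """Relative mass uptake M(t)/M_inf for 1D Fickian diffusion through a
    plate of thickness h (classical series solution)."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += math.exp(-k * k * math.pi ** 2 * D * t / h ** 2) / (k * k)
    return 1.0 - (8.0 / math.pi ** 2) * s

# Hypothetical gravimetric data (time in s, relative uptake), generated
# here from D_true purely to demonstrate the fitting step.
D_true, h = 1e-12, 4e-4       # m^2/s and m -- illustrative values only
times = [0.0, 1e4, 5e4, 1e5, 5e5, 1e6, 5e6]
data = [fickian_uptake(t, D_true, h) for t in times]

# Crude grid search for the diffusion coefficient (a proper least-squares
# or inverse procedure would be used in practice).
best_D, best_err = None, float("inf")
for D in [10 ** (-12 + 0.05 * i) for i in range(-20, 21)]:
    err = sum((fickian_uptake(t, D, h) - m) ** 2 for t, m in zip(times, data))
    if err < best_err:
        best_D, best_err = D, err
```

The same inverse-fitting idea, scaled up to a coupling between an optimisation routine and finite-element analysis, is what identifies the Mahnken-Schlimmer parameters from the modified Arcan results.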

Abstract:

We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.

Abstract:

This dissertation investigates the connection between spectral analysis and frame theory. In considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
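The notion of a scalable frame is easy to check numerically in a small case. The example below is our own, not the dissertation's: the three "Mercedes-Benz" unit vectors in R², spaced 120° apart, form a tight frame with frame bound 3/2, so constant weights w_i = 2/3 scale it to a Parseval frame (frame operator equal to the identity).

```python
import math

# Scalability check: a frame {f_i} in R^2 is scalable when weights
# w_i >= 0 exist with  sum_i w_i f_i f_i^T = I.

def outer_sum(vectors, weights):
    """Return sum_i w_i f_i f_i^T as a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a = b = c = 0.0
    for (x, y), w in zip(vectors, weights):
        a += w * x * x
        b += w * x * y
        c += w * y * y
    return [[a, b], [b, c]]

# Mercedes-Benz frame: three unit vectors 120 degrees apart.
angles = [math.pi / 2, math.pi / 2 + 2 * math.pi / 3, math.pi / 2 + 4 * math.pi / 3]
frame = [(math.cos(t), math.sin(t)) for t in angles]
S = outer_sum(frame, [2.0 / 3.0] * 3)   # should be (numerically) the identity
```

Replacing the constant weights by decision variables turns this check into exactly the kind of feasibility/optimization problem the dissertation studies.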

Abstract:

Osteotomy, or bone cutting, is a common procedure in orthopaedic surgery, mainly in the treatment of fractures and in reconstructive surgery. However, the excessive heat produced during the bone drilling process is a problem that counters the benefits of this type of surgery, because it can result in thermal osteonecrosis, bone reabsorption and damage to the osseointegration of implants. The analysis of different drilling parameters and materials can help decrease the temperature during the bone drilling process and contribute to the greater success of such surgical interventions. The main goal of this study was to build a three-dimensional numerical model to simulate the drilling process, considering the type of bone, the influence of cooling, and the density of the different composite materials that have mechanical properties similar to human bone and are generally used in experimental biomechanics. The numerical methodology was coupled with an experimental methodology. The use of cooling proved to be essential to decrease material damage during the drilling process. It was concluded that the materials with lower porosity and density suffer less damage during the drilling process. The developed numerical model proved to be a valuable tool for this kind of analysis. © 2016, The Brazilian Society of Mechanical Sciences and Engineering.

Abstract:

In this work the split-field finite-difference time-domain method (SF-FDTD) has been extended for the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparison with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. Regarding the 2D case, the resonance shift in resonant waveguides is corroborated. Here, not only is the scalar Kerr effect considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The consideration of nonlinear materials in this kind of device makes it possible to design tunable devices such as variable band filters. However, the third-order nonlinear susceptibility is usually small, and high intensities are needed in order to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shift of the resonance peaks for both TE and TM polarizations is obtained numerically. The application of a numerical method based on the finite-difference time-domain method makes it possible to analyse this issue in the time domain, so bistability curves are also computed by means of the numerical method. These curves show how the nonlinear effect modifies the properties of the structure as a function of a variable input pump field. When the nonlinear behaviour is taken into account, the estimation of the electric field components becomes more challenging. In this paper, we present a set of acceleration strategies based on parallel software and hardware solutions.
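The bistability computation can be caricatured with the textbook dispersive-bistability fixed point rather than the paper's SF-FDTD machinery. In normalized units (all values below are illustrative assumptions), the cavity intensity obeys I_c = I_in / (1 + (δ − I_c)²); sweeping the pump up and then down, and relaxing onto the nearest stable branch each time, traces a hysteresis loop.

```python
# Toy dispersive bistability: steady state of a Kerr cavity in
# normalized units.  Bistable for detuning delta > sqrt(3).
DELTA = 2.0

def relax(i_in, i_c, steps=4000, gain=0.05):
    """Damped fixed-point relaxation onto a stable branch."""
    for _ in range(steps):
        i_c += gain * (i_in / (1.0 + (DELTA - i_c) ** 2) - i_c)
    return i_c

pump = [0.05 * k for k in range(1, 61)]        # I_in from 0.05 to 3.0
up, state = [], 0.0
for i_in in pump:                               # upward sweep
    state = relax(i_in, state)
    up.append(state)
down, state = [], up[-1]
for i_in in reversed(pump):                     # downward sweep
    state = relax(i_in, state)
    down.append(state)
down.reverse()
hysteresis = max(abs(u - d) for u, d in zip(up, down))   # loop width
```

In the bistable window the up-sweep stays on the lower branch while the down-sweep stays on the upper one, which is the hysteresis the bistability curves in the paper quantify.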

Abstract:

The focus of this research is to explore the applications of the finite difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to addressing the issues that arise in very large networks such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
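The flavor of the method can be shown with a minimal leapfrog sketch on an LC ladder: node voltages advance from the surrounding branch currents, then branch currents advance from the node voltage differences, which is exactly the latency-based alternation the LIM exploits. The topology, element values, and source pulse below are hypothetical; real LIM formulations also handle R, G, and general network topologies.

```python
# Minimal latency insertion method (LIM) sketch on an LC ladder:
# capacitive latency at nodes, inductive latency in branches.
N = 20                  # nodes
L, C = 1e-9, 1e-12      # branch inductance (H), node capacitance (F)
dt = 0.5 * (L * C) ** 0.5   # time step below the leapfrog stability limit

v = [0.0] * N           # node voltages
i = [0.0] * (N - 1)     # branch currents (node k -> k+1)

for step in range(200):
    # Half-step 1: update voltages from the surrounding branch currents.
    for k in range(N):
        i_in = i[k - 1] if k > 0 else (1e-3 if step < 20 else 0.0)  # source pulse
        i_out = i[k] if k < N - 1 else 0.0                          # open far end
        v[k] += (dt / C) * (i_in - i_out)
    # Half-step 2: update currents from the voltage drop across each L.
    for k in range(N - 1):
        i[k] += (dt / L) * (v[k] - v[k + 1])

total_q = sum(C * x for x in v)     # charge on all node capacitors
injected = 20 * dt * 1e-3           # charge delivered by the source pulse
```

Because each node and branch is updated locally, the cost per step is linear in network size, which is why LIM scales to the very large on-chip networks mentioned above.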

Abstract:

The accurate prediction of stress histories for fatigue analysis is of utmost importance in the design process of wind turbine rotor blades. As detailed, transient, geometrically non-linear three-dimensional finite element analyses are computationally far too expensive, it is commonly regarded as sufficient to calculate the stresses with a geometrically linear analysis and to superimpose different stress states in order to obtain the complete stress histories. In order to quantify the error of geometrically linear simulations in the calculation of stress histories, and to verify the practical applicability of the superposition principle in fatigue analyses, this paper studies the influence of geometric non-linearity using the example of a trailing-edge bond line, as this subcomponent suffers from high strains in the span-wise direction. The blade under consideration is that of the IWES IWT-7.5-164 reference wind turbine. From turbine simulations, the highest edgewise loading scenario among the fatigue load cases is used as the reference. A 3D finite element model of the blade is created and the bond-line fatigue assessment is performed according to the GL certification guidelines in their 2010 edition, and in comparison to the latest DNV GL standard from the end of 2015. The results show a significant difference between the geometrically linear and non-linear stress analyses when the bending moments are approximated via a corresponding external loading, especially in the case of the 2010 GL certification guidelines. This finding emphasizes the need to reconsider the application of the superposition principle in fatigue analyses of modern flexible rotor blades, where geometric non-linearities become significant. In addition, a new load application methodology is introduced that reduces the geometrically non-linear behaviour of the blade in the finite element analysis.
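The superposition principle under scrutiny is simple to state in code: in a geometrically linear setting, the stress history at a point is the load-history-weighted sum of unit-load stress states. The load channels and unit stresses below are made-up numbers for illustration only.

```python
# Linear superposition of unit-load stress states at one point of the
# bond line.  "flap" and "edge" are hypothetical load channels; the
# unit stresses (MPa per unit load) are illustrative values.
unit_stress = {"flap": 1.8, "edge": 4.2}

def stress_history(load_histories):
    """sigma(t) = sum_k L_k(t) * sigma_k_unit  (valid only if linear)."""
    n = len(next(iter(load_histories.values())))
    return [sum(unit_stress[k] * load_histories[k][t] for k in load_histories)
            for t in range(n)]

loads = {"flap": [0.0, 1.0, 0.5], "edge": [1.0, 0.0, 0.5]}
sigma = stress_history(loads)   # approximately [4.2, 1.8, 3.0]
```

The paper's point is that once geometric non-linearity matters, `unit_stress` is no longer load-independent and this cheap reconstruction loses accuracy.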

Abstract:

We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature, as a dynamic boundary condition, to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected in this class of models, generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and a Runge-Kutta total variation diminishing (TVD) scheme for time integration.
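The time integrator named at the end can be sketched compactly. Assuming linear advection with first-order upwind fluxes in place of the paper's WENO reconstruction (a deliberate simplification), the two-stage SSP (TVD) Runge-Kutta step below keeps the total variation of a square pulse from growing.

```python
# Finite-volume step in the spirit of the paper's solver: upwind fluxes
# (standing in for WENO reconstruction) plus two-stage SSP/TVD
# Runge-Kutta time integration, on a periodic grid.

def upwind_rhs(u, a, dx):
    """Semi-discrete RHS for u_t + a u_x = 0 (a > 0), periodic grid."""
    n = len(u)
    return [-a * (u[j] - u[j - 1]) / dx for j in range(n)]

def ssp_rk2_step(u, a, dx, dt):
    """u1 = u + dt L(u);  u_next = (u + u1 + dt L(u1)) / 2."""
    l0 = upwind_rhs(u, a, dx)
    u1 = [u[j] + dt * l0[j] for j in range(len(u))]
    l1 = upwind_rhs(u1, a, dx)
    return [0.5 * (u[j] + u1[j] + dt * l1[j]) for j in range(len(u))]

def total_variation(u):
    return sum(abs(u[j] - u[j - 1]) for j in range(len(u)))

n, a = 100, 1.0
dx = 1.0 / n
dt = 0.4 * dx                                          # CFL < 1
u = [1.0 if 20 <= j < 40 else 0.0 for j in range(n)]   # square pulse
tv0 = total_variation(u)
for _ in range(50):
    u = ssp_rk2_step(u, a, dx, dt)
tv1 = total_variation(u)
```

Because the SSP scheme is a convex combination of forward-Euler steps, any TVD property of the spatial discretization carries over to the full update, which is the point of using it.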

Abstract:

This paper is concerned with a stochastic SIR (susceptible-infective-removed) model for the spread of an epidemic amongst a population of individuals, with a random network of social contacts, that is also partitioned into households. The behaviour of the model as the population size tends to infinity in an appropriate fashion is investigated. A threshold parameter which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak is obtained, as are the probability that a major outbreak occurs and the expected proportion of the population that are ultimately infected by such an outbreak, together with methods for calculating these quantities. Monte Carlo simulations demonstrate that these asymptotic quantities accurately reflect the behaviour of finite populations, even for only moderately sized finite populations. The model is compared and contrasted with related models previously studied in the literature. The effects of the amount of clustering present in the overall population structure and the infectious period distribution on the outcomes of the model are also explored.
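The threshold phenomenon can be demonstrated with a stripped-down simulation. The household and network structure of the paper is omitted here: a plain Reed-Frost-style SIR epidemic in a homogeneously mixing population, with made-up sizes, still shows that major outbreaks occur only above the threshold.

```python
import random

# Toy stochastic SIR: each infective independently infects each current
# susceptible with probability r0/n, then recovers.  One initial case.
def final_size(n, r0, rng):
    """Simulate one epidemic; return the total number ever infected."""
    p = r0 / n
    susceptible = set(range(1, n))
    infected, total = [0], 1
    while infected:
        infected.pop()
        newly = [s for s in susceptible if rng.random() < p]
        for s in newly:
            susceptible.discard(s)
            infected.append(s)
        total += len(newly)
    return total

rng = random.Random(1)
n, runs = 300, 100
# Above threshold (R0 = 2): a positive fraction of runs take off.
big = sum(final_size(n, 2.0, rng) > 0.1 * n for _ in range(runs))
p_major = big / runs
# Below threshold (R0 = 0.5): outbreaks always die out quickly.
sub = sum(final_size(n, 0.5, rng) > 0.1 * n for _ in range(runs))
```

The paper's contribution is the analogous threshold parameter, outbreak probability, and final-size proportion for the far richer household-plus-network model, together with the finite-population accuracy of those asymptotics.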

Abstract:

Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide-sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
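The co-array idea behind these guarantees fits in a few lines. For coprime M and N (the values below are chosen only for illustration), the difference set of the two interleaved uniform subarrays contains a long contiguous run of correlation lags, which is what yields O(M²) degrees of freedom from O(M + N) physical sensors.

```python
# Difference co-array of a standard coprime array: sensors at
# {0, M, 2M, ..., (N-1)M}  union  {0, N, 2N, ..., (2M-1)N}.
M, N = 3, 5   # coprime pair (illustrative)
sensors = sorted({m * M for m in range(N)} | {n * N for n in range(2 * M)})
lags = sorted({a - b for a in sensors for b in sensors})

# Length of the contiguous run of lags centred at zero.
contiguous = 0
while contiguous in lags and -contiguous in lags:
    contiguous += 1
```

Running this with M = 3, N = 5 gives 10 physical sensors but a contiguous lag run past ±MN = ±15, so correlation-domain processing sees a much longer virtual uniform array than the physical one.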

Abstract:

One of the policy puzzles faced in India during the last two and a half decades has been the weak association between output and labor markets, particularly in the manufacturing sector. In this research, we investigate the long-run relationship between output, labor productivity and real wages in organized manufacturing. We adjust the measure of labor productivity to incorporate bottlenecks, such as lack of infrastructure, access to external finance, and labor regulations, all of which may influence labor market outcomes. Using panel data from seventeen manufacturing industries, we establish long-run dynamics for the output-labor productivity-real wages series over a period of nearly three decades. We employ recently developed panel unit root and cointegration tests that allow for cross-sectional dependence, so as to incorporate heterogeneity across industries. Long-run elasticities with respect to changes in manufacturing output are generally found to be lower for labor productivity than for real wages. There are variations across industries within the manufacturing sector in the effects of the labor market on manufacturing output. In some industries, lower wages are associated with higher output, while the positive relationship in other industries could be due to workers' bargaining power.

Abstract:

The goal of this project is to learn the necessary steps to create a finite element model which can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC's component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation. Similarly, an experimental method for the optimal acquisition of experimental modal data was arrived at. After the model correlation and calibration were performed, a validation experiment was used to confirm the FE model's predictive capabilities.
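Model correlation of this kind typically hinges on the Modal Assurance Criterion (MAC), which measures the consistency between analytical (FE) and experimental mode shapes; a value near 1 indicates well-correlated modes. The mode-shape vectors below are invented three-DOF examples, but the formula is the standard one.

```python
# Modal Assurance Criterion between an analytical and a measured mode shape.
def mac(phi_a, phi_x):
    """MAC = |phi_a . phi_x|^2 / ((phi_a . phi_a) * (phi_x . phi_x))."""
    dot = sum(a * x for a, x in zip(phi_a, phi_x))
    return dot * dot / (sum(a * a for a in phi_a) * sum(x * x for x in phi_x))

fe_mode = [1.0, 0.6, -0.4]       # analytical mode shape (hypothetical)
test_mode = [0.98, 0.63, -0.37]  # measured mode shape (hypothetical)
value = mac(fe_mode, test_mode)  # close to 1: well correlated
```

Because the MAC is invariant to mode-shape scaling, it is a convenient first check before calibrating material properties and assembly conditions against the experimental modal model.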