956 results for Finite-time Blow


Relevance: 30.00%

Abstract:

Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same trend is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has increased exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies for massive-scale wind turbine structures presenting high slenderness ratios and complex shapes, typically located in remote areas (e.g., offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting the actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on the so-called Subspace Realization Theory, adapted here to stochastic, non-stationary, and time-varying systems, as is the case for the complex aerodynamics of HAWTs. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped-gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
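
For reference, the MAC used by the model-updating engine is the standard modal-correlation measure; a minimal sketch of its computation between two mode-shape vectors (the test vector below is illustrative, not data from the thesis):

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    """Modal Assurance Criterion between two mode-shape vectors.

    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b)),
    ranging from 0 (orthogonal shapes) to 1 (identical up to scale).
    """
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return float(num / den)

# Illustrative check: a mode shape correlates perfectly with a scaled copy.
phi = np.array([0.2, 0.5, 0.9, 1.0])
assert abs(mac(phi, 3.0 * phi) - 1.0) < 1e-12
```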

Relevance: 30.00%

Abstract:

Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method aims to set the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed threshold method and study the dynamics of the functional topology.

Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches for assessing the corresponding binary networks. For each time window:
* Our parameter-free method derives from the cross-correlation strength (CCS) matrix [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) method uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) method uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
* Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS, and VMD networks and their corresponding random and lattice networks.

Results: Compared to FMD and VMD, CCS networks present:
* topologies that differ in terms of c, k, Eg, and El;
* from the pre-ictal to the ictal and then post-ictal period, topological feature time courses that are more stable within a period and more contrasted from one period to the next.
For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at offset. k shows a "U-curve" underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the values of the corresponding random and lattice networks in a reproducible manner.

Conclusions: The definition of a data-driven threshold provides new insights into the topology of epileptic functional networks.
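
The per-pair, data-driven threshold is the heart of the CCS approach; the exact construction follows [2], but a simplified surrogate-based stand-in conveys the idea (the function name, window parameters, and circular-shift surrogates are our illustrative choices, not the published method):

```python
import numpy as np

def sliding_binary_networks(x: np.ndarray, win: int, step: int,
                            n_surr: int = 19, seed: int = 0) -> np.ndarray:
    """For each sliding window, build a binary network keeping only channel
    pairs whose zero-lag cross-correlation exceeds a per-pair, data-driven
    threshold estimated from time-shifted surrogates.
    x: (n_channels, n_samples) iEEG array."""
    rng = np.random.default_rng(seed)
    n_ch, n_s = x.shape
    nets = []
    for start in range(0, n_s - win + 1, step):
        seg = x[:, start:start + win]
        seg = (seg - seg.mean(1, keepdims=True)) / seg.std(1, keepdims=True)
        corr = np.abs(seg @ seg.T) / win          # equal-time cross-correlation
        # Surrogates: random circular shifts destroy genuine synchrony
        # while preserving each channel's spectrum.
        thr = np.zeros_like(corr)
        for _ in range(n_surr):
            shifts = rng.integers(1, win, size=n_ch)
            surr = np.stack([np.roll(seg[i], shifts[i]) for i in range(n_ch)])
            thr = np.maximum(thr, np.abs(surr @ seg.T) / win)
        thr = np.maximum(thr, thr.T)              # keep the network symmetric
        adj = (corr > thr).astype(int)
        np.fill_diagonal(adj, 0)
        nets.append(adj)
    return np.array(nets)
```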

Relevance: 30.00%

Abstract:

Osteoporosis-related vertebral fractures represent a major health problem in elderly populations. Such fractures can often only be diagnosed after a substantial deformation history of the vertebral body. Therefore, it remains a challenge for clinicians to distinguish between stable fractures and progressive, potentially harmful ones. Accordingly, novel criteria for the selection of the appropriate conservative or surgical treatment are urgently needed. Computed tomography-based finite element analysis is an increasingly accepted method to predict the quasi-static vertebral strength and to follow up this small-strain property longitudinally in time. A recent development in constitutive modeling allows us to simulate strain localization and densification in trabecular bone under large compressive strains without mesh dependence. The aim of this work was to validate this recently developed constitutive model of trabecular bone for the prediction of strain localization and densification in the human vertebral body subjected to large compressive deformation. A custom-made stepwise loading device mounted in a high-resolution peripheral computed tomography system was used to describe the progressive collapse of 13 human vertebrae under axial compression. Continuum finite element analyses of the 13 compression tests were performed, and the zones of high volumetric strain were compared with the experiments. A fair qualitative correspondence of the strain localization zone between the experiment and the finite element analysis was achieved in 9 out of 13 tests, and significant correlations of the volumetric strains were obtained throughout the range of applied axial compression. Interestingly, the stepwise propagating localization zones in trabecular bone converged to the buckling locations in the cortical shell. While the adopted continuum finite element approach still suffers from several limitations, these encouraging preliminary results towards the prediction of extended vertebral collapse may help in assessing fracture stability in future work.

Relevance: 30.00%

Abstract:

Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of hip dysplasia. The aim of PAO is to increase acetabular coverage of the femoral head and to reduce contact pressures by reorienting the acetabular fragment. The success of PAO depends significantly on the surgeon's experience. Previously, we developed a computer-assisted planning and navigation system for PAO, which allows not only for quantifying the 3D hip morphology for a computer-assisted diagnosis of hip dysplasia but also for virtual PAO surgical planning and simulation. In this paper, building on this previously developed PAO planning and navigation system, we develop a 3D finite element (FE) model to investigate the optimal reorientation of the acetabulum after PAO. Our experimental results showed that an optimal position of the acetabulum can be achieved that maximizes contact area and at the same time minimizes peak contact pressure in the pelvic and femoral cartilages. In conclusion, our computer-assisted planning and navigation system with FE modeling can be a promising tool for determining the optimal PAO planning strategy.

Relevance: 30.00%

Abstract:

Periacetabular Osteotomy (PAO) is a joint-preserving surgical intervention intended to increase femoral head coverage and thereby improve stability in young patients with hip dysplasia. Previously, we developed a CT-based, computer-assisted program for PAO diagnosis and planning, which allows for quantifying the 3D acetabular morphology with parameters such as acetabular version, inclination, lateral center-edge (LCE) angle, and femoral head coverage ratio (CO). In order to verify the hypothesis that our morphology-based planning strategy can improve the biomechanical characteristics of dysplastic hips, we developed a 3D finite element model based on patient-specific geometry to predict the change in cartilage contact stress before and after morphology-based planning. Our experimental results demonstrated that the morphology-based planning strategy could reduce cartilage contact pressures and at the same time increase contact areas. In conclusion, our computer-assisted system is an efficient tool for PAO planning.

Relevance: 30.00%

Abstract:

We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections to masses, decay constants, pseudoscalar coupling constants, and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement, due to a different treatment of the new extra terms generated by the breaking of the cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections to masses, decay constants, and pseudoscalar coupling constants are related by means of chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman-Hellmann theorem (resp. the Ward-Takahashi identity). To show the Ward-Takahashi identity, we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another method to estimate the corrections in finite volume is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude evaluated in infinite volume. Here, we revisit the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases, the derivation involves complications due to extra terms generated by the breaking of the cubic invariance. We isolate such terms and treat them as renormalization terms, just as before. In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants, and scalar form factors. At the same time, we also derive asymptotic formulae for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants, and pseudoscalar coupling constants are related by means of chiral Ward identities. A similar relation connects, in an independent way, the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
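
For orientation, the twisted boundary conditions referred to above take the standard form (a notational reminder, not a result specific to this work):

```latex
\psi(x + L\,\hat{e}_k) = e^{i\theta_k}\,\psi(x)
\quad\Longrightarrow\quad
p_k = \frac{2\pi n_k}{L} + \frac{\theta_k}{L}, \qquad n_k \in \mathbb{Z}.
```

The allowed momenta are shifted away from the periodic values by the twisting angles, which is why the twist can be implemented as a constant (background) vector field coupled to the flavor charges.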

Relevance: 30.00%

Abstract:

A characterization of a property of binary relations is of finite type if it is stated in terms of ordered T-tuples of alternatives for some positive integer T. A characterization of finite type can be used to determine in polynomial time whether a binary relation over a finite set has the property characterized. Unfortunately, Pareto representability in R^2 has no characterization of finite type (Knoblauch, 2002). This result is generalized below to R^l, l > 2. The method of proof is applied to other properties of binary relations.
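
As a standard illustration of the definition (our example, not from the paper): transitivity has a characterization of finite type with T = 3, since

```latex
R \text{ is transitive} \iff \forall (x,y,z):\; (x \mathrel{R} y \wedge y \mathrel{R} z) \Rightarrow x \mathrel{R} z,
```

and checking this condition over a finite set of n alternatives takes O(n^3) time, which is the polynomial-time test the abstract refers to.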

Relevance: 30.00%

Abstract:

Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and are subject to selection bias. Extensive studies have focused on estimating the unbiased distribution from left-truncated samples. In many applications, however, the exact date of disease onset is not observed. For example, in an HIV infection study the exact HIV infection time is not observable, although it is known that infection occurred between two observable dates. These challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobservable exact onset. Simulations are conducted to evaluate the finite sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
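
In standard notation (a generic form of these likelihood contributions, not the authors' exact construction, which additionally integrates over the interval-censored onset date): with truncation time a_i, event or censoring time t_i, and censoring indicator delta_i, a left-truncated subject contributes

```latex
L_i = \frac{f_\theta(t_i)^{\delta_i}\, S_\theta(t_i)^{1-\delta_i}}{S_\theta(a_i)},
\qquad
f_{\mathrm{LB}}(t) = \frac{t\, f_\theta(t)}{\int_0^\infty u\, f_\theta(u)\, du},
```

where the denominator S_theta(a_i) corrects the selection bias induced by truncation, and f_LB is the observed density under length-biased sampling.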

Relevance: 30.00%

Abstract:

We present in this paper a neural-like membrane system solving the SAT problem in linear time. These neural P systems are nets of cells working with multisets. Each cell has a finite-state memory, processes multisets of symbol-impulses, and can send impulses ("excitations") to the neighboring cells. The maximal mode of rule application and the replicative mode of communication between cells are at the core of the efficiency of these systems.

Relevance: 30.00%

Abstract:

Corrosion of a reinforcement bar leads to expansive pressure on the surrounding concrete that provokes internal cracking and, eventually, spalling and delamination. Here, an embedded cohesive-crack 2D finite element is applied to simulate the cracking process. In addition, four simplified analytical models are introduced for comparative purposes. Under some assumptions about rust properties, corrosion rate and, particularly, the accommodation of oxide products within the open cracks generated in the process, the proposed FE model is able to estimate the time to surface cracking quite accurately. Moreover, the emerging cracking patterns are in reasonably good agreement with expectations. As a practical case, a prototype application of the model to an actual bridge deck is reported.

Relevance: 30.00%

Abstract:

This paper employs a 3D hp self-adaptive grid-refinement finite element strategy for the solution of a particular electromagnetic waveguide structure known as the Magic-T. This structure is utilized as a power divider/combiner in communication systems as well as in other applications. It often incorporates dielectrics, metallic screws, round corners, and so on, which may facilitate its construction or improve its design, but which significantly complicate its modeling when semi-analytical techniques are employed. The hp-adaptive finite element method enables accurate modeling of a Magic-T structure even in the presence of these undesired materials/geometries. Numerical results demonstrate the suitability of the hp-adaptive method for modeling a Magic-T rectangular waveguide structure, delivering errors below 0.5% with a limited number of unknowns. Solutions of waveguide problems delivered by the self-adaptive hp-FEM are comparable to those obtained with semi-analytical techniques such as the Mode Matching method, for problems where the latter can be applied. At the same time, the hp-adaptive FEM enables accurate modeling of more complex waveguide structures.

Relevance: 30.00%

Abstract:

We propose the use of a highly accurate three-dimensional (3D) fully automatic hp-adaptive finite element method (FEM) for the characterization of rectangular waveguide discontinuities. These discontinuities are either the unavoidable result of mechanical/electrical transitions or are deliberately introduced in order to perform certain electrical functions in modern communication systems. The proposed numerical method combines the geometrical flexibility of finite elements with an accuracy that is often superior to that provided by semi-analytical methods. It supports anisotropic refinements on irregular meshes with hanging nodes and isoparametric elements, and it makes use of hexahedral elements compatible with high-order H(curl) discretizations. The 3D hp-adaptive FEM is applied for the first time to solve a wide range of 3D waveguide discontinuity problems of microwave communication systems, in which exponential convergence of the error is observed.

Relevance: 30.00%

Abstract:

A contactless transformer model is proposed in this paper using Finite Element Analysis (FEA). This model can be used to simulate Inductive Coupling Power Transfer (ICPT) systems with good accuracy and to reduce the fabrication time of such systems. The model takes into account not only the geometry of the windings but also the frequency effects in them. Because the transformer lacks a magnetic core, it is complicated to model: the flux spreads over the area around the windings. In order to obtain a very accurate model, it is necessary to use a 2D/3D field solver.

Relevance: 30.00%

Abstract:

We study the evolution of a finite-size population formed by mutationally isolated lineages of error-prone replicators in a two-peak fitness landscape. Computer simulations are performed to gain a stochastic description of the system dynamics. More specifically, for different population sizes, we compute the probability of each lineage being selected in terms of their mutation rates and the amplification factors of the fittest phenotypes. We interpret the results as a compromise between the characteristic time a lineage takes to reach its fittest phenotype by crossing the neutral valley and the selective value of the sequences that form the lineages. A main conclusion is drawn: for finite population sizes, the survival probability of the lineage that arrives first at the fittest phenotype rises significantly.
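
A minimal Wright-Fisher-style sketch of the kind of stochastic simulation described above, with two lineages crossing neutral valleys of different lengths toward peaks with different amplification factors; all parameter values and the forward-only mutation scheme are illustrative simplifications, not the paper's model:

```python
import numpy as np

def lineage_fixation_prob(N=200, steps_a=3, steps_b=5, mu_a=0.02, mu_b=0.05,
                          amp_a=5.0, amp_b=10.0, n_runs=500, seed=1):
    """Estimate the probability that lineage A fixes in a finite population.

    Each individual carries (lineage, position); position counts neutral
    mutational steps toward that lineage's fitness peak. Fitness is 1 in
    the valley and `amp` (the amplification factor) at the peak. N must
    be even (the population starts split half and half)."""
    rng = np.random.default_rng(seed)
    wins_a = 0
    steps = np.array([steps_a, steps_b])
    mu = np.array([mu_a, mu_b])
    amp = np.array([amp_a, amp_b])
    for _ in range(n_runs):
        lin = np.repeat([0, 1], N // 2)           # 0 = lineage A, 1 = lineage B
        pos = np.zeros(N, dtype=int)
        while 0 < lin.sum() < N:                  # run until one lineage fixes
            at_peak = pos >= steps[lin]
            w = np.where(at_peak, amp[lin], 1.0)  # two-peak fitness landscape
            parents = rng.choice(N, size=N, p=w / w.sum())
            lin, pos = lin[parents], pos[parents]
            advance = rng.random(N) < mu[lin]     # error-prone replication
            pos[advance & ~at_peak[parents]] += 1
        wins_a += int(lin.sum() == 0)
    return wins_a / n_runs
```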

Relevance: 30.00%

Abstract:

In this paper the advantages of parallel computing are shown by solving the heat conduction equation in two dimensions with the forward-in-time, central-in-space (FTCS) finite difference method. Two different levels of parallelization are considered and compared with a traditional serial procedure. This work demonstrates the importance of parallel computing when dealing with large problems requiring a high number of computations, for which a serial procedure is impractical because of its prohibitive execution time. In the first section a summary of the parallel computing approach is presented. Subsequently, the FTCS finite difference method for the heat conduction equation is outlined, describing how the heat flow equation is derived in two dimensions and the particularities of the finite difference numerical technique considered. Then, a specific initial-boundary value problem is solved by the FTCS finite difference method, and pseudocodes for one serial and two parallel implementations are provided. Finally, after the results are discussed, some conclusions are presented.
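
For concreteness, a minimal serial sketch of the FTCS update described above; the grid size, diffusivity, and boundary values are illustrative, and the paper's own pseudocodes additionally cover two parallel variants:

```python
import numpy as np

def ftcs_heat_2d(u0: np.ndarray, alpha: float, dx: float, dt: float,
                 n_steps: int) -> np.ndarray:
    """Forward-in-time, central-in-space update for u_t = alpha*(u_xx + u_yy)
    on a uniform grid with fixed (Dirichlet) boundary values.
    Stability requires r = alpha*dt/dx**2 <= 0.25 in 2D."""
    r = alpha * dt / dx**2
    assert r <= 0.25, "FTCS unstable: reduce dt or refine dx"
    u = u0.copy()
    for _ in range(n_steps):
        u[1:-1, 1:-1] += r * (u[2:, 1:-1] + u[:-2, 1:-1] +
                              u[1:-1, 2:] + u[1:-1, :-2] -
                              4.0 * u[1:-1, 1:-1])
    return u

# Illustrative setup: cold plate with one hot edge held at 100 degrees.
n = 51
u0 = np.zeros((n, n))
u0[0, :] = 100.0
dx = 1.0 / (n - 1)
u = ftcs_heat_2d(u0, alpha=1.0, dx=dx, dt=0.2 * dx**2, n_steps=1000)
```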