909 results for Slip Complexity


Relevance:

20.00%

Publisher:

Abstract:

The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [Prog. Theor. Phys. 92 (1994), 939] boundary integrals. The analysis has been carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and of the viscous/diffusive terms in the momentum and energy equations. It has been proved that, close to the boundaries, some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation has been derived starting from the Takeda et al. boundary integrals (see the above reference). This original formulation allows no-slip boundary conditions to be implemented consistently in many practical applications, such as viscous flows and diffusion problems.
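As a concrete illustration of this kind of convergence study, the following minimal sketch (under assumed choices: Gaussian kernel, the field f(x) = sin x with a wall at x = 0, and a fixed ratio dx/h; not the setup of the paper) compares the standard SPH first-order operator at the particle closest to the wall with a bare truncated support and with ghost particles obtained by odd mirroring of the field:

```python
import numpy as np

def gaussian_w_prime(x, h):
    """Derivative of the 1D Gaussian kernel W(x,h) = exp(-(x/h)^2) / (sqrt(pi) h)."""
    return -2.0 * x / h**2 * np.exp(-(x / h) ** 2) / (np.sqrt(np.pi) * h)

def sph_gradient(xi, x_j, f_j, f_i, h, dx):
    """Standard SPH first-order operator: sum_j V_j (f_j - f_i) dW/dx(x_i - x_j)."""
    return np.sum(dx * (f_j - f_i) * gaussian_w_prime(xi - x_j, h))

f = np.sin            # test field, f(0) = 0 at the solid wall x = 0
df = np.cos           # exact derivative, used to measure the error
for h in [0.1, 0.05, 0.025, 0.0125]:
    dx = h / 2.0                      # keep dx/h fixed while h -> 0
    x = np.arange(dx / 2, 1.0, dx)    # fluid particles in (0, 1)
    xi = x[0]                         # particle closest to the wall
    # (a) no boundary treatment: the kernel support is truncated by the wall
    g_trunc = sph_gradient(xi, x, f(x), f(xi), h, dx)
    # (b) ghost particles mirrored across the wall with an odd extension of f
    xg = -x[x < 4 * h]                # ghosts only where the kernel "sees" the wall
    x_all = np.concatenate([x, xg])
    f_all = np.concatenate([f(x), -f(-xg)])
    g_ghost = sph_gradient(xi, x_all, f_all, f(xi), h, dx)
    print(f"h={h:7.4f}  err(truncated)={abs(g_trunc - df(xi)):.2e}  "
          f"err(ghost)={abs(g_ghost - df(xi)):.2e}")
```

The truncated estimate keeps an O(1) error as h decreases, while the mirrored one converges, which is the kind of near-wall behaviour the analysis above quantifies.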

Relevance:

20.00%

Publisher:

Abstract:

A clear statement, quoted verbatim from Byers et al. (1938), defines the framework of this special issue: “True soil is the product of the action of climate and living organism upon the parent material, as conditioned by the local relief. The length of time during which these forces are operative is of great importance in determining the character of the ultimate product. Drainage conditions are also important and are controlled by local relief, by the nature of the parent material or underlying rock strata, or by the amount of precipitation in relation to rate of percolation and runoff water. There are, therefore, five principal factors of soil formation: Parent material, climate, biological activity, relief and time. These soil forming factors are interdependent, each modifying the effectiveness of the others.” Owing to the various processes associated with its formation and genesis, soil dynamics exhibits a high degree of complexity, which gives rise to several levels of structure, using this term in a broad sense.

Relevance:

20.00%

Publisher:

Abstract:

Amundsenisen is an ice field, 80 km² in area, located in Southern Spitsbergen, Svalbard. Radio-echo sounding measurements at 20 MHz show high-intensity returns from a nearly flat basal reflector in four zones, all of them with ice thickness larger than 500 m. These reflections suggest possible subglacial lakes. To determine whether basal liquid water is compatible with the current pressure and temperature conditions, we aim to apply a thermomechanical model with a free boundary at the bed, defined as the solution of a Stefan problem for the ice-subglacial lake interface. The complexity of the problem suggests the use of a two-dimensional model, but this requires well-defined flowlines across the zones with suspected subglacial lakes. We define these flowlines from the solution of a three-dimensional dynamical model, and this is the main goal of the present contribution. We apply a three-dimensional full-Stokes model of glacier dynamics to the Amundsenisen ice field. We are mostly interested in the plateau zone of the ice field, so we introduce artificial vertical boundaries at the heads of the main outlet glaciers draining Amundsenisen. At these boundaries we set velocity boundary conditions. Velocities near the centres of the heads of the outlets are known from experimental measurements. The velocities at depth are calculated according to a shallow-ice-approximation (SIA) velocity-depth profile, and those at the rest of the transverse section are computed following Nye's (1952) model. We select an ice divide as the southeastern boundary of the model domain, where we set boundary conditions of zero horizontal velocity and zero vertical shear stress. The upper boundary is a traction-free boundary. For the basal boundary conditions, on the zones of suspected subglacial lakes we set free-slip boundary conditions, while for the rest of the basal boundary we use a friction law linking the sliding velocity to the basal shear stress, in such a way that, contrary to the shallow-ice approximation, the basal shear stress is not equal to the basal driving stress but is rather part of the solution.
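As a reference for the velocity boundary conditions mentioned above, here is a minimal sketch of an SIA velocity-depth profile that scales a measured surface velocity with depth; the flow-law exponent, basal velocity and thickness used below are illustrative assumptions, not values from the study:

```python
import numpy as np

def sia_velocity_profile(z, H, u_surface, u_basal=0.0, n=3):
    """Shallow-ice-approximation velocity-depth profile.

    z          height above the bed (m), 0 <= z <= H
    H          local ice thickness (m)
    u_surface  measured surface velocity (m/a)
    u_basal    assumed basal sliding velocity (m/a)
    n          Glen's flow-law exponent (n = 3 is the usual choice)
    """
    shape = 1.0 - ((H - z) / H) ** (n + 1)      # 0 at the bed, 1 at the surface
    return u_basal + (u_surface - u_basal) * shape

# Example: velocities at a few heights for a 500 m thick column (illustrative values)
H = 500.0
z = np.linspace(0.0, H, 6)
print(sia_velocity_profile(z, H, u_surface=10.0))   # m/a
```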

Relevance:

20.00%

Publisher:

Abstract:

The implementation of boundary conditions is one of the points where the SPH methodology still requires further work. The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [1] boundary integrals. A Poiseuille flow has been used as an example to gradually evaluate the accuracy of the different implementations. Our goal is to test the behavior of the second-order differential operator with the proposed boundary extensions when the smoothing length h and other discretization parameters, such as dx/h, tend simultaneously to zero. First, using a smoothed continuous approximation of the unidirectional Poiseuille problem, the evolution of the velocity profile has been studied, focusing on the values of the velocity and the viscous shear at the boundaries, where the exact solution should be approached as h decreases. Second, to evaluate the impact of the discretization of the problem, an Eulerian SPH discrete version of the former problem has been implemented and similar results have been monitored. Finally, for the sake of completeness, a 2D Lagrangian SPH implementation of the problem has also been studied to assess the consequences of particle movement.
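For context, the exact start-up solution of the plane Poiseuille problem, which studies of this kind typically use as the reference that the SPH solution should approach as h decreases, can be evaluated from its Fourier series; the channel width, viscosity and driving force below are illustrative assumptions:

```python
import numpy as np

def poiseuille_startup(y, t, L=1.0, nu=0.1, F=1.0, n_terms=200):
    """Velocity u(y, t) for start-up plane Poiseuille flow between no-slip
    walls at y = 0 and y = L, initially at rest, driven by a constant body
    force F per unit mass (F = -(1/rho) dp/dx).

    u(y,t) = (F/(2 nu)) [ y (L - y)
             - sum_{n odd} (8 L^2 / (n^3 pi^3)) sin(n pi y / L)
               exp(-nu n^2 pi^2 t / L^2) ]
    """
    u = y * (L - y)
    for n in range(1, n_terms, 2):                     # odd terms only
        u -= (8.0 * L**2 / (n * np.pi) ** 3) * np.sin(n * np.pi * y / L) \
             * np.exp(-nu * (n * np.pi / L) ** 2 * t)
    return F / (2.0 * nu) * u

y = np.linspace(0.0, 1.0, 5)
print(poiseuille_startup(y, t=0.0))    # ~0 everywhere: the flow starts from rest
print(poiseuille_startup(y, t=10.0))   # approaches the steady parabolic profile
```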

Relevance:

20.00%

Publisher:

Abstract:

The objective of this paper is to address the methodological process of a teaching strategy for training in project management complexity in postgraduate programs. The proposal is made up of different methods —intuitive, comparative, deductive, case study, problem-solving, and Project-Based Learning— and different activities inside and outside the classroom. This integration of methods motivated the use of the concept of “learning strategy”. The strategy has two phases: firstly, the integration of the competences —technical, behavioral and contextual— in real projects; and secondly, a learning activity oriented towards a higher level of knowledge, namely evaluating the complexity of project management in real situations. Both the competences in the learning strategy and the Project Complexity Evaluation are based on the ICB of IPMA. The learning strategy is applied in an international postgraduate program —an Erasmus Mundus Master of Science— with the participation of five universities of the European Union. This master program is the fruit of a cooperative experience involving one Educational Innovation Group of the UPM (GIE-Project), two Research Groups of the UPM, and the collaboration of other agents external to the university. Some reflections on the experience and the main success factors of the learning strategy are presented in the paper.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this paper is to address the methodological process of a teaching strategy for training in project management complexity in postgraduate programs. The proposal is made up of different methods —intuitive, comparative, deductive, case study, problem-solving, and Project-Based Learning— and different activities inside and outside the classroom. This integration of methods motivated the use of the concept of “learning strategy”. The strategy has two phases: firstly, the integration of the competences —technical, behavioral and contextual— in real projects; and secondly, a learning activity oriented towards a higher level of knowledge, namely evaluating the complexity of project management in real situations. Both the competences in the learning strategy and the Project Complexity Evaluation are based on the ICB of IPMA. The learning strategy is applied in an international postgraduate program —an Erasmus Mundus Master of Science— with the participation of five universities of the European Union. This master program is the fruit of a cooperative experience involving one Educational Innovation Group of the UPM (GIE-Project), two Research Groups of the UPM, and the collaboration of other agents external to the university. Some reflections on the experience and the main success factors of the learning strategy are presented in the paper.

Relevance:

20.00%

Publisher:

Abstract:

Although the computational complexity of the logic underlying OWL 2, the current standard for the Web Ontology Language (OWL), appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than is suggested by the established theoretical complexity bound, which reflects the worst-case scenario. State-of-the-art reasoners like FACT++, HERMIT, PELLET and RACER have demonstrated that acceptable performance can be achieved even with fairly expressive fragments of OWL 2. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that, in our opinion, could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results obtained with HERMIT on prominent ontologies.
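For readers who want to reproduce this kind of measurement informally, here is a minimal sketch using the owlready2 Python package (an assumption of this note, not the tooling used in the paper), which loads an ontology and times classification with the bundled HermiT reasoner; the ontology path is a placeholder and a Java runtime is required:

```python
import time
from owlready2 import get_ontology, sync_reasoner

# Placeholder path: point it at a local copy of any prominent ontology.
onto = get_ontology("file:///path/to/ontology.owl").load()

start = time.perf_counter()
with onto:
    sync_reasoner()          # owlready2 invokes the bundled HermiT reasoner (needs Java)
elapsed = time.perf_counter() - start

print(f"Classified {len(list(onto.classes()))} classes in {elapsed:.2f} s")
```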

Relevance:

20.00%

Publisher:

Abstract:

The advent of new signal processing methods, such as non-linear analysis techniques, represents a new perspective that adds further value to the analysis of brain signals. In particular, Lempel–Ziv complexity (LZC) has proven useful in exploring the complexity of brain electromagnetic activity. However, an important problem is the lack of knowledge about the physiological determinants of these measures. Although a correlation between complexity and connectivity has been proposed, this hypothesis was never tested in vivo. Thus, the correlation between the microstructure of anatomical connectivity and the functional complexity of the brain needs to be inspected. In this study we analyzed the correlation between LZC and fractional anisotropy (FA), a scalar quantity derived from diffusion tensors that is particularly useful as an estimate of the functional integrity of myelinated axonal fibers, in a group of sixteen healthy adults (all female, mean age 65.56 ± 6.06 years, range 58–82). Our results showed a positive correlation between FA and LZC scores in regions including clusters in the splenium of the corpus callosum, the cingulum, parahippocampal regions and the sagittal stratum. This study supports the notion of a positive correlation between the functional complexity of the brain and the microstructure of its anatomical connectivity. Our investigation shows that a combination of neuroanatomical and neurophysiological techniques may shed some light on the underlying physiological determinants of brain oscillations.
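For reference, the following is a minimal sketch (not the authors' code) of how LZC is commonly computed for a neurophysiological time series: the signal is binarized around its median, the number of distinct patterns in the Lempel–Ziv (1976) sense is counted, and the count is normalized by signal length; the test signals are purely synthetic:

```python
import numpy as np

def binarize(signal):
    """Convert a real-valued signal into a 0/1 string by thresholding at the median."""
    median = np.median(signal)
    return "".join("1" if x > median else "0" for x in signal)

def lz76_complexity(s):
    """Number of distinct patterns in the Lempel-Ziv (1976) parsing of string s
    (Kaspar-Schuster counting scheme)."""
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:            # no earlier match is long enough: new pattern found
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def normalized_lzc(signal):
    """LZC normalized by n / log2(n), so values are comparable across signal lengths."""
    s = binarize(signal)
    n = len(s)
    return lz76_complexity(s) * np.log2(n) / n

# Illustrative use on synthetic signals (not MEG/EEG data)
rng = np.random.default_rng(0)
print(normalized_lzc(np.sin(np.linspace(0, 20 * np.pi, 4096))))   # regular signal: low complexity
print(normalized_lzc(rng.standard_normal(4096)))                  # random signal: close to 1
```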

Relevance:

20.00%

Publisher:

Abstract:

Magnetoencephalography (MEG) allows the real-time recording of neural activity and oscillatory activity in distributed neural networks. We applied a non-linear complexity analysis to resting-state neural activity as measured using whole-head MEG. Recordings were obtained from 20 unmedicated patients with major depressive disorder and 19 matched healthy controls. Subsequently, after 6 months of pharmacological treatment with the antidepressant mirtazapine 30 mg/day, patients received a second MEG scan. A measure of the complexity of neural signals, the Lempel–Ziv Complexity (LZC), was derived from the MEG time series. We found that depressed patients showed higher pre-treatment complexity values compared with controls, and that complexity values decreased after 6 months of effective pharmacological treatment, although this effect was statistically significant only in younger patients. The main treatment effect was to recover the tendency observed in controls of a positive correlation between age and complexity values. Importantly, the reduction of complexity with treatment correlated with the degree of clinical symptom remission. We suggest that LZC, a formal measure of neural activity complexity, is sensitive to the dynamic physiological changes observed in depression and may potentially offer an objective marker of depression and its remission after treatment.

Relevance:

20.00%

Publisher:

Abstract:

Objective: The neurodevelopmental–neurodegenerative debate is a basic issue in the field of the neuropathological basis of schizophrenia (SCH). Neurophysiological techniques have been scarcely involved in this debate, but nonlinear analysis methods may contribute to it.
Methods: Fifteen patients (age range 23–42 years) matching DSM-IV-TR criteria for SCH, and 15 sex- and age-matched control subjects (age range 23–42 years), underwent a resting-state magnetoencephalographic evaluation, and Lempel–Ziv complexity (LZC) scores were calculated.
Results: Regression analyses indicated that LZC values were strongly dependent on age. Complexity scores increased as a function of age in controls, while SCH patients exhibited a progressive reduction of LZC values. A logistic model including LZC scores, age and the interaction of both variables allowed the classification of patients and controls with high sensitivity and specificity.
Conclusions: The results demonstrated that SCH patients failed to follow the “normal” process of complexity increase as a function of age. In addition, SCH patients exhibited a significant reduction of complexity scores as a function of age, thus paralleling the pattern observed in neurodegenerative diseases.
Significance: Our results support the notion of a progressive defect in SCH, which does not contradict the existence of a basic neurodevelopmental alteration.
Highlights: ► Schizophrenic patients show higher complexity values compared to controls. ► Schizophrenic patients showed a tendency towards reduced complexity values as a function of age, while controls showed the opposite tendency. ► The tendency observed in schizophrenic patients parallels that observed in Alzheimer's disease patients.
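As an illustration of the kind of logistic model described in the Results (group membership predicted from LZC, age and their interaction), here is a minimal sketch on synthetic placeholder data (the variable names, effect sizes and data are illustrative assumptions, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: LZC increases with age in "controls" (group 0)
# and decreases with age in "patients" (group 1).
rng = np.random.default_rng(1)
n = 60
age = rng.uniform(23, 42, n)
group = rng.integers(0, 2, n)
lzc = np.where(group == 0, 0.30 + 0.004 * age, 0.55 - 0.004 * age) + rng.normal(0, 0.06, n)
df = pd.DataFrame({"group": group, "age": age, "lzc": lzc})

# Logistic model with LZC, age and their interaction as predictors of group membership
model = smf.logit("group ~ lzc * age", data=df).fit(disp=0)
print(model.summary())

# Classification accuracy at a 0.5 threshold (sensitivity/specificity follow similarly)
pred = (model.predict(df) > 0.5).astype(int)
print("accuracy:", (pred == df["group"]).mean())
```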

Relevance:

20.00%

Publisher:

Abstract:

The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques heavily rely on techniques developed for termination analysis. Precision, scalability and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is at the heart of most modern cost analyzers and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and we demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to handle both lower and upper bounds smoothly, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
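To make the idea of solving a cost relation into a closed-form bound concrete, here is a toy example (not the solver developed in the thesis): a cost relation for a loop performing n iterations of linearly growing work, solved exactly with sympy:

```python
from sympy import Function, rsolve, symbols, simplify

n = symbols('n', integer=True, nonnegative=True)
C = Function('C')

# Toy cost relation: C(0) = 0, C(n) = C(n-1) + n
# (a loop executing n iterations whose i-th iteration costs i)
closed_form = rsolve(C(n) - C(n - 1) - n, C(n), {C(0): 0})
print(simplify(closed_form))   # n*(n + 1)/2 (possibly printed as n**2/2 + n/2),
                               # an exact bound, hence both an upper and a lower bound
```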

Relevance:

20.00%

Publisher:

Abstract:

The safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the previous state of stress is hard to know, and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool since then. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool due to its simplicity and robustness. It is not necessary to know the actual stress state: it is enough to find any equilibrium solution that satisfies the limit conditions of the material, in the certainty that its load will be equal to or less than the actual load at the onset of collapse. Furthermore, this load at the onset of collapse is unique (uniqueness theorem), and it can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when mechanisms for the onset of collapse involving sliding may exist, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints linking the previous ones, which can be formulated as complementarity constraints. In this latter case the existence of a unique solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity.

In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to formulate from a mathematical point of view, but computationally intractable, due to the complementarity constraints 0 ≤ y ⊥ z ≥ 0, which are neither convex nor smooth. The resulting decision problem is Non-deterministic Polynomial (NP)-complete, and the corresponding global optimization problem is NP-hard. Nevertheless, obtaining a solution (with no guarantee of success) is an affordable problem. This thesis proposes to solve the problem through Sequential Linear Programming, taking advantage of the special characteristics of the complementarity constraints, which written in bilinear form read y·z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Furthermore, until a maximum or minimum principle is demonstrated, it is questionable whether the effort spent on approximating this minimum is justified.

In Chapter 5, the frequency distribution of the load factor over all possible solutions for the onset of collapse is obtained for a simple example. For this purpose, solutions are sampled by the Monte Carlo method, using an exact polytope-computation method as a contrast. The ultimate goal is to determine to what extent the search for the global minimum is justified, and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and all the more so the more perfect and continuous the contact is. These results confirm the interest of developing new probabilistic methods. In Chapter 6, such a method is proposed, based on obtaining multiple solutions from random starting points and qualifying the results by means of Order Statistics. The purpose is to determine, for each solution, the probability of the onset of collapse. The method is applied (following the reduction of expectations proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for the cases in which the search for the global minimum is justified.
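To illustrate the order-statistics argument behind this kind of sampling (a generic sketch, not the thesis's implementation): if load-factor solutions are obtained from independent random starting points, the probability that at least one of N samples falls within the worst fraction g of all solutions is 1 − (1 − g)^N, which gives the number of restarts needed for a prescribed confidence:

```python
import math

def restarts_needed(g, confidence):
    """Number N of independent random restarts such that, with the given
    confidence, at least one sampled solution lies within the worst
    fraction g of all solutions:  1 - (1 - g)**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - g))

# e.g. to reach the worst 5% of solutions with 95% confidence:
N = restarts_needed(g=0.05, confidence=0.95)
print(N, 1.0 - (1.0 - 0.05) ** N)   # 59 restarts, achieved confidence ~0.95
```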

Relevance:

20.00%

Publisher:

Abstract:

This paper presents two test procedures for evaluating the bond stress–slip and slip–radial dilation relationships when the prestressing force is transmitted by releasing the steel (wire or strand) in precast prestressed elements. The bond stress–slip relationship is obtained with short-length specimens, to guarantee uniform bond stress, for three depths of the wire indentation (shallow, medium and deep). An analytical model for the bond stress–slip relationship is proposed and compared with the experimental results. The model is also compared with the experimental results of other researchers. Since numerical models for studying bond-splitting problems in prestressed concrete require experimental data on the dilatancy angle (radial dilation), a test procedure is proposed to evaluate these parameters. The obtained values of the radial dilation are compared with those previously estimated by numerical modelling, and good agreement is reached.
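For illustration only, a generic piecewise local bond stress–slip law of the Model Code type is sketched below; this is not the analytical model proposed in the paper, and all parameter values are placeholders:

```python
import numpy as np

def bond_slip(s, tau_max=6.0, tau_f=2.0, s1=1.0, s2=2.0, s3=5.0, alpha=0.4):
    """Piecewise local bond stress-slip law of the Model Code type (MPa, mm).

    Ascending branch   tau_max * (s/s1)**alpha   for s <= s1
    Plateau            tau_max                   for s1 < s <= s2
    Softening branch   linear drop to tau_f      for s2 < s <= s3
    Residual           tau_f                     for s > s3
    All parameter values here are illustrative placeholders.
    """
    s = np.asarray(s, dtype=float)
    tau = np.where(s <= s1, tau_max * (s / s1) ** alpha, tau_max)
    tau = np.where(s > s2, tau_max - (tau_max - tau_f) * (s - s2) / (s3 - s2), tau)
    tau = np.where(s > s3, tau_f, tau)
    return tau

print(bond_slip([0.1, 0.5, 1.5, 3.5, 8.0]))   # rises, plateaus, softens, then residual
```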

Relevance:

20.00%

Publisher:

Abstract:

Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems could be only a matter of perspective. It could be possible to understand the behavior of the Grid or cloud as that of a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.