9 results for computational costs

at Universidad Politécnica de Madrid


Relevance:

60.00%

Abstract:

We propose an analysis for detecting procedures and goals that are deterministic (i.e. that produce at most one solution), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic (because they call other predicates that can produce more than one solution). Applications of such determinacy information include detecting programming errors, performing certain high-level program transformations for improving search efficiency, optimizing low-level code generation and parallel execution, and estimating tighter upper bounds on the computational costs of goals and data sizes, which can be used for program debugging, resource consumption and granularity control, etc. We have implemented the analysis and integrated it in the CiaoPP system, which also infers automatically the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
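
To illustrate the notion of mutually exclusive clause tests, the following toy Python sketch checks pairwise disjointness of numeric guard intervals attached to clauses. CiaoPP's actual analysis works on inferred modes and types over full Herbrand and arithmetic tests; this is only a minimal illustration under that simplifying assumption, with hypothetical names.

def mutually_exclusive(guards):
    # guards: list of half-open intervals [lo, hi) over one numeric argument,
    # one per clause; clauses are mutually exclusive if no two intervals overlap,
    # so at most one clause can succeed for any given input.
    guards = sorted(guards)
    return all(hi <= lo2 for (lo, hi), (lo2, _) in zip(guards, guards[1:]))

# e.g. p(X) :- X < 0, ...   and   p(X) :- X >= 0, ...
print(mutually_exclusive([(float("-inf"), 0.0), (0.0, float("inf"))]))  # True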

Relevance:

60.00%

Abstract:

It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
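
As an illustration of the kind of recurrence such a divide-and-conquer analysis sets up and solves (not the paper's exact formulation), consider a predicate shown not to fail that splits an input of size n into two halves at cost at least c_1 n:

\[
C_{\mathrm{lb}}(1) \ge c_0, \qquad
C_{\mathrm{lb}}(n) \ge 2\,C_{\mathrm{lb}}(n/2) + c_1 n
\quad\Longrightarrow\quad
C_{\mathrm{lb}}(n) \in \Omega(n \log n).
\]

A nontrivial bound of this form is only sound once the possibility of head-unification failure has been ruled out, which is why the type and mode information is required.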

Relevance:

60.00%

Abstract:

We provide a method whereby, given mode and (upper approximation) type information, we can detect procedures and goals that can be guaranteed not to fail (i.e., to produce at least one solution or not terminate). The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables. We show that the problem of determining a covering is undecidable in general, and give decidability and complexity results for the Herbrand and linear arithmetic constraint systems. We give sound algorithms for determining covering that are precise and efficient in practice. Based on this information, we show how to identify goals and procedures that can be guaranteed not to fail at runtime. Applications of such non-failure information include programming error detection, program transformations and parallel execution optimization, avoiding speculative parallelism and estimating lower bounds on the computational costs of goals, which can be used for granularity control. Finally, we report on an implementation of our method and show that better results are obtained than with previously proposed approaches.
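
A minimal Python sketch of the covering idea for the one-variable linear-arithmetic case, assuming the clause tests have already been reduced to intervals (the general problem, as stated above, is undecidable; names here are hypothetical):

def covers_reals(intervals):
    # intervals: clause tests on X reduced to intervals; the tests "cover"
    # the type of X (here, all reals) iff their union leaves no gap.
    intervals = sorted(intervals)
    reach = float("-inf")
    for lo, hi in intervals:
        if lo > reach:          # a value of X between reach and lo fails all tests
            return False
        reach = max(reach, hi)
    return reach == float("inf")

# {X < 0, X >= 0} cover the reals, so a call with X bound cannot fail these tests.
print(covers_reals([(float("-inf"), 0.0), (0.0, float("inf"))]))  # True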

Relevance:

60.00%

Abstract:

The study developed in this thesis focuses on the numerical modelling of the propagation phase of fast landslides using the Smoothed Particle Hydrodynamics (SPH) meshless method, which has the great advantage of handling large-deformation problems while avoiding the expensive remeshing operations required by mesh-based methods such as the Finite Element Method. Special attention is given to the role played by rheology and pore water pressure during these natural hazards. The mathematical framework is based on the v - pw Biot-Zienkiewicz formulation, which represents the behaviour, formulated in terms of soil-skeleton velocity and pore water pressure, of the mixture of solid particles and pore water in a saturated medium. The governing equations are:
• the mass balance equation for the pore water phase,
• the momentum balance equation for the pore water phase and the mixture,
• the constitutive equation, and
• a kinematic equation.
Owing to their geometry, landslides have depths that are very small compared with their length and width, so the mathematical model above can be simplified by depth-integrating the equations, switching from a 3D to a 2D model that offers an excellent combination of accuracy, simplicity and computational cost. The proposed model differs from previous depth-integrated models by including a sub-model able to provide information on the pore water pressure profile at each computational step of the landslide's propagation. In a very efficient way, the evolution of the pore water pressure profiles is solved numerically through a 1D explicit Finite Difference scheme at each SPH node. This new approach takes into account the variation of pore water pressure due to changes of height, vertical consolidation or changes of total stress. Concerning the constitutive behaviour, one of the main issues when modelling fast landslides is the difficulty of simulating, with the same constitutive or rheological law, the transition from the triggering phase, where the material behaves like a solid, to the propagation phase, where it behaves in a fluid-like manner. In this thesis, a new rheological model is proposed, based on the Perzyna viscoplastic model, viewing viscoplasticity as the key to closing the gap between the triggering and propagation phases. To validate the mathematical model and the numerical approach, benchmark problems and laboratory experiments are reproduced and compared with analytical solutions when possible. Finally, the model is applied to real cases, with particular attention paid to the 1966 Aberfan flowslide, showing that the model accurately and successfully simulates these kinds of natural hazards.
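
The pore-pressure sub-model lends itself to a compact illustration. Below is a minimal Python sketch of one explicit finite-difference step of a 1D vertical consolidation equation, dp/dt = cv d2p/dy2, solved on the column of points carried by a single SPH node; the boundary choices, names and values are hypothetical rather than taken from the thesis.

import numpy as np

def consolidation_step(p, cv, dy, dt):
    # One explicit FTCS step; stability requires cv*dt/dy**2 <= 0.5.
    assert cv * dt / dy**2 <= 0.5, "explicit scheme unstable"
    p_new = p.copy()
    p_new[1:-1] = p[1:-1] + cv * dt / dy**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    p_new[0] = 0.0          # drained free surface (hypothetical boundary choice)
    p_new[-1] = p_new[-2]   # impervious base: zero pressure gradient
    return p_new

p = np.linspace(0.0, 10.0, 21)   # hypothetical initial pore-pressure profile (kPa)
p = consolidation_step(p, cv=1e-2, dy=0.05, dt=0.1)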

Relevance:

60.00%

Abstract:

The application of the Electro-Mechanical Impedance (EMI) method for damage detection in Structural Health Monitoring has increased noticeably in recent years. The EMI method uses piezoelectric transducers to measure the mechanical properties of the host structure directly, obtaining the so-called impedance measurement, which is highly influenced by variations in the dynamic parameters of the structure. These measurements usually contain a large number of frequency points and have high dimensionality, since each swept frequency range can be considered an independent variable. This makes such data hard to handle, computationally costly and substantially time-consuming to analyse. For that reason, Principal Component Analysis (PCA)-based data compression is employed in this work in order to enhance the analysis capability of the raw data. Furthermore, a Support Vector Machine (SVM), widely used in the machine learning and pattern recognition fields, is applied in order to model any pattern present in the PCA-compressed data, using just the first two Principal Components. Known non-damaged and damaged measurements of an experimentally tested beam were used as training input data for the SVM algorithm, with the same number of cases measured on beams of unknown structural health condition used as test input data. The purpose of this work is thus to demonstrate how, with a few impedance measurements of a beam as raw data, its health status can be determined through pattern recognition procedures.
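
A minimal sketch of the described pipeline using scikit-learn, with synthetic stand-ins for the impedance spectra (the actual study uses measured healthy and damaged beam spectra; the standardization step is a common preprocessing choice, not stated in the abstract):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Rows: impedance measurements; columns: frequency points (synthetic placeholders).
X_train = rng.normal(size=(20, 500))
y_train = np.array([0] * 10 + [1] * 10)   # 0 = non-damaged, 1 = damaged

# Compress to the first two Principal Components, then classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

X_test = rng.normal(size=(5, 500))        # beams of unknown health condition
print(clf.predict(X_test))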

Relevance:

60.00%

Abstract:

The Morse code, invented in 1838 for use in telegraphy, is perhaps one of the first examples of the practical use of data compression [1]: the most common letters of the alphabet are assigned shorter codes than the rest. From 1940 onwards, with the development of information theory and the creation of the first computers, the compression of information has been a constant and fundamental challenge for researchers of all kinds. The greater our understanding of the meaning of information, the greater our success at compressing it. In the case of multimedia information, its nature allows lossy compression, reaching compression rates impossible for lossless algorithms. These "recent" lossy algorithms have mainly been based on transforming the information to the frequency domain and discarding part of the information in that domain. Transforming to the frequency domain has advantages but also involves unavoidable computational costs.

This thesis introduces a new multimedia compression algorithm called "LHE" (Logarithmical Hopping Encoding) that does not require transformation to the frequency domain but works in the space domain. This makes LHE a linear algorithm of reduced computational complexity. The results of the algorithm are promising, outperforming the JPEG standard in both quality and speed. The algorithm builds on the physiological response of the human eye to light stimulus: the eye, like the other senses, responds to the logarithm of the signal, in accordance with Weber's law. The algorithm consists of several stages. One of them is the measurement of "Perceptual Relevance", a new metric that allows us to measure how relevant a piece of information is in the subject's mind and, based on it, to degrade its content to a greater or lesser extent through what I have called "elastic downsampling". The elastic downsampling stage is an unprecedented technique in digital image processing: it takes more or fewer samples in different areas of an image according to their perceptual relevance. This thesis takes the first steps towards what may become a new standard multimedia compression format (image, video and audio), patent-free and high-performance in both speed and quality.
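
A toy sketch of the Weber-law idea behind LHE's "hops": each prediction error is quantized onto a small set of levels whose magnitudes grow geometrically, so equal perceptual steps correspond to multiplicative luminance steps. The parameter values and names here are hypothetical illustrations, not the published LHE specification.

import numpy as np

def quantize_hop(err, h1=4.0, ratio=2.0, n_pos=4):
    # Positive hop magnitudes grow geometrically (h1, h1*r, h1*r^2, ...),
    # mirroring Weber's law: equal perceptual steps, multiplicative amplitude.
    hops = h1 * ratio ** np.arange(n_pos)
    levels = np.concatenate([-hops[::-1], [0.0], hops])
    return levels[np.abs(levels - err).argmin()]

print(quantize_hop(11.0))   # -> 8.0, the nearest of {..., -8, -4, 0, 4, 8, 16, ...}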

Relevance:

60.00%

Abstract:

Since wall-bounded turbulence was first recognized more than one century ago, its near-wall region (the buffer layer) has been studied extensively and has become relatively well understood, owing to its low local Reynolds number and narrow scale separation. The region far from the wall (the outer layer) is also relatively unproblematic, since its statistics scale well in outer units. The intermediate region, the logarithmic layer, is nowadays receiving increasing attention due to its self-similar properties. Flores et al. (2007b) and Flores & Jiménez (2010) show that the statistics of the logarithmic layer are largely independent of the other layers, implying that it might be possible to study it separately, which would significantly reduce the computational cost of simulating wall-bounded turbulence. Some attempts were made later by Mizuno & Jiménez (2013), who simulated the logarithmic layer without the buffer layer, with the obtained statistics agreeing reasonably well with those of full simulations. Moreover, the logarithmic layer might be mimicked by other, simpler shear-driven turbulence. For example, Pumir (1996) found that statistically stationary homogeneous shear turbulence (SS-HST) also bursts, in a manner strikingly similar to the self-sustaining process in wall-bounded turbulence. Based on these considerations, this thesis tries to reveal to what extent the logarithmic layer of channels is similar to the simplest shear-driven turbulence, SS-HST, by comparing both the kinematics and the dynamics of coherent structures in the two flows. Results for the channel are those of Lozano-Durán et al. (2012) and Lozano-Durán & Jiménez (2014b). The roadmap of this task is divided into three stages.

First, SS-HST is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and using compact finite differences in the direction of the shear. No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are 'minimal' in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx > 2Lz, Ly > Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx > 2Ly, couple with the passing period of the shear-periodic boundary condition and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities to and differences from other shear flows, and in particular from the logarithmic layer of wall-bounded turbulence; they are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting. The bursting time scale is approximately universal, ~20 S^-1 (where S is the mean shear rate), and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system for studying shear turbulence in general.

Second, the same coherent structures as in the channels studied by Lozano-Durán et al. (2012), namely three-dimensional vortex clusters (strong dissipation) and Qs (strong tangential Reynolds stress, -uv), are studied by direct numerical simulation of SS-HST with acceptable box aspect ratios and Reynolds numbers up to Re_λ ~ 250 (based on the Taylor microscale). The influence of intermittency on the time-independent threshold is discussed. These structures have streamwise elongations similar to those of detached families in channels until they become comparable in size to the box. Their fractal dimensions and their inner and outer lengths as functions of volume agree well with their counterparts in channels. The study of their spatial organization shows that Qs of the same type are aligned roughly in the direction of the velocity vector of the quadrant they belong to, while Qs of different types are constrained by the requirement that there be no velocity clash, which makes Q2s (ejections, u < 0, v > 0) and Q4s (sweeps, u > 0, v < 0) pair in the spanwise direction. This is verified by inspecting velocity structures of other quadrants, such as uw and vw, in SS-HST, and also detached families in the channel. The streamwise alignment of attached Qs of the same type in channels is due to the modulation of the wall. The average flow field conditioned to Q2-Q4 pairs shows that vortex clusters lie in the middle of the pair, but prefer the two shear layers lodging at the top and bottom of Q2s and Q4s respectively, so that the spanwise vorticity inside vortex clusters does not cancel. The wall amplifies the difference between the sizes of the low- and high-speed streaks associated with attached Q2-Q4 pairs as the pairs get closer to the wall, which is verified by the correlation of the streamwise velocity conditioned to attached Q2s and Q4s of different heights. Vortex clusters in SS-HST associated with Q2s or Q4s are also flanked by counter-rotating streamwise vortices in the spanwise direction, as in the channel. The long conical 'wake' originating from tall attached vortex clusters found by del Álamo et al. (2006) and Flores et al. (2007b), which disappears in SS-HST, holds only for tall attached clusters associated with Q2s and not for those associated with Q4s, whose averaged flow field is actually quite similar to that in SS-HST.

Third, the temporal evolutions of Qs and vortex clusters are studied using the method introduced by Lozano-Durán & Jiménez (2014b). Structures are sorted into branches, which are further organized into graphs. Both the spatial and temporal resolutions are chosen to capture the most probable pointwise Kolmogorov length and time at the most extreme moment. Due to the minimal-box effect, there is only one main graph, consisting of almost all the branches, whose instantaneous volume and number of structures follow the intermittent kinetic energy and enstrophy. The lifetime of branches, which makes more sense for primary branches, loses its meaning in SS-HST because the contributions of primary branches to the total Reynolds stress or enstrophy are almost negligible; this is also true in the outer layer of channels. Instead, the lifetime of graphs in channels is compared with the bursting time in SS-HST. Vortex clusters are associated with almost the same quadrant, in terms of their mean velocities, throughout their lifetime, especially those related to ejections and sweeps. As in channels, ejections in SS-HST move upwards with an average vertical velocity uτ (the friction velocity), while the opposite is true for sweeps; vortex clusters, on the other hand, are almost stationary in the vertical direction. In the streamwise direction, the structures are advected by the local mean velocity and thus deformed by the mean velocity difference. Sweeps and ejections move respectively faster and slower than the mean velocity, both by 1.5 uτ, while vortex clusters move with the mean velocity. It is verified that the incoherence of structures near the wall is due to the wall itself rather than to their small size. The results strongly suggest that coherent structures in channels are not particularly associated with the wall, or even with a given shear profile.
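
For reference, the shear-periodic boundary condition mentioned above is commonly written (up to sign conventions, which vary between codes) as vertical periodicity with a time-dependent streamwise shift of the fluctuating field; the shift returning to zero defines the 'passing period' that very long boxes couple with:

\[
u'\,(x,\; y + L_y,\; z,\; t) \;=\; u'\big((x - S L_y t) \bmod L_x,\; y,\; z,\; t\big),
\qquad
T_{\mathrm{pass}} \;=\; \frac{L_x}{S L_y}.
\]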

Relevance:

60.00%

Abstract:

In pre-surgical decisions for hospital emergency cases, fast and reliable results for the solid and fluid mechanics problems involved are of great interest to clinicians. In the current investigation, an iterative process based on a pressure-type boundary condition is proposed in order to reduce the computational cost of blood flow simulations in arteries without losing control of the clinically important parameters. The incorporation of cardiovascular autoregulation, together with the well-known impedance boundary condition, forms the basis of the proposed methodology. With autoregulation, the instabilities associated with conventional pressure-type or impedance boundary conditions are avoided without an excessive increase in computational cost. The general behaviour of pulsatile blood flow in arteries, which is important from the clinical point of view, is well reproduced by this new methodology. In addition, the interaction between the blood and the arterial walls is handled via a modified weak coupling, which makes the simulation more stable and computationally efficient. Based on in vitro experiments, the hyperelastic behaviour of the wall is characterised and modelled. The applications and benefits of the proposed pressure-type boundary condition are shown in a model of an idealised aortic arch with and without an ascending aortic dissection, a common cardiovascular disorder.
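
A minimal sketch of the autoregulation idea applied to a pressure-type outlet, assuming a simple resistive relation p_out = p_distal + R*q and a per-cycle feedback on R; all names, values and the gain are hypothetical, and the thesis' actual controller and impedance model may well differ.

def outlet_pressure(q, R, p_distal):
    # Resistive pressure-type outlet imposed as boundary condition by the 3D solver.
    return p_distal + R * q

def update_resistance(R, q_mean, q_target, gain=0.1):
    # Autoregulation: raise R when the cycle-averaged flow exceeds its target,
    # lower it otherwise, nudging perfusion back towards the set point.
    return R * (1.0 + gain * (q_mean - q_target) / q_target)

R, q_target = 1.2e8, 8.3e-5                 # Pa*s/m^3 and m^3/s, illustrative only
for q_mean in (9.0e-5, 8.6e-5, 8.4e-5):     # stand-ins for solver output per cycle
    R = update_resistance(R, q_mean, q_target)
    print(R, outlet_pressure(q_mean, R, p_distal=1.0e4))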

Relevance:

30.00%

Abstract:

Services in smart environments aim to increase the quality of people's lives. Among the most important issues when developing such environments are testing and validating their services. These tasks usually imply high costs, and real-world testing can be burdensome or unfeasible. In such cases, artificial societies may be used to simulate the smart environment (i.e. the physical environment, equipment and humans). With this aim, the CHROMUBE methodology guides test engineers when modelling human beings. Such models reproduce behaviours that are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables, and the automaton's structure and the probability distribution function of each random variable are determined by a manual trial-and-error process. This paper presents an alternative extension of this methodology that avoids this manual process: human behaviour patterns are learned automatically from sensor data using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has yielded highly accurate human behaviour models.
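
A minimal sketch of the automatic alternative, assuming behaviour logs have been reduced to sequences of discrete states: the automaton's transition probabilities are estimated from sensor-derived sequences instead of manual trial and error. Names are hypothetical, and CHROMUBE's full approach also has to learn the probability distributions governing each transition, which this sketch omits.

from collections import Counter, defaultdict

def learn_transitions(sequences):
    # Count observed state-to-state transitions, then normalize per state.
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

# Hypothetical sensor-derived daily routines of one simulated inhabitant.
logs = [["sleep", "wake", "kitchen", "leave"],
        ["sleep", "wake", "bathroom", "kitchen", "leave"]]
print(learn_transitions(logs))
# e.g. {'sleep': {'wake': 1.0}, 'wake': {'kitchen': 0.5, 'bathroom': 0.5}, ...}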