13 results for conservativeness.


Relevance: 10.00%

Abstract:

Partially grouted masonry walls subjected to in-plane shear exhibit complex behaviour because of the influence of the aspect ratio, the pre-compression, the grouting pattern, the horizontal and vertical reinforcement ratios, the boundary conditions and the characteristics of the constituent materials. The existing in-plane shear expressions for partially grouted masonry are formulated as the sum of three strength contributions: the masonry, the reinforcement and the axial force. The 'masonry' term includes the wall aspect ratio and the masonry compressive strength; the aspect ratio of the unreinforced panels inscribed between the grouted cores and bond beams is not considered, although failure is often dominated by these unreinforced masonry panels. This paper describes the dominance of these panels, particularly squat ones, over the shear capacity of whole shear walls. Further, the current design formulae have been shown by many researchers to be highly unconservative; this paper provides a potential reason for this unconservativeness. It is shown that, by including an additional term for the unreinforced panel aspect ratio, a rational design formula can be established. The new expression is validated against independent test results reported in the literature, both Australian and overseas; the predictions are shown to be conservative.
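As an illustration of the expression structure described above (generic notation only, not the paper's actual formula; all symbols below are assumptions), codified shear capacities take a three-term form, and the remedy proposed in the abstract amounts to scaling the masonry term by a factor that depends on the aspect ratio of the unreinforced panel:

```latex
% Generic three-term shear expression for partially grouted masonry
% (illustrative notation: V_m = masonry, V_s = reinforcement, V_p = axial force)
V_n \;=\; V_m\!\left(a_w,\, f'_m\right) \;+\; V_s \;+\; V_p
% Hypothetical modification per the abstract: include the aspect ratio a_p
% of the unreinforced panel inscribed between grouted cores and bond beams
\tilde{V}_m \;=\; k\!\left(a_p\right)\, V_m\!\left(a_w,\, f'_m\right),
\qquad a_p = h_p / l_p
```

Here a_w is the wall aspect ratio, f'_m the masonry compressive strength, and h_p, l_p the height and length of the inscribed unreinforced panel.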

Relevance: 10.00%

Abstract:

A numerical model for the shallow-water equations has been built and tested on the Yin-Yang overset spherical grid. A high-order multimoment finite-volume method is used for the spatial discretization, in which two kinds of so-called moments of the physical field, namely the volume-integrated average (VIA) and the point value (PV), are treated as the model variables and updated separately in time. In the present model, the PV is computed by a semi-implicit semi-Lagrangian formulation, whereas the VIA is predicted in time via a flux-based finite-volume method and is numerically conserved on each component grid. The concept of including an extra moment (i.e., the volume-integrated value) to enforce numerical conservation provides a general methodology and applies to existing semi-implicit semi-Lagrangian formulations. Based on both VIA and PV, the high-order interpolation reconstruction needs only a single grid cell, which minimizes the overlapping zone between the Yin and Yang components and effectively reduces the numerical errors introduced by the interpolation required to communicate data between the two components. The present model completely avoids the polar singularity and grid-convergence problems of the conventional longitude-latitude grid. An issue demanding further investigation is that the high-order interpolation across the overlapping region of the Yin-Yang grid in the current model does not rigorously guarantee numerical conservation. Nevertheless, the numerical tests show that the global conservation error in the present model is negligibly small. The model has competitive accuracy and efficiency.
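For concreteness, a one-dimensional sketch of the two moments (standard multimoment definitions; the cell notation is assumed here): the VIA is the conserved, flux-updated variable, while the PV is the variable advanced by the semi-implicit semi-Lagrangian scheme.

```latex
% Two moments of q on cell C_i = [x_{i-1/2}, x_{i+1/2}] (1D illustration):
\overline{q}_i(t) \;=\; \frac{1}{\Delta x}\int_{x_{i-1/2}}^{x_{i+1/2}} q(x,t)\,\mathrm{d}x
\qquad \text{(VIA: volume-integrated average, numerically conserved)}
q_{i\pm 1/2}(t) \;=\; q\!\left(x_{i\pm 1/2},\, t\right)
\qquad \text{(PV: point value, updated semi-implicitly / semi-Lagrangian)}
```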

Relevance: 10.00%

Abstract:

A new high-order finite volume method based on local reconstruction is presented in this paper. The method, called the multi-moment constrained finite volume (MCV) method, uses the point values defined at equally spaced points within a single cell as the model variables (or unknowns). The time evolution equations used to update the unknowns are derived from a set of constraint conditions imposed on multiple kinds of moments, i.e., the cell-averaged value and the point-wise values of the state variable and its derivatives. The finite volume constraint on the cell average guarantees the numerical conservativeness of the method. Most constraint conditions are imposed on the cell boundaries, where the numerical flux and its derivatives are solved as generalized Riemann problems. A multi-moment constrained Lagrange interpolation reconstruction of the required order of accuracy is constructed over a single cell and converts the evolution equations of the moments into those of the unknowns. The presented method provides a general framework for constructing efficient high-order schemes. The basic formulations for hyperbolic conservation laws on 1D and 2D structured grids are detailed, with numerical results for widely used benchmark tests.
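The conservation property claimed above follows from the flux-form constraint on the cell average; a one-dimensional sketch (standard finite-volume notation, assumed here) is:

```latex
% Flux-form evolution of the cell average for q_t + f(q)_x = 0:
\frac{\mathrm{d}\overline{q}_i}{\mathrm{d}t}
  \;=\; -\,\frac{1}{\Delta x}\left(\hat{f}_{i+1/2} - \hat{f}_{i-1/2}\right)
% Summing over cells telescopes the interface fluxes, so the total
% \sum_i \overline{q}_i \,\Delta x changes only through the domain boundaries.
```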

Relevance: 10.00%

Abstract:

Classical washout filters are extensively used in commercial motion simulators. Even though classical washout filters have several advantages, such as short processing time, simplicity and ease of adjustment, they also have shortcomings. The main disadvantage is that the fixed scheme and parameters of the classical washout filter make the structure inflexible, so the resulting simulator cannot suit all circumstances. Moreover, it is a conservative approach, and the platform cannot be fully exploited. The aim of this research is to present a fuzzy logic approach that takes the human perception error into account in the classical motion cueing algorithm, in order to exploit the physical limits of the platform more fully and to improve the realism of human sensation. The fuzzy compensator signal adjusts the filtered signals on the longitudinal and rotational channels online, as well as the tilt coordination, to reduce the vestibular sensation error to below the human perception threshold. The results indicate that the proposed fuzzy logic controllers significantly reduce the drawbacks of fixed parameters and conservativeness in the classical washout filter. In addition, the performance of the motion cueing algorithm and the human perception are improved in most situations.
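A minimal sketch of the idea, assuming a first-order high-pass washout on the longitudinal channel and a three-rule fuzzy compensator with hypothetical membership breakpoints and gains (the paper's actual channels, thresholds and rule base are not specified in the abstract):

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_compensation(error):
    """Map sensation error (m/s^2) to a corrective gain in [0.8, 1.2].
    Assumed rule base: negative error -> boost the cue, positive -> attenuate."""
    mu_neg = triangular(error, -2.0, -1.0, 0.0)
    mu_zero = triangular(error, -1.0, 0.0, 1.0)
    mu_pos = triangular(error, 0.0, 1.0, 2.0)
    # Weighted-average defuzzification over singleton output gains.
    w = mu_neg + mu_zero + mu_pos
    return 1.0 if w == 0 else (1.2 * mu_neg + 1.0 * mu_zero + 0.8 * mu_pos) / w

def washout_step(u, u_prev, y_prev, wc=1.0, dt=0.01):
    """First-order high-pass washout H(s) = s/(s + wc), bilinear discretization."""
    a = (2 - wc * dt) / (2 + wc * dt)
    b = 2 / (2 + wc * dt)
    return a * y_prev + b * (u - u_prev)

# Simulate a braking pulse and apply the fuzzy-adjusted washout online.
t = np.arange(0, 5, 0.01)
accel = np.where(t < 1.0, 3.0, 0.0)          # vehicle longitudinal acceleration
y = 0.0
cues = []
for k in range(1, len(t)):
    y = washout_step(accel[k], accel[k - 1], y)
    err = y - accel[k]                        # crude sensation-error proxy
    cues.append(fuzzy_compensation(err) * y)  # fuzzy-scaled platform cue

print(f"peak scaled cue: {max(cues):.2f} m/s^2")
```

In a full implementation the error would come from a vestibular (otolith/semicircular canal) perception model rather than the raw signal difference used in this toy loop.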

Relevance: 10.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 10.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 10.00%

Abstract:

The current stage of this ongoing research focuses on presenting and discussing two sets of sources used as aids in studying, in particular, the trajectory of the Brazilian Catholic thinker Alceu Amoroso Lima, as well as the intellectuals who belonged to his group. The sets are: a selection of works by thinkers of the eighteenth and nineteenth centuries (Edmund Burke, Louis-Ambroise de Bonald, Joseph de Maistre and Juan Donoso Cortés); and the journal A Ordem, a publication of Catholic orientation created in the 1920s that was the main vehicle of expression of the lay Catholic intelligentsia. The first set of sources certainly provides elements for thinking about the articles by Catholic intellectuals published in A Ordem in 1930s and 1940s Brazil, particularly by Alceu Amoroso Lima, the leading intellectual of the group of people who wrote in its pages.

Relevance: 10.00%

Abstract:

In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow for the detection of design errors in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence on how the shape of components that come too close has to be modified.

In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. We show that the one-sided Hausdorff distance is the adequate measure for the error of the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness.

The benchmarks for our tests are, among others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to the triangles of the generator that come closest during the sweep. We use this to verify, as well as to colorize, meshes resulting from our implementations.
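The error measure named above has a compact standard definition. Writing M for the approximation boundary and S for the exact swept volume boundary (symbols assumed here), the one-sided Hausdorff distance from M to S is:

```latex
% One-sided Hausdorff distance (standard definition):
d\!\left(M \rightarrow S\right) \;=\; \sup_{m \in M}\; \inf_{s \in S}\; \lVert m - s \rVert
% Conservativeness means S is enclosed by the approximation, so d(M -> S)
% bounds how far the approximation can locally deviate outward from S.
```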

Relevance: 10.00%

Abstract:

Deterministic Safety Analysis (DSA) is the procedure used in the design of the safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes and define the regulatory acceptance criteria (RAC), restrictions that the values of those magnitudes must fulfil. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions and must be supplemented with uncertainty analyses of their main results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies and typically represent uncertainty probabilistically.

For conservative methodologies, the RAC simply restrict the calculated values of the safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This thesis develops how to introduce uncertainty into the RAC. Basically, confinement to the same acceptance region, established by the regulator, is maintained, but strict fulfilment is not demanded; rather, a high level of certainty is required, understood in the adopted formalism as a high probability of fulfilment. The calculation uncertainty of the magnitudes is considered to originate in the inputs to the calculation model and to propagate through it; the uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which carry the uncertainty due to model imperfection. The RAC must therefore be fulfilled with a probability not less than a value P0 close to 1 and defined by the regulator (the probability or coverage level). Calculation uncertainty is not the only uncertainty involved: even if a model (i.e., its basic equations) is perfectly known, the input-output mapping it produces is imperfectly known (unless the model is very simple). This ignorance about the action of the model is called epistemic uncertainty, and it propagates to the probability of fulfilling the RAC, which thus becomes an uncertain magnitude itself. Another term used in this thesis for this epistemic uncertainty is metauncertainty.

The RAC must incorporate both types of uncertainty: the uncertainty in the calculation of the safety magnitude (here called aleatory) and the uncertainty in the calculation of the probability (epistemic, or metauncertainty). The two can be introduced separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the former case, the regulator must impose a second level of fulfilment, referring to the epistemic uncertainty, termed the regulatory confidence level, a value close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is to separate the uncertainties, for two reasons: experts advocate the separate treatment of aleatory and epistemic uncertainty, and the separated RAC is (except in exceptional cases) more conservative than the combined one. The BEPU RAC is nothing other than a hypothesis on a probability distribution, and its verification is performed statistically. The thesis classifies the statistical methods for verifying RAC fulfilment into three categories: methods based on the construction of tolerance regions, on quantile estimation, and on probability estimation (whether of fulfilment or of exceedance of regulatory limits). Following recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the very numerous and varied methods in each category, but to relate the categories and to cite the most used and best regarded methods from a regulatory standpoint. The most used method to date, the nonparametric method of Wilks, together with its extension by Wald to the multidimensional case, deserves special mention; its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU realm, is also described.

In this context, the problem of the computational cost of an uncertainty analysis is tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each element of the sample is a value of the safety magnitude that must be calculated with the predictive models (codes). Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, i.e., when the RAC is a multiple criterion. It is proved that when the components of the magnitude are outputs of the same calculation, the multidimensional character introduces no additional computational cost; this disproves a widespread belief in the BEPU realm, namely that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) where each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided.

The earliest BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original model, but the capacity to replace it exclusively in the propagation of uncertainties. To this end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a previous importance or sensitivity analysis. Because of its simplicity, the surrogate model is practically inexpensive to run and can be studied exhaustively, for example by Monte Carlo sampling; consequently, the epistemic uncertainty (metauncertainty) practically disappears, and the BEPU criterion for metamodels reduces to a single probability. In short, the regulator will more readily accept statistical methods that need fewer assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.

The BEPU RAC is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region is not only a success probability or degree of fulfilment of the RAC; it also has a metric interpretation, representing a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation motivates a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of the magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it can be combined according to the laws of probability, and it is easily generalized to several dimensions; moreover, it is not symmetric. The term safety margin applies to several situations: the distance from a calculated magnitude to a regulatory limit (licensing margin), from the real value of the magnitude to its calculated value (analytical margin), or from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances in the range of safety magnitudes by probabilities can be applied to the study of conservativeness: the analytical margin can be interpreted as the degree of conservativeness (DG) of the calculation methodology. Using probability, the conservativeness of tolerance limits of a magnitude can be quantified, and conservativeness indicators can be established to compare different methods of constructing tolerance limits and regions.

A topic that had never been rigorously tackled is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such comparisons can only be made for accident scenarios in which measured values of the safety magnitudes exist, which basically means experimental facilities. The ultimate goal of establishing RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfilment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation; the validation criterion must be demonstrated in experimental scenarios and extrapolated to NPPs. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are essentially complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means the required DG is small. Adopting lower values of P0 would imply a weaker requirement on RAC fulfilment and, in exchange, a stronger requirement on the DG of the methodology. It is important to highlight that the higher the minimum value of the (licensing or analytical) margin, the higher the computational cost of demonstrating it, so the computational efforts are also complementary: if one of the levels is high (increasing the stringency of the criterion), the computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, the methodology need not be very conservative, and the total computational cost (licensing plus validation) can be optimized.
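Two quantitative notions from this abstract admit compact standard statements: the Wilks minimum sample size for a one-sided, first-order nonparametric tolerance limit (well established in the BEPU literature), and the thesis's probabilistic safety margin. Here gamma and beta denote the coverage and confidence components of the regulatory tolerance level:

```latex
% Wilks (one-sided, first order): the sample maximum of n i.i.d. code runs is an
% upper tolerance limit with coverage gamma and confidence beta iff
1 - \gamma^{\,n} \;\ge\; \beta
\quad\Longleftrightarrow\quad
n \;\ge\; \frac{\ln(1 - \beta)}{\ln \gamma}
\qquad \left(\gamma = \beta = 0.95 \;\Rightarrow\; n = 59\right)

% Probabilistic safety margin between two uncertain values A and B of a
% safety magnitude, with "A less severe than B" written as A < B:
\mathrm{SM}(A, B) \;=\; \Pr\!\left(A < B\right)
```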

Relevance: 10.00%

Abstract:

Cultural inheritance can be considered a mechanism of adaptation made possible by communication, which has reached its greatest development in humans and can allow long-term conservation or rapid change of culturally transmissible traits, depending on circumstances and needs. Conservativeness/flexibility is largely modulated by the mechanisms of sociocultural transmission. An analysis was carried out by testing the fit of three models to 47 cultural traits (classified into six groups) in 277 African societies. Model A (demic diffusion) is conservation over generations, as shown by correlations of cultural traits with language, used as a measure of historical connection. Model B (environmental adaptation) is measured by correlation with the natural environment. Model C (cultural diffusion) is the spread to neighbors by social contact in an epidemic-like fashion, and was tested by measuring the tightness of geographic clustering of the traits. Most traits examined, in particular those affecting family structure and kinship, showed great conservation over generations, as shown by the fit of model A; they are most probably transmitted by family members. This agrees with the theoretical demonstration that cultural transmission within the family (vertical) is the most conservative. Some traits show environmental effects, indicating the importance of adaptation to the physical environment. Only a few of the 47 traits showed tight geographic clustering, indicating that their spread to nearest neighbors follows model C, as is usually the case for transmission among unrelated people (called horizontal transmission).

Relevance: 10.00%

Abstract:

The success of model predictive control (MPC) strategies in both industrial and academic settings has been remarkable. Nevertheless, several questions in the field remain open, especially when the simplifying assumption of a perfect model is abandoned. The explicit consideration of uncertainties has led to important progress in robust control, but that field still presents some problems: high computational demand and excessive conservativeness are issues that may have hindered the practical application of robust control strategies. Stochastic model predictive control (SMPC) seeks to reduce conservativeness by incorporating statistical information about the noises. Since processes in the chemical industry are always subject to disturbances, whether due to plant-model mismatch or to unmeasured disturbances, this technique emerges as an interesting alternative for the future. The main objective of this thesis is the development of SMPC algorithms that take into account some specific characteristics of such processes which had not been adequately addressed in the literature. The most important contribution is the inclusion of integral action in the controller through a velocity-form description of the model. In addition, hard input constraints associated with physical or safety limits, and probabilistic state constraints, usually arising from product specifications, are also considered in the formulation. Two approaches were followed in this work: the first is more direct, while the second provides closed-loop stability guarantees at the cost of increased conservativeness. Another interesting topic developed in this thesis is zone control of systems subject to disturbances. This form of control is common in industry owing to the lack of degrees of freedom, and the proposed approach is the first contribution in the literature to combine zone control and SMPC. Several simulations of all the proposed controllers, and comparisons with models from the literature, are presented to demonstrate the application potential of the developed techniques.
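A standard velocity-form model of the kind alluded to above (a common construction for embedding integral action in MPC; the thesis's exact formulation may differ) is:

```latex
% Velocity (incremental) form of x_{k+1} = A x_k + B u_k, y_k = C x_k:
\Delta x_{k+1} \;=\; A\,\Delta x_k + B\,\Delta u_k,
\qquad
y_{k+1} \;=\; y_k + C\!\left(A\,\Delta x_k + B\,\Delta u_k\right)
% with \Delta x_k = x_k - x_{k-1} and \Delta u_k = u_k - u_{k-1}.
```

Regulating y while penalizing the increments Delta u yields offset-free behaviour under constant disturbances, which is how the velocity description supplies integral action.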

Relevance: 10.00%

Abstract:

In this study, the authors derive new refined Jensen-based inequalities, which encompass both the Jensen inequality and its most recent improvement based on the Wirtinger integral inequality. The potential of this approach is demonstrated through applications to stability analysis of time-delay systems. More precisely, using the newly derived inequalities, they establish new stability criteria for two classes of time-delay systems, namely systems with discrete and distributed constant delays and systems with interval time-varying delay. The resulting stability conditions are derived in terms of linear matrix inequalities, which can be solved efficiently by various convex optimisation algorithms. Numerical examples are given to show the effectiveness and the reduced conservativeness of the results obtained in this study.
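For reference, the two inequalities that the refined inequalities are stated to encompass (standard forms, for a symmetric positive definite matrix R and a differentiable function x on [a, b]):

```latex
% Jensen's integral inequality:
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}\,\omega_0^{\top} R\, \omega_0,
\qquad \omega_0 = x(b) - x(a)

% Wirtinger-based improvement (Seuret & Gouaisbaut, 2013):
\int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
  \;\ge\; \frac{1}{b-a}\,\omega_0^{\top} R\, \omega_0
        + \frac{3}{b-a}\,\omega_1^{\top} R\, \omega_1,
\qquad \omega_1 = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\, \mathrm{d}s
```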

Relevance: 10.00%

Abstract:

In this paper, new weighted integral inequalities (WIIs) are first derived based on Jensen's integral inequalities in single and double forms. It is shown theoretically that the newly derived inequalities encompass both the Jensen inequality and its most recent improvement based on Wirtinger's integral inequality. The potential of the WIIs is demonstrated through applications to exponential stability analysis of some classes of time-delay systems in the framework of linear matrix inequalities (LMIs). The effectiveness and the reduced conservativeness of the derived stability conditions are shown by various numerical examples.
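The double-form starting point mentioned above is the two-fold Jensen inequality; its standard statement (notation assumed, for symmetric positive definite R and delay tau > 0) is:

```latex
% Double (two-fold) Jensen integral inequality:
\int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta
  \;\ge\; \frac{2}{\tau^{2}}\,\eta^{\top} R\, \eta,
\qquad
\eta = \tau\, x(t) - \int_{t-\tau}^{t} x(s)\, \mathrm{d}s
```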