669 results for Categorization


Relevância:

10.00%

Publicador:

Resumo:

This doctoral thesis deepens the knowledge of tree heritage, addressing evaluation and singularity at the level of both individual specimens and groups. The methodology incorporates new tools, models, and criteria used in the assessment of landscape and natural resources. The tree, as a generator of space and habitats, alone or in groups, is a link between nature and human beings, between communities and their customs. From these parameters, the thesis investigates the processes that allow the meaning, importance, and value of a tree to be estimated so that it may be considered a Singular and/or Monumental ("heritage") tree. The study is based on cataloguing systems, following the recognition, localization, and selection of specimens. It also explores the systemic relationship between tree and environment, highlighting the importance of trees in shaping certain cultural and ecological landscapes, such as the ancient pollarded oak woods of the Basque Country. Starting from a first inventory, a detailed study of each registered tree is carried out and, in a parametric procedure, criteria for selecting and evaluating elements are defined: ecological and landscape-related, ethnographic and cultural. Obtaining different singularity indices for the trees, using both qualitative and quantitative models, provides a path toward a categorization of the sampled trees.

Starting from the figure of the "Singular Tree", established in Law 16/1994 on Nature Conservation of the Basque Country, the legislative framework and the protection regime are reviewed at local, regional, and national level. This review reveals the diversity of contexts and meanings under which trees are presented, as well as a certain (in)definition: an ambiguity around the definition that leads to different interpretations and nomenclatures in the attempt to delimit the category for legal regulation. These figures, conceived within environmental protection policies, are not always fully effective. The Catalogue of Singular Trees of the Basque Country, created by decree as an instrument to give value to these natural resources, has not been updated in almost twenty years. Nevertheless, initiatives to extend it have been undertaken, such as the work promoted by the Environment and Biodiversity Department of the Provincial Council of Álava to inventory the singular trees of the Historical Territory of Álava, together with the catalogue proposal from which this doctoral thesis develops. From these reflections and from the development of models for evaluating and cataloguing the registered specimens, the research seeks to decipher how we observe the trees with which we bond, how they are identified, through which other intangible parameters we give them value, and why we need to classify them. The work concludes with alternative proposals and actions for the conservation and improvement of the trees proposed as singular, among them dissemination and awareness-raising, to guarantee commitment and the future expansion of an open catalogue of trees of interest.
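The parametric evaluation described above aggregates ecological, landscape, ethnographic, and cultural criteria into singularity indices. As a rough illustration of that kind of scoring, the sketch below computes a weighted index for hypothetical specimens; the criterion names, weights, scores, and threshold are invented for the example and are not the thesis's actual model.

```python
# Hypothetical sketch: weighted singularity index for inventoried trees.
# Criteria names, weights, scores, and the threshold are illustrative
# assumptions only; the thesis's actual parametric model is not reproduced.

# Relative weights for each evaluation criterion (must sum to 1.0).
WEIGHTS = {
    "ecological": 0.30,
    "landscape": 0.25,
    "ethnographic": 0.25,
    "cultural": 0.20,
}

def singularity_index(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each on a 0-10 scale) into one index."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example: two inventoried specimens with assumed scores.
trees = {
    "pollarded oak, Euskadi": {"ecological": 9, "landscape": 8, "ethnographic": 9, "cultural": 7},
    "roadside plane tree":    {"ecological": 4, "landscape": 6, "ethnographic": 2, "cultural": 3},
}

for name, scores in trees.items():
    idx = singularity_index(scores)
    label = "Singular" if idx >= 7.0 else "not singular"  # assumed threshold
    print(f"{name}: index={idx:.2f} -> {label}")
```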

Relevância:

10.00%

Publicador:

Resumo:

Deterministic Safety Analysis (DSA) is the procedure used to design the safety-related systems, structures, and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, called Design Basis Scenarios (DBS). Regulatory authorities specify a set of safety magnitudes that must be calculated in the simulations and establish regulatory acceptance criteria (RAC): restrictions that the values of those magnitudes must satisfy. DSA methodologies can be of two types: conservative or realistic. Conservative methodologies use markedly pessimistic, and therefore relatively simple, predictive models and assumptions; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic, generally mechanistic, predictive models and assumptions, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and they represent uncertainty essentially in probabilistic terms. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined to an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be that simple, because the safety magnitudes are now uncertain variables.

The thesis develops how uncertainty is introduced into the RAC. Essentially, confinement to the same acceptance region established by the regulator is retained, but strict compliance is not demanded; instead, a high level of certainty is required, understood in the adopted formalism as a high level of probability, corresponding to the calculation uncertainty of the safety magnitudes. That uncertainty can be regarded as originating in the inputs to the calculation model and propagated through the model. Uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability no lower than a value P0 close to 1 and defined by the regulator (probability or coverage level). However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to this ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known exactly; it is itself an uncertain quantity, which justifies another term used here for this epistemic uncertainty: metauncertainty.

The RAC must incorporate both types of uncertainty: that of calculating the safety magnitude (here called aleatory) and that of calculating the probability (called epistemic, or metauncertainty). Both can be introduced in two ways, separately or combined, and in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. With a second-order probability, the regulator must impose a second compliance level referring to the epistemic uncertainty: the regulatory confidence level, a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The thesis argues that the best way to construct the BEPU RAC is by separating the uncertainties, for two reasons: experts advocate the separate treatment of aleatory and epistemic uncertainty, and the separated RAC is (except in exceptional cases) more conservative than the combined RAC.

The BEPU RAC is nothing other than a hypothesis about a probability distribution, and it is tested statistically. The thesis classifies the statistical methods for testing the BEPU RAC into three categories, according to whether they are based on the construction of tolerance regions, on quantile estimation, or on probability estimation (whether of compliance or of exceedance of regulatory limits). Following recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the numerous and varied methods in each category, but to relate the categories and cite the most widely used methods and those best regarded from the regulatory standpoint. Special mention is made of the method most used to date, Wilks' nonparametric method, together with Wald's extension of it to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described.

In this context, the problem of the computational cost of uncertainty analysis is addressed. The Wilks, Wald, and Clopper-Pearson methods require the random sample to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models (codes). Particular emphasis is placed on the computational cost when the safety magnitude is multidimensional, that is, when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be tackled with Wald's extension, whose computational cost grows with the dimension of the problem. In the (occasional) case in which each component of the magnitude is calculated independently of the others, the influence of dimension on cost cannot be avoided.

The earliest BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The purpose of the metamodel is not predictive capability, which is far inferior to that of the original model, but to replace it exclusively in the propagation of uncertainties. To that end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model is almost free to run and can be studied exhaustively, for example by Monte Carlo sampling. As a consequence, the epistemic uncertainty, or metauncertainty, practically disappears, and the BEPU criterion for metamodels becomes a simple probability. In brief, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.

The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region is not only a success probability or a degree of compliance with the RAC; it also has a metric interpretation, representing a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation leads to a definition proposed in this thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it ranges in the interval (0,1), it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); from the real value of the magnitude to its calculated value (analytical margin); or from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DC) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions.

One topic that has never been rigorously addressed is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made for accident scenarios with measured values of the safety magnitudes, obtained, essentially, in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by their calculated values. The thesis proves that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation. The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (the DC). These minimum levels are essentially complementary: the higher one is, the lower the other. Current regulatory practice imposes a high licensing margin, which implies that the required DC is small. Adopting lower values of P0 would mean a weaker requirement on RAC fulfillment and, in exchange, a stronger requirement on the DC of the methodology. It is important to note that the higher the minimum value of the margin (licensing or analytical), the higher the computational cost of demonstrating it, so the computational efforts are also complementary: if one of the levels is high (raising the demand on that criterion), its computational cost rises. If an intermediate value of P0 is adopted, the required DC is also intermediate, the methodology need not be very conservative, and the total computational cost (licensing plus validation) can be optimized.
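The minimum-sample-size requirement mentioned above is what drives the computational cost of Q-methods. The sketch below uses the standard first-order Wilks formula and the exact beta-distribution form of the one-sided Clopper-Pearson bound, both textbook results rather than anything specific to the thesis, to show the two sides of the same 95/95 criterion.

```python
# Sketch of the sample-size logic behind Wilks' method and the
# Clopper-Pearson interval mentioned above. Standard textbook formulas;
# the thesis's exact derivations are not reproduced here.
import math
from scipy.stats import beta

def wilks_min_sample_size(coverage: float, confidence: float) -> int:
    """Smallest n such that the sample maximum is a one-sided
    (coverage, confidence) tolerance limit: 1 - coverage**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def clopper_pearson_lower(k: int, n: int, confidence: float) -> float:
    """One-sided lower confidence bound on a success probability,
    given k successes in n runs (exact, based on the beta distribution)."""
    if k == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, k, n - k + 1)

# The classic 95/95 criterion requires 59 code runs.
print(wilks_min_sample_size(coverage=0.95, confidence=0.95))   # 59

# P-method view: 59 runs, all inside the acceptance region.
print(round(clopper_pearson_lower(k=59, n=59, confidence=0.95), 4))  # ~0.9505
```

With 59 runs all inside the acceptance region, the exact lower bound lands back at roughly 0.95, which illustrates the correspondence between Wilks' method and the Clopper-Pearson interval noted in the abstract.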

Relevância:

10.00%

Publicador:

Resumo:

Nowadays, limitations on mobility can be regarded as a form of discrimination that is both extreme and invisible within society. Avoiding this discrimination requires that transport policies, which until now have been based primarily on the needs of access to employment, recognize the demands of dependent persons and of those who perform the unpaid work of caring for others and looking after the family. People who carry out domestic work, in most cases women, find it very difficult to synchronize their obligations with times and distances. They perform a daily job, which takes place both inside and outside the home, and they have specific mobility needs. The main problem is that this type of work is usually not taken into account, since it does not fall within the sphere of paid employment. Yet it is work strictly tied to the activity of society and an indispensable element in the functioning of urban life. It is real work, which takes place in urban space, demands considerable physical and emotional effort, and helps guarantee the quality of everyday life. It is an indispensable aspect to consider in the making of public and social policies.

On the basis of these considerations, the concept of "mobility of care" (Sánchez de Madariaga, 2009a and 2009b) is introduced, which recognizes the need to measure and make visible the daily trips associated with care work. Care work is understood here as the unpaid work performed by adults for children or other dependent persons, including work related to the upkeep of the home. Analyzing this type of trip requires significant changes in the way statistical data are collected. It is not simply a matter of adding up the trips that currently appear in statistics as shopping, escorting, errands, caring for others, and so on. The problem is that mobility data are collected with a series of biases that undervalue care trips: statistics do not count short trips on foot, nor do they properly reflect trip chaining, both typically feminine patterns; care trips are not precisely distinguished from other kinds of trips, so many movements related to the reproductive sphere appear as personal or leisure trips and are often lumped into the category "other".

This research aims to estimate the weight of the mobility of care within total mobility and to describe it precisely within a specific geographic context, in this case Madrid. Studies on the subject to date recognize the need for mobility surveys that take into account the socioeconomic variables characterizing the population, and they call for the disaggregation of collected data by sex and the use of gender as an analytical category. It is equally indispensable to attribute the same importance to trips related to the productive sphere as to those related to the reproductive sphere. However, it is only through the introduction of the concept of "mobility of care" that a new categorization of trip purposes within "classic" mobility surveys is proposed, and, for the first time, this research applies the concept to a practical case, making evident the need for a change of focus in transport policies. Thus, through quantitative and qualitative surveys designed ad hoc on the basis of the proposed methodology, the relevant travel patterns are captured in order to describe exhaustively the mobility of people with care responsibilities. The aim is to create a broader knowledge base on mobility patterns, behaviors, and needs, as well as to improve operational concepts and establish more equitable transport policies that respond better to gender needs, thereby benefiting society as a whole.

Relevância:

10.00%

Publicador:

Resumo:

Neurotrophic factors such as nerve growth factor (NGF) promote a wide variety of responses in neurons, including differentiation, survival, plasticity, and repair. Such actions often require changes in gene expression. To identify the regulated genes and thereby to more fully understand the NGF mechanism, we carried out serial analysis of gene expression (SAGE) profiling of transcripts derived from rat PC12 cells before and after NGF-promoted neuronal differentiation. Multiple criteria supported the reliability of the profile. Approximately 157,000 SAGE tags were analyzed, representing at least 21,000 unique transcripts. Of these, nearly 800 were regulated by 6-fold or more in response to NGF. Approximately 150 of the regulated transcripts have been matched to named genes, the majority of which were not previously known to be NGF-responsive. Functional categorization of the regulated genes provides insight into the complex, integrated mechanism by which NGF promotes its multiple actions. It is anticipated that as genomic sequence information accrues the data derived here will continue to provide information about neurotrophic factor mechanisms.
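As a toy illustration of the 6-fold screen described above, the sketch below compares normalized tag frequencies between two SAGE libraries. The tag names, counts, library sizes, and pseudocount normalization are invented for the example and are not taken from the study.

```python
# Illustrative sketch of a SAGE fold-change screen: compare tag counts
# before and after NGF treatment. All names and numbers are invented;
# the pseudocount-based normalization is an assumption for the example.
TOTAL_BEFORE = 78_000   # assumed library sizes (tags sequenced)
TOTAL_AFTER = 79_000

tag_counts = {          # tag -> (count before NGF, count after NGF)
    "TAG_A": (2, 31),
    "TAG_B": (40, 38),
    "TAG_C": (18, 2),
}

def fold_change(before: int, after: int) -> float:
    """Ratio of library-normalized frequencies; pseudocount avoids /0."""
    f_before = (before + 1) / TOTAL_BEFORE
    f_after = (after + 1) / TOTAL_AFTER
    return f_after / f_before

for tag, (b, a) in tag_counts.items():
    fc = fold_change(b, a)
    if fc >= 6 or fc <= 1 / 6:          # the 6-fold regulation criterion
        direction = "up" if fc > 1 else "down"
        print(f"{tag}: {fc:.1f}x ({direction}-regulated)")
```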

Relevância:

10.00%

Publicador:

Resumo:

Little is known about the physiological mechanisms subserving the experience of air hunger and the affective control of breathing in humans. Acute hunger for air after inhalation of CO2 was studied in nine healthy volunteers with positron emission tomography. Subjective breathlessness was manipulated while end-tidal CO2 was held constant. Subjects experienced a significantly greater sense of air hunger breathing through a face mask than through a mouthpiece. The statistical contrast between the two conditions delineated a distributed network of primarily limbic/paralimbic brain regions, including multiple foci in the dorsal anterior and middle cingulate gyrus, insula/claustrum, amygdala/periamygdala, lingual and middle temporal gyrus, hypothalamus, pulvinar, and midbrain. This pattern of activations was confirmed by a correlational analysis with breathlessness ratings. The commonality of the regions of the mesencephalon, diencephalon, and limbic/paralimbic areas involved in primal emotions engendered by the basic vegetative systems, including hunger for air, thirst, hunger, pain, micturition, and sleep, is discussed with particular reference to the cingulate gyrus. A theory that the phylogenetic origin of consciousness lay in primal emotions engendered by immediate threats to the existence of the organism is discussed, along with Edelman's alternative hypothesis that primary awareness emerged with processes of ongoing perceptual categorization giving rise to a scene [Edelman, G. M. (1992) Bright Air, Brilliant Fire (Penguin, London)].

Relevância:

10.00%

Publicador:

Resumo:

While genome sequencing projects are advancing rapidly, EST sequencing and analysis remains a primary research tool for the identification and categorization of gene sequences in a wide variety of species and an important resource for the annotation of genomic sequence. The TIGR Gene Indices (http://www.tigr.org/tdb/tgi.shtml) are a collection of species-specific databases that use a highly refined protocol to analyze EST sequences in an attempt to identify the genes represented by those data and to provide additional information regarding those genes. Gene Indices are constructed by first clustering, then assembling, EST and annotated gene sequences from GenBank for the targeted species. This process produces a set of unique, high-fidelity virtual transcripts, or Tentative Consensus (TC) sequences. The TC sequences can be used to provide putative genes with functional annotation, to link the transcripts to mapping and genomic sequence data, to provide links between orthologous and paralogous genes, and to serve as a resource for comparative sequence analysis.
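As a toy illustration of the "first clustering, then assembling" step described above, the sketch below groups ESTs by shared exact substrings using single-linkage union-find. The sequences and the overlap threshold are invented, and the real pipeline relies on alignment-based comparison and a subsequent assembly stage, neither of which is reproduced here.

```python
# Toy sketch of EST clustering: group sequences that share a long exact
# substring, as a stand-in for alignment-based clustering. Data and the
# MIN_OVERLAP threshold are invented for illustration.
from itertools import combinations

MIN_OVERLAP = 8  # assumed minimum shared substring length

def overlaps(a: str, b: str, k: int = MIN_OVERLAP) -> bool:
    """True if any length-k substring of a occurs in b."""
    return any(a[i:i + k] in b for i in range(len(a) - k + 1))

ests = {
    "est1": "ATGGCGTACGTTAGC",
    "est2": "TACGTTAGCAAGGTC",   # overlaps est1
    "est3": "GGGTTTCCCAAATTT",   # unrelated
}

# Single-linkage clustering with union-find.
parent = {name: name for name in ests}

def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

for a, b in combinations(ests, 2):
    if overlaps(ests[a], ests[b]):
        parent[find(a)] = find(b)

clusters: dict[str, list[str]] = {}
for name in ests:
    clusters.setdefault(find(name), []).append(name)
print(list(clusters.values()))  # [['est1', 'est2'], ['est3']]
```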

Relevância:

10.00%

Publicador:

Resumo:

Granular materials, such as sand, gravel, powders, and pharmaceutical pills, are large aggregates of macroscopic, individually solid particles, or "grains." Far from being simple materials with simple properties, they display an astounding range of complex behavior that defies their categorization as solid, liquid, or gas. Just consider how sand can stream through the orifice of an hourglass yet support one's weight on the beach; how it can form patterns strikingly similar to a liquid when vibrated, yet respond to stirring by "unmixing" of large and small grains. Despite much effort, there is still no comprehensive understanding of granular materials of the kind that exists for other forms of matter, like ordinary fluids or solids. In what way, therefore, is granular matter special, and what makes it so difficult to understand? An emerging interdisciplinary approach to answering these questions focuses directly on the material's discontinuous granular nature.

Relevância:

10.00%

Publicador:

Resumo:

Adaptive capacity to floods refers to the inherent ability of individuals or of a system to adjust to the effects of such an event and to cope with it, so as to moderate its potential damage. The city of São Paulo is particularly vulnerable to floods because of its history of land use and occupation. The aim of this study is to analyze adaptive capacity from the local reality of residents of Jardim Pantanal, located in the eastern zone of the municipality of São Paulo on the floodplains of the Tietê river, in order to propose actions that can contribute to building that capacity. The research was carried out through documentary and bibliographic survey, semi-structured interviews, analysis of the transcripts, coding, and categorization of the data. Generic and specific adaptive capacities at the individual and system organizational levels are low, and among the determinants of adaptive capacity to floods, financial resources, urban vulnerability, and coping strategies were considered the most important at the individual level. The lack of resources, the irregularity of earnings, and the absence of diversified income sources limit the housing options available in regular areas and hinder the mobilization of resources for preventive measures and post-event recovery. Urban vulnerability is expressed in the occupation of an irregular area, where residents do not invest in infrastructure measures that could reduce exposure to flood impacts, since it is not known how long they will be allowed to remain in the area. The coping strategies prove to be merely reactive, without any planning, decided and taken as the water rises. In view of these findings, building adaptive capacity to floods in Jardim Pantanal requires: a) coordination between autonomous (individual) and planned (system) adaptation measures; b) anticipatory rather than responsive adaptation actions; and c) short- and long-term adaptation measures that take into account the vulnerabilities that emerged during the adaptation period.

Relevância:

10.00%

Publicador:

Resumo:

Contemporary therapeutic circles utilize the concept of anxiety to describe a variety of disorders. Emotional reductionism is a detriment to the therapeutic community and the persons seeking its help. This dissertation proposes that attention to the emotion of fear clarifies our categorization of particular disorders and challenges emotional reductionism. I propose that the emotion of fear, through its theological relationship to hope, is useful in therapeutic practice for persons who experience trauma and PTSD. I explore the differences between fear and anxiety by deconstructing anxiety. Through this process, I develop four categories which help the emotion of fear stand independent of anxiety in therapy. Temporality, behaviors, antidote and objects are categories which distinguish fear from anxiety. Together, they provide the impetus to explore the emotion of fear. Understanding the emotion of fear requires an examination of its neurophysiological embodiment. This includes the brain structures responsible for fear production, its defensive behaviors and the evolutionary retention of fear. Dual inheritance evolutionary theory posits that we evolved physically and culturally, helping us understand the inescapability of fear and the unique threats humans fear. The threats humans react to develop through subjective interpretations of experience. Sometimes threats, through their presence in our memories and imaginations, inhibit a person's ability to live out a preferred identity and experience hope. Understanding fear as embodied and subjective is important. Process theology provides a religious framework through which fear can be interpreted. In this framework, fear is developed as an adaptive human response. Moreover, fear is useful to the divine-human relationship, revealing an undercurrent of hope. In the context of the divine-human relationship fear is understood as an initial aim which protects a person from a threat, but also preserves them for novel future relationships. Utilizing a "double-listening" stance, a therapist hears the traumatic narrative and counternarratives of resistance and resilience. These counternarratives express an orientation towards hopeful futures wherein persons thrive through living out a preferred identity. A therapeutic practice incorporating the emotion of fear will utilize the themes of survival, coping and thriving to enable persons to place their traumatic narrative within their meaning systems.

Relevância:

10.00%

Publicador:

Resumo:

Specific training for conducting psychotherapy with gay men is limited for psychologists, particularly when using a Self Psychology theoretical orientation (Robertson, 1996). In fact, psychologists are often faced with conflicting and contradictory points of view that mirror society's condemnation of homosexuality (Robertson, 1996). This paper is written from a self-psychological perspective to address the lack of a constructive body of literature that explains the unique treatment needs that affect gay men. Estimates of the prevalence of male homosexuality have generated considerable debate. A common assumption is that there are homosexual and non-homosexual men. However, scientists have long been aware that sexual responsiveness to others of the same sex, like most human traits, is continuously distributed in the population (Michaels, 1996). Still, the presumption exists that such traits are stable within each man over time (Michaels, 1996). Conflating same-sex sexual experiences with a categorization of the man as homosexual is problematic, in that defining sexuality solely on the basis of experience excludes people who fantasize about sex with others of the same sex but never have sexual contact. Thus, most modern conceptions of sexual orientation consider personal identification, sexual behavior, and sexual fantasy (McWhirter, Sanders & Reinisch, 1990). Gay men's mental health can only be understood in the context of homosexuality throughout history, since religious and moral objections to sexual attraction between men have existed for centuries. Men who desired other men were regarded as sinful and depraved if not ill or abnormal, and same-sex contacts were not distinguished from lewd behaviors (Weeks, 1989). Although most people, regardless of sexual orientation, have experienced some feelings of personal rejection, rarely do heterosexuals become targets for disapproval based on the nature of their attractions and behaviors relative to the same and to the other sex. For lesbians, bisexuals, and gay men, however, homosexuality becomes the focus of aspects of themselves that make them feel hated and hateful (Isay, 1989). While gay men and lesbians are often considered together because of the same-sex nature of their relationships and the similar issues that they may experience in their treatment within society, there are many issues where they might be best studied separately. Issues involving health, parenthood, sexuality, and perceived roles and status in society, for example, are often related more to gender than to any shared concept of a 'gay and lesbian community'. Many issues surrounding lesbians and lesbian culture will have more to do with women's issues, and some issues involving gay men will have more to do with the gay male subculture and with masculinity. The author of this paper has limited experience in working with lesbian and bisexual individuals, and although it is likely that some of the concepts articulated in this paper could translate to working with lesbian and bisexual individuals, further research is indicated to examine the benefit of utilizing a Self Psychological orientation in psychotherapy with lesbian women and bisexual individuals.
This paper presents an overview of the literature including historical treatments of homosexuality, the history of Self Psychology, key principles in Self Psychology, research on Self Psychology, identity development models for gay men, and Self Psychological perspectives on identity development related to gay men. The literature review is followed by a section on treatment implications for psychologists seeking to treat gay men, including case vignettes based on work from my own practice. I have preserved the anonymity of clients by changing demographics, and rearranging and combining presenting issues and historical backgrounds among the case examples.

Relevância:

10.00%

Publicador:

Resumo:

An image contains information that must be organized in order to interpret and understand its content. There are several computational techniques for extracting the main information from an image, and they can be divided into three areas: color, texture, and shape analysis. One of the most important is shape analysis, which describes characteristics of objects based on their boundary points. We propose a method for characterizing images, through shape analysis, based on the spectral properties of the graph Laplacian. The procedure builds graphs G from the boundary points of the object, with connections between vertices determined by thresholds T_l. From the graphs, the adjacency matrix A and the degree matrix D are obtained, which define the Laplacian matrix L = D - A. The spectral decomposition of the Laplacian matrix (its eigenvalues) is investigated to describe image characteristics. Two approaches are considered: a) analysis of a feature vector based on thresholds and histograms, which takes two parameters, the class interval IC_l and the threshold T_l; b) analysis of a feature vector based on several thresholds for fixed eigenvalues, namely the second and the last eigenvalue of the matrix L. The techniques were tested on three image collections: synthetic images (Generic), intestinal parasites (SADPI), and plant leaves (CNShape), each with its own characteristics and challenges. To evaluate the results, we employed a support vector machine (SVM) classifier, which assesses our approaches by measuring the separability of the categories. The first approach achieved an accuracy of 90% on the Generic collection, 88% on SADPI, and 72% on CNShape. The second approach achieved 97% on the Generic collection, 83% on SADPI, and 86% on CNShape. The results show that classifying images from the Laplacian spectrum categorizes them satisfactorily.
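A minimal sketch of the descriptor described above follows: boundary points are connected when their distance falls below a threshold T_l, the Laplacian L = D - A is formed, and its sorted eigenvalues serve as features. The toy contour and threshold value are illustrative; the histogram-based feature vectors and the SVM training stage are not reproduced.

```python
# Minimal sketch of the spectral shape descriptor: threshold graph on
# boundary points, L = D - A, sorted eigenvalues as features. The contour
# and t_l value are toy choices for illustration.
import numpy as np

def laplacian_spectrum(points: np.ndarray, t_l: float) -> np.ndarray:
    """Eigenvalues of L = D - A for the threshold graph on boundary points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = ((dist < t_l) & (dist > 0)).astype(float)  # adjacency matrix
    D = np.diag(A.sum(axis=1))                     # degree matrix
    L = D - A
    return np.sort(np.linalg.eigvalsh(L))          # L is symmetric

# Toy contour: points sampled on a unit circle.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
contour = np.c_[np.cos(theta), np.sin(theta)]

eigs = laplacian_spectrum(contour, t_l=0.5)
print(eigs[1], eigs[-1])  # second and last eigenvalues, used as features
```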

Relevância:

10.00%

Publicador:

Resumo:

Common bean is a major dietary component in several countries, but its productivity is negatively affected by abiotic stresses. Dissecting candidate genes involved in abiotic stress tolerance is a paramount step toward the improvement of common bean performance under such constraints. This thesis therefore presents a systematic analysis of the DEHYDRATION RESPONSIVE ELEMENT-BINDING (DREB) gene subfamily, which encompasses genes that regulate several processes during stress responses but for which information in common bean is limited. First, a series of in silico analyses with sequences retrieved from the P. vulgaris genome on Phytozome supported the categorization of 54 putative PvDREB genes distributed within six phylogenetic subgroups (A-1 to A-6) along the 11 chromosomes. Second, we cloned four novel PvDREB genes and determined the stress factors that induce them, including the dehydration-, salinity-, and cold-inducible genes PvDREB1F and PvDREB5A, and the dehydration- and cold-inducible genes PvDREB2A and PvDREB6B. Afterwards, nucleotide polymorphisms were searched by Sanger sequencing along those genes, revealing a high number of single nucleotide polymorphisms within PvDREB6B in the comparison of Mesoamerican and Andean genotypes. The nomenclature of PvDREB6B is discussed in detail. Furthermore, we used the BARCBean6K_3 SNP platform to identify and genotype the closest SNP to each of the 54 PvDREB genes. We selected PvDREB6B for a broader study encompassing a collection of wild common bean accessions of Mesoamerican origin. The population structure of the wild beans was assessed using sequence polymorphisms of PvDREB6B. The genetic clusters were partially associated with variation in latitude, altitude, precipitation, and temperature across the areas where these beans are distributed. With an emphasis on drought stress, an adapted tube-screening method under greenhouse conditions enabled the phenotyping of several drought-related traits in the wild collection. Interestingly, our data revealed correlations between root depth, plant height, and biomass and the environmental data of the accessions' locations. Correlation was also observed between the population structure determined through PvDREB6B and the environmental data. An association study combining data from the SNP array and DREB polymorphisms enabled the detection of SNPs associated with drought-related traits through a compressed mixed linear model (CMLM) analysis. This thesis highlighted important features of DREB genes in common bean, revealing candidates for further strategies aimed at the improvement of abiotic stress tolerance, with emphasis on drought tolerance.

Relevância:

10.00%

Publicador:

Resumo:

In organizations that operate according to a service logic there is a strategic shift, with production moving from a product to a value. Following this dynamic, proposals emerged in Public Health for care models alternative to the hegemonic one, which is centered on procedures and equipment. This study analyzed the Family Health Strategy model, whose proposal centers on the needs of the user and on the bond between the user and the multiprofessional team, with the objective of investigating how work organization and working conditions influence the teams' use of immaterial resources. It consisted of a case study carried out with eight family health teams in the municipality of Caraguatatuba/SP. The methodology comprised direct observation and focus groups with the teams' professionals. The analysis covered the categorization of the most relevant themes, especially those related to the use and development of immaterial resources. The results indicated that, although the professionals valued relational aspects, the teams' work process was centered on the production of procedures and quantitative information about visits, not incorporated into care practices. Immaterial resources, as well as their results, had no systematic form of evaluation, and thus faced challenges to being appropriated and developed as knowledge by the organization.

Relevância:

10.00%

Publicador:

Resumo:

Biotic indices have been developed to summarize the information provided by benthic macroinvertebrates, but their use can require specialized taxonomic expertise and time-consuming sample processing. Using a higher taxonomic level in biotic indices reduces processing time but should be considered with caution, since assigning tolerance levels to high taxonomic ranks may introduce uncertainty. A methodology for family-level tolerance categorization, based on the affinity of each family with disturbed or undisturbed conditions, was employed. This family tolerance classification approach was tested in two different areas of the Mediterranean Sea affected by sewage discharges. The biotic indices employed at family level responded correctly to the presence of sewage. However, in areas with different communities among stations and high species diversity within each family, assigning the same tolerance level to a whole family could introduce errors. Thus, the use of a high taxonomic level in biotic indices should be restricted to areas with a homogeneous community, where families across sites have similar species composition.
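As a hedged sketch of the affinity-based family categorization described above, the code below scores each family by the share of its total abundance found at disturbed stations. The family counts and the 0.7 cutoff are invented for illustration and are not the paper's actual classification rule.

```python
# Illustrative sketch: classify a family as tolerant, sensitive, or
# indifferent from how its abundance splits between disturbed and
# undisturbed stations. Counts and the cutoff are invented examples.
family_counts = {  # family -> (abundance at undisturbed, at disturbed sites)
    "Capitellidae": (12, 230),
    "Ampeliscidae": (180, 9),
    "Spionidae": (95, 110),
}

def tolerance_class(undisturbed: int, disturbed: int, cutoff: float = 0.7) -> str:
    """Classify by the fraction of total abundance found at disturbed sites."""
    affinity = disturbed / (undisturbed + disturbed)
    if affinity >= cutoff:
        return "tolerant"          # thrives under sewage disturbance
    if affinity <= 1 - cutoff:
        return "sensitive"         # associated with undisturbed conditions
    return "indifferent"

for family, (u, d) in family_counts.items():
    print(family, tolerance_class(u, d))
# Capitellidae tolerant / Ampeliscidae sensitive / Spionidae indifferent
```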

Relevância:

10.00%

Publicador:

Resumo:

In this study, gastronomy in the nineteenth-century Spanish press was analyzed through the online catalogue of the Biblioteca Nacional. The results, almost all previously unpublished, reveal the significant presence of gastronomy in news items, articles, and reports, and its growth from 1860 onward, above all in the last two decades of the century. It has also been possible to propose a categorization, as well as to outline part of its history and evolution in Spain.