884 results for Selection and implementation methodology
Abstract:
Cover crop characterization may allow comparing the suitability of different species to provide ecological services such as erosion control, nutrient recycling or fodder production. Different techniques to characterize plant canopy were studied under field conditions in order to establish a methodology for measuring and comparing cover crop canopies.
A field trial was established in Madrid (central Spain) to determine the relationship between leaf area index (LAI) and ground cover (GC) in a grass, a legume and a crucifer crop. Twelve plots were sown with either barley (Hordeum vulgare L.), vetch (Vicia sativa L.), or rape (Brassica napus L.). On 10 sampling dates the LAI (both direct and LAI-2000 estimates), the fraction of intercepted photosynthetically active radiation (FIPAR) and the GC were measured. A two-year field experiment (October-April) was established at the same location to evaluate different species (Hordeum vulgare L., Secale cereale L., ×Triticosecale Wittm., Sinapis alba L., Vicia sativa L.) and cultivars (20) according to their suitability for use as cover crops. GC was monitored through digital image analysis with 21 and 22 samplings, and biomass was measured 8 and 10 times, respectively, in each season. A Gompertz model characterized ground cover until the decay observed after frosts, while biomass was fitted to Gompertz, logistic and linear-exponential equations. At the end of the experiment the C, N and fiber (neutral detergent, acid detergent and lignin) contents, as well as the N fixed by the legumes, were determined. Multicriteria decision analysis (MCDA) was applied in order to rank the species and cultivars according to their suitability to perform as cover crops in four different modalities: cover crop, catch crop, green manure and fodder. Intercropping legumes with non-legumes may affect the root growth and N uptake of both components of the mixture. Knowledge of how specific root systems affect the growth of the individual species is useful for understanding the interactions in intercrops as well as for planning cover cropping strategies. In a third trial, rhizotron studies were combined with root extraction and species identification by microscopy, and with studies of growth, N uptake and 15N uptake from deeper soil layers. The root-growth and N-foraging interactions were studied for two of the best-ranked cultivars from the previous study: a barley (Hordeum vulgare L. cv. Hispanic) and a vetch (Vicia sativa L. cv. Aitana). N was added at 0 (N0), 50 (N1) and 150 (N2) kg N ha-1. In the first study, linear and quadratic models fitted the relationship between GC and LAI well for all of the crops, but in the grass they reached a plateau at LAI > 4. Before full cover was reached, the slope of the linear relationship between the two variables was within the range 0.025-0.030. The LAI-2000 readings were linearly correlated with the LAI but tended to overestimate it. Corrections based on the clumping effect reduced the root mean square error of the LAI estimated from the LAI-2000 readings from 1.2 to less than 0.5 for the crucifer and the legume, but were not effective for barley. Consequently, only GC and biomass were measured in the subsequent studies. In the second experiment, the grasses reached the highest ground cover (83-99%) and biomass (1226-1928 g/m2) at the end of the experiment, with the highest C/N ratio (27-39) and dietary fiber content (53-60%) and the lowest residue quality (~68%). The mustard presented high GC, biomass and N uptake in the warmer year, similar to the grasses, but low fodder value in both years. The vetch presented the lowest N uptake (2.4-0.7 g N/m2), due to N fixation (9.8-1.6 g N/m2), and low biomass accumulation. The thermal time until reaching 30% ground cover was a good indicator of fast-covering species.
Variable quantification revealed variability among the species and provided information for further decisions involving cover crop selection and management. Aggregation of these variables through utility functions allowed ranking species and cultivars for each use. Grasses were the most suitable for the cover crop, catch crop and fodder uses, while the vetches were the best as green manures. The mustard attained high ranks as a cover and catch crop in the first season, but dropped in the second owing to its poor performance in cold winters. Hispanic was the most suitable barley cultivar as a cover and catch crop, and Albacete as fodder. The triticale Titania attained the highest rank as cover crop, catch crop and fodder. The vetches Aitana and BGE014897 showed good aptitude as green manures and catch crops. MCDA allowed comparison among species and cultivars and can provide relevant information for cover crop selection and management. In the rhizotron study, both the intercrop and the barley attained higher root intensity (RI) and root depth (RD) than the vetch, with values around 150 crosses m-1 and 1.4 m respectively, compared to 50 crosses m-1 and 0.9 m for the vetch. In deep soil layers, the intercrop showed slightly larger RI values than the sole-cropped barley. The barley and the intercrop had larger root length density (RLD) values (200-600 m m-3) than the vetch (25-130 m m-3) at 0.8-1.2 m depth. The topsoil N supply showed no clear effect on RI, RD or RLD; however, increasing topsoil N favored the proliferation of vetch roots in the intercrop in deep soil layers, with the barley/vetch root ratio ranging from 25 at N0 to 5 at N2. The N uptake of the barley was enhanced in the intercrop at the expense of the vetch (from ~100 to ~200 mg plant-1). The intercropped barley roots also took up more labeled nitrogen from the deep layers (0.6 mg 15N plant-1) than the sole-cropped barley roots (0.3 mg 15N plant-1).
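The Gompertz characterization of ground cover lends itself to a short worked example. Below is a minimal Python sketch with illustrative thermal-time and ground-cover values (none of these numbers come from the study) that fits the curve and recovers the thermal time to 30% cover used above as an early-vigour indicator:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, k, tm):
    """Gompertz curve: asymptote A (%), rate k, inflection point tm."""
    return A * np.exp(-np.exp(-k * (t - tm)))

# Illustrative data: thermal time (degree-days) vs. ground cover (%)
t  = np.array([100, 200, 300, 400, 500, 600, 700, 800], dtype=float)
gc = np.array([  2,   8,  25,  55,  78,  90,  95,  97], dtype=float)

popt, _ = curve_fit(gompertz, t, gc, p0=[100.0, 0.01, 400.0])
A, k, tm = popt
print(f"A={A:.1f}%  k={k:.4f}  tm={tm:.0f} degree-days")

# Thermal time to 30% cover, obtained by inverting the fitted curve
t30 = tm - np.log(-np.log(30.0 / A)) / k
print(f"thermal time to 30% ground cover: {t30:.0f} degree-days")
```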
Abstract:
Extensive research is available today on the characterization of hydraulic fills, in terms of both their static and their dynamic behavior. However, comprehensive analyses of these soils as used in port and mining works are scarce in the literature. Moreover, the semi-empirical procedures for assessing the silo effect in the cells of floating caissons, and the liquefaction potential of these soils under sudden loads or earthquakes, are based on studies in which the influence of the governing parameters is not well known, yielding results with significant scatter. This is the case, for instance, of the hazards reported by the Port of Barcelona research group: the failure of the harbor caissons in 2007.
In view of this, a comprehensive approach has been undertaken to evaluate the problem through a proposed numerical and laboratory methodology. Within the theoretical and numerical scope, the study focuses on the numerical tools capable of facing the different challenges of this problem. The complexity is manifold: the highly non-linear behavior of consolidating soft soils; their potentially liquefiable nature; the significance of the hydromechanics of the soil-structure contact, with discontinuities as preferential paths for water flow; and "negligible" effective stresses as initial conditions. Within the experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interface without the need for complex laboratory devices or cumbersome procedures. The study therefore includes a brief overview of hydraulic fill execution, its main uses (land reclamation, filled cells, tailings dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.). It spans from the evolution of the traditional consolidation equations (Terzaghi, 1943; Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990; Fredlund, Donaldson & Gitirana, 2009) to the contributions on the silo effect (Janssen, 1895; Ravenet, 1977) and liquefaction phenomena (Casagrande, 1936; Castro, 1969; Been & Jefferies, 1985; Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code, implemented in MATLAB and formulated exclusively for this problem. A theoretical (Biot, 1941; Zienkiewicz & Shiomi, 1984; Segura & Carol, 2004) and numerical approach (Zienkiewicz & Taylor, 1989; Huerta & Rodríguez, 1992; Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional contacts, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986; Fu & Liu, 2011). An experimental methodology is presented for the laboratory testing and material characterization (Castro, 1969; Bahda, 1997; Been & Jefferies, 2006), using Hostun sands as the reference hydraulic fill. A series of novel shear tests for the calibration of the soil-concrete interface, for different formwork types and roughnesses, is included. Finally, a specific model algorithm for solving the set of differential equations governing the problem is presented. The simulation comprehensively covers the transient process of decantation and consolidation, the build-up of the silo effect in the cells, and related phenomena of self-compaction and liquefaction. To this end, a 2D axisymmetric coupled model with continuum and interface elements was implemented, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This essentially concerns a loose granular soil with a negligible initial effective stress level at the onset of the process. The implementation requires a specific numerical algorithm as well as specific constitutive models for both the continuum and the interface elements. Simulating the fill-placement procedures required modifying the algorithm so that these procedures could be represented numerically; comparing the results of the different procedures is valuable for the global analysis.
Furthermore, the continuous updating of the model provides insightful logging of variable profiles such as density, void ratio and solid fraction, total and excess pore pressure, and stresses and strains. This leads to a better understanding of complex phenomena such as the transient gradient in lateral pressures due to the silo effect in saturated soils. Comparisons between the model and the literature are included for the self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and the silo effect results (Puertos del Estado, 2006; EuroCode, 2006; Japan Tech. Stands., 2009). The study closes with the design of a decantation-column prototype with frictional walls as the main future line of research.
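The coupled u-p formulation itself is beyond a short example, but the driving mechanism, dissipation of the excess pore pressure that initially carries nearly the whole self-weight of a fresh fill, can be sketched in the small-strain Terzaghi limit. The following is a rough Python finite-difference toy with illustrative parameter values, not the thesis's large-strain MATLAB FEM:

```python
import numpy as np

# Illustrative parameters (not taken from the thesis)
H = 5.0           # fill height (m), z measured downward from the surface
cv = 1e-6         # coefficient of consolidation (m^2/s)
gamma_sub = 9e3   # submerged unit weight (N/m^3)

n = 50
dz = H / n
dt = 0.4 * dz**2 / cv            # explicit scheme: stability factor < 0.5
nt = 3000                        # ~140 days of simulated time

z = np.linspace(0.0, H, n + 1)
# Fresh hydraulic fill: near-zero initial effective stress, so the full
# buoyant self-weight is initially carried by the pore water
u = gamma_sub * z

for _ in range(nt):
    u_new = u.copy()
    # Terzaghi: du/dt = cv * d2u/dz2 at interior nodes
    u_new[1:-1] = u[1:-1] + cv * dt / dz**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_new[0] = 0.0               # drained top surface
    u_new[-1] = u_new[-2]        # impervious base (zero gradient)
    u = u_new

U = 1.0 - u.sum() / (gamma_sub * z).sum()   # average degree of consolidation
print(f"after {nt * dt / 86400:.0f} days: average consolidation U = {U:.2f}")
```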
Abstract:
In this paper we focus on the selection of safeguards in a fuzzy risk analysis and management methodology for information systems (IS). Assets are connected by dependency relationships, and a failure of one asset may affect other assets. After computing the impact and risk indicators associated with previously identified threats, we identify and apply safeguards to reduce risks in the IS by minimizing the transmission probabilities of failures throughout the asset network. However, as safeguards have associated costs, the aim is to select the safeguards that minimize costs while keeping the risk within acceptable levels. To do this, we propose a dynamic programming-based method that incorporates simulated annealing to tackle the optimization problems that arise.
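As a rough illustration of the optimization core, here is a minimal simulated-annealing sketch in Python for a toy safeguard-selection problem. The costs, the multiplicative risk model, and the penalty/cooling parameters are all invented for the example, and the paper's dynamic-programming layer over the asset network is not reproduced:

```python
import math, random

random.seed(1)

# Illustrative safeguards: costs and risk reductions (invented values)
costs      = [4.0, 7.0, 2.0, 9.0, 5.0, 3.0]
reductions = [0.10, 0.25, 0.05, 0.30, 0.15, 0.08]
BASE_RISK, RISK_LIMIT = 1.0, 0.55

def risk(sel):
    # Toy model: multiplicative attenuation of failure-transmission risk
    r = BASE_RISK
    for s, dr in zip(sel, reductions):
        if s:
            r *= (1.0 - dr)
    return r

def objective(sel):
    # Total cost, heavily penalized if risk stays above the threshold
    c = sum(c_ for s, c_ in zip(sel, costs) if s)
    return c + 1000.0 * max(0.0, risk(sel) - RISK_LIMIT)

sel = [1] * len(costs)              # start with everything selected (feasible)
best, best_val = sel[:], objective(sel)
T = 10.0
while T > 1e-3:
    cand = sel[:]
    cand[random.randrange(len(cand))] ^= 1   # flip one safeguard in/out
    delta = objective(cand) - objective(sel)
    if delta < 0 or random.random() < math.exp(-delta / T):
        sel = cand
        if objective(sel) < best_val:
            best, best_val = sel[:], objective(sel)
    T *= 0.999                      # geometric cooling

print("selected safeguards:", [i for i, s in enumerate(best) if s])
print(f"cost={sum(c for s, c in zip(best, costs) if s):.1f}  risk={risk(best):.3f}")
```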
Abstract:
Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival rate after the initial critical phase, make it a prevalent problem that needs to be addressed. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020.
Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims to reach a new paradigm focused on designing intensive, personalized, monitored and evidence-based treatments, since these four characteristics are what ensure treatment effectiveness. Contrary to most medical disciplines, it is not possible to link symptoms and signs to cognitive disorder syndromes in a way that would guide therapy. Currently, neurorehabilitation treatments are planned on the basis of the results obtained from a neuropsychological assessment battery, which evaluates the impairment of each cognitive function (memory, attention, executive functions, etc.). The research line this PhD falls under aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies such as magnetic resonance. The cognitive profile used to design these treatments therefore integrates personalized, evidence-based information. Neuroimaging techniques are an essential tool to identify lesions and generate this type of cognitive profile. Manual delineation is the classical approach to identifying brain anatomical regions, but it presents several problems related to inconsistencies across clinicians, time and repeatability. Automated delineation is done by registering brains to one another or to a template. However, when imaging studies contain lesions, intensity abnormalities and location alterations reduce the performance of most registration algorithms based on intensity parameters. Specialists may thus have to interact manually with the imaging studies to select landmarks (called singular points in this PhD) or identify regions of interest; these two solutions suffer from the same drawbacks as the manual approaches mentioned before. Moreover, these registration algorithms do not allow large, distributed deformations, which may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD focuses on the design, development and implementation of a new methodology to automatically identify lesions in anatomical structures. The methodology integrates algorithms whose main objective is to generate objective and reproducible results. It is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing stage. In this first stage, the aim is to standardize all input data in order to be able to draw valid conclusions from the results; this stage therefore has a direct impact on the final results. It consists of three steps: skull-stripping, intensity normalization and spatial normalization. Singular point identification. This stage aims to automate the identification of anatomical points (singular points), replacing their manual identification by the clinician. This automation makes it possible to identify a greater number of points, which yields more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking.
This PhD proposes an algorithm to automatically identify singular points (a descriptor) based on a multi-detector approach that contains multi-parametric (spatial and intensity) information. The algorithm has been compared with similar algorithms found in the state of the art. Registration. The goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD is based on descriptors; its main objective is to compute a vector field that can introduce distributed deformations (in different regions of the image) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage identifies those anatomical structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. To do this, a statistical study of the atlas to be used is performed to establish the statistical parameters of normality associated with location and area. The anatomical structures that can be identified depend on the structures delineated in the selected atlas; the proposed methodology itself is independent of that atlas. Overall, this PhD corroborates the research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies, specifically magnetic resonance studies. Building on these foundations, new research fields can be opened to improve the detection of lesions in brain injury.
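The detection criterion of the final stage, deviation of a structure's location and size from atlas normality, reduces to an outlier test. Below is a minimal sketch under assumed Gaussian atlas statistics; the structure names, numbers, and 3-sigma threshold are invented for illustration, not taken from the thesis:

```python
import numpy as np

# Illustrative atlas statistics per structure: mean and std of the
# centroid position (x, y, z in mm) and of the volume (mm^3)
atlas = {
    "hippocampus_L": {"centroid": ([-28.0, -20.0, -14.0], [2.0, 2.0, 2.0]),
                      "volume":   (3500.0, 300.0)},
    "thalamus_R":    {"centroid": ([ 12.0, -18.0,   8.0], [2.5, 2.5, 2.5]),
                      "volume":   (6200.0, 450.0)},
}

# Illustrative measurements from a registered patient study
patient = {
    "hippocampus_L": {"centroid": [-27.5, -19.0, -13.8], "volume": 3420.0},
    "thalamus_R":    {"centroid": [ 18.0, -10.0,  10.0], "volume": 4100.0},
}

Z_THRESHOLD = 3.0  # flag structures beyond 3 sigma from atlas normality

for name, meas in patient.items():
    mu_c, sd_c = map(np.asarray, atlas[name]["centroid"])
    mu_v, sd_v = atlas[name]["volume"]
    z_pos = np.abs((np.asarray(meas["centroid"]) - mu_c) / sd_c).max()
    z_vol = abs(meas["volume"] - mu_v) / sd_v
    flagged = z_pos > Z_THRESHOLD or z_vol > Z_THRESHOLD
    print(f"{name}: max|z_pos|={z_pos:.1f}  |z_vol|={z_vol:.1f}"
          f"  -> {'POSSIBLE LESION' if flagged else 'within normal range'}")
```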
Abstract:
Given the importance of set pieces (BP) as decisive elements in the development and outcome of football matches, the aim of this study was to build and validate an Observation System for Set Pieces in Football Competition (SOCFutBP), in accordance with observational methodology and supported by match-analysis software (VideObserver). The study sample consisted of 80 set-piece actions observed during one half of a football match in the Campeonato Nacional de Juvenis de Sub-17 (national under-17 championship). The development of the observation system followed these steps: definition of criteria (and their respective categories); selection and adaptation of the instrument; refinement and face validation of the system; validation proper of the system (intra- and inter-observer); and application of a pilot study. The SOCFutBP was built and validated with ten criteria suitable and adjusted for data collection and analysis in research focused on set pieces in football, since all criteria showed K values above 0.75 for both intra-observer and inter-observer reliability.
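The reliability figures quoted (K > 0.75) are Cohen's kappa values. Here is a minimal sketch of the computation for two observers coding the same set-piece actions, with invented codings:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two categorical codings of the same events."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement
    ca, cb = Counter(coder_a), Counter(coder_b)
    pe = sum(ca[k] * cb[k] for k in ca) / n**2               # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative codings of 10 set-piece actions by two observers
obs1 = ["corner", "free_kick", "corner", "throw_in", "corner",
        "free_kick", "throw_in", "corner", "free_kick", "corner"]
obs2 = ["corner", "free_kick", "corner", "throw_in", "free_kick",
        "free_kick", "throw_in", "corner", "free_kick", "corner"]

print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")  # > 0.75 would meet the criterion
```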
Abstract:
Covalent fusions between an mRNA and the peptide or protein that it encodes can be generated by in vitro translation of synthetic mRNAs that carry puromycin, a peptidyl acceptor antibiotic, at their 3′ end. The stable linkage between the informational (nucleic acid) and functional (peptide) domains of the resulting joint molecules allows a specific mRNA to be enriched from a complex mixture of mRNAs based on the properties of its encoded peptide. Fusions between a synthetic mRNA and its encoded myc epitope peptide have been enriched from a pool of random sequence mRNA-peptide fusions by immunoprecipitation. Covalent RNA-peptide fusions should provide an additional route to the in vitro selection and directed evolution of proteins.
Abstract:
Illegal dumping and improper disposal of pollutants in urban areas can contribute significant pollutant loads to the municipal separate storm sewer system (MS4) and natural environments. Illicit discharges to the MS4 can pose a significant risk to human and environmental health. The Clean Water Act requires that municipalities implement a legal mechanism and a plan to detect and eliminate illicit discharges to the MS4. The methodology for program creation included the analysis of other municipal illicit discharge programs, the review of state and federal guidance publications, and the review of illicit discharge case studies. This paper describes a systematic approach applied to the creation and implementation of a legal ordinance and program manual designed for illicit discharge detection and elimination (IDDE).
Abstract:
The construction industry has long been considered a highly fragmented and non-collaborative industry. This fragmentation sprouted from complex and unstructured traditional coordination processes and information exchanges amongst all parties involved in a construction project. This nature, coupled with risk and uncertainty, has pushed clients and their supply chain to search for new ways of improving their business processes to deliver better quality and higher-performing products. This research closely investigates the need to implement a Digital Nervous System (DNS), analogous to a biological nervous system, for the flow and management of digital information across the project lifecycle. It does so through direct examination of the key processes and information produced in a construction project, and of how a DNS can provide a well-integrated flow of digital information throughout the project lifecycle. The research also investigates how a DNS can create a tight digital feedback loop that enables the organisation to sense, react and adapt to changing project conditions. A Digital Nervous System is a digital infrastructure that provides a well-integrated flow of digital information to the right part of the organisation at the right time; it provides the organisation with the relevant and up-to-date information it needs, on critical project issues, to aid near real-time decision-making. A literature review and survey questionnaires were used in this research to collect and analyse data about the industry's information management problems, e.g. disruption and discontinuity of digital information flow due to interoperability issues, disintegration/fragmentation of the adopted digital solutions, and paper-based transactions. Analysis of the results revealed that efficient and effective information management requires the creation and implementation of a DNS.
Abstract:
Introduction. The idea that "merit" should be the guiding principle of judicial selection is a universal one, unlikely to be contested in any legal system. What differs considerably across legal cultures, however, is the way in which "merit" is defined. For deeper cultural and historical reasons, the current definition of "merit" in the process of judicial selection in the Czech Republic, at least as implemented in the institutional settings, is an odd mongrel. The old technocratic Austrian judicial heritage has in some respects merged with, and in others been altered or destroyed by, the Communist past. After 1989, some aspects of the judicial organisation were amended and the most problematic elements removed. Furthermore, several old as well as new provisions relating to the judiciary were struck down by the Constitutional Court. Apart from these rather haphazard interventions, however, there has been neither a sustained discussion of what a new judicial architecture and system of judicial appointments ought to look like, nor much broader conceptual reform in this regard. Thus, some twenty-five years after the Velvet Revolution of 1989, the guiding principles for judicial selection and appointment are still a debate to be had.
Abstract:
Seventy sorghum inbred lines, which formed part of the Queensland Department of Primary Industries (QDPI) sorghum breeding program, were screened with 104 previously mapped RFLP markers. The lines were related by pedigree and consisted of ancestral source lines, intermediate lines and recent releases from the program. We compared the effect of defining marker alleles using either identity by state (IBS) or identity by descent (IBD) on our capacity to trace markers through the pedigree and detect evidence of selection for particular alleles. Allelic identities defined using IBD were much more sensitive for detecting non-Mendelian segregation in this pedigree: only one marker allele showed significant evidence of selection when IBS was used, compared with ten regions with particular allelic identities when IBD was used. Regions under selection were compared with the locations of QTLs for agronomic traits known to be under selection in the breeding program. Only two of the ten regions were associated with known QTLs that matched knowledge of the agronomic characteristics of the ancestral lines. Some of the other regions were hypothesised to be associated with genes for particular traits on the basis of the properties of the ancestral source lines.
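The "evidence of selection" reported here comes from testing marker-allele transmission against Mendelian expectation. A minimal sketch of such a segregation test, with invented counts rather than the QDPI data:

```python
from scipy.stats import chisquare

# Illustrative: counts of the two parental (IBD-defined) alleles among
# derived lines, where Mendelian transmission predicts a 1:1 ratio
observed = [38, 12]                    # allele A vs. allele B
expected = [sum(observed) / 2] * 2     # 25:25 under no selection

chi2, p = chisquare(observed, expected)
print(f"chi2={chi2:.2f}  p={p:.4f}")
if p < 0.05:
    print("non-Mendelian segregation: consistent with selection at this marker")
```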
Abstract:
This paper discusses how the AustLit: Australian Literature Gateway's interpretation, enhancement, and implementation of the International Federation of Library Associations and Institutions' Functional Requirements for Bibliographic Records (FRBR Final Report 1998) model is meeting the needs of Australian literature scholars for accurate bibliographic representation of the histories of literary texts. It also explores how the AustLit Gateway's underpinning research principles, which are based on the tradition of scholarly enumerative and descriptive bibliography, with enhancements from analytical bibliography and literary biography, have impacted upon our implementation of the FRBR model. The major enhancement or alteration to the model is the use of enhanced manifestations, which allow the full representation of all agents' contributions to be shown in a highly granular format by enabling creation events to be incorporated at all levels of the Work, Expression, and Manifestation nexus.
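As a rough sketch of the enhanced-manifestation idea, the snippet below models creation events attachable at every level of the Work-Expression-Manifestation nexus; all class and field names are illustrative, not AustLit's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CreationEvent:
    """An agent's contribution (e.g. author, translator, illustrator) and its date."""
    agent: str
    role: str
    date: str

@dataclass
class Manifestation:            # a physical/published embodiment
    description: str
    events: List[CreationEvent] = field(default_factory=list)

@dataclass
class Expression:               # a realisation of the work (e.g. a revised text)
    description: str
    manifestations: List[Manifestation] = field(default_factory=list)
    events: List[CreationEvent] = field(default_factory=list)

@dataclass
class Work:                     # the abstract literary work
    title: str
    expressions: List[Expression] = field(default_factory=list)
    events: List[CreationEvent] = field(default_factory=list)

# Illustrative record: events recorded at manifestation level give the
# granular per-agent history the paper describes
m = Manifestation("1948 illustrated edition",
                  [CreationEvent("J. Smith", "illustrator", "1948")])
e = Expression("revised text", [m], [CreationEvent("A. Author", "reviser", "1947")])
w = Work("Example Novel", [e], [CreationEvent("A. Author", "author", "1930")])
print(w.title, "->", w.expressions[0].manifestations[0].events[0].agent)
```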
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another; arbitrary, often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of the discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation against an independent data set, and assessment of the scale of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for the rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps for conservation and forestry planning of poorly studied areas.
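The integration of many per-community models into a composite map can be sketched as an argmax over predicted probability surfaces. A toy Python version with synthetic environmental layers and invented logistic coefficients (the paper integrates 28 fitted models; only three made-up ones appear here):

```python
import numpy as np
np.random.seed(0)

# Synthetic environmental raster layers (e.g. elevation, rainfall), 50x50 cells
n = 50
elev = np.random.rand(n, n)
rain = np.random.rand(n, n)
X = np.stack([np.ones((n, n)), elev, rain], axis=-1)   # with intercept term

# Illustrative fitted coefficients for three vegetation-community models
coefs = {
    "rainforest":   np.array([-2.0,  1.0,  4.0]),
    "sclerophyll":  np.array([-1.0,  3.0, -1.0]),
    "swamp_forest": np.array([-0.5, -2.0,  2.0]),
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-community probability surfaces from each logistic model
probs = np.stack([sigmoid(X @ b) for b in coefs.values()], axis=0)

# Composite map: each cell is assigned its most probable community, which
# imposes the sharp boundaries the paper notes are artificial
names = list(coefs)
composite = probs.argmax(axis=0)
for i, name in enumerate(names):
    print(f"{name}: {np.mean(composite == i):.0%} of cells")
```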
Abstract:
We propose a Bayesian framework for regression problems, covering areas usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting, and in the infinite-dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some commonly held beliefs are misleading. The practical implementation is summarised, and simulations using 13 popular publicly available data sets demonstrate the method and highlight important issues concerning the choice of priors.
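A generic version of the idea, online Bayesian linear regression via Kalman-filter updates, can be sketched briefly; the polynomial basis, noise variance, and prior below are illustrative choices, not the paper's:

```python
import numpy as np
np.random.seed(0)

d = 3                       # number of basis functions / features
sigma2 = 0.1                # assumed observation-noise variance
mu = np.zeros(d)            # prior mean of the weights
P = 10.0 * np.eye(d)        # prior covariance (the prior-selection choice)

def features(x):
    return np.array([1.0, x, x**2])   # illustrative polynomial basis

# Stream of (x, y) pairs from a noisy quadratic; one Kalman update per pair
for _ in range(200):
    x = np.random.uniform(-1, 1)
    y = 0.5 - 1.0 * x + 2.0 * x**2 + np.sqrt(sigma2) * np.random.randn()

    h = features(x)                       # measurement vector
    S = h @ P @ h + sigma2                # innovation variance
    K = P @ h / S                         # Kalman gain
    mu = mu + K * (y - h @ mu)            # posterior mean update
    P = P - np.outer(K, h @ P)            # posterior covariance update

print("posterior weight mean:", np.round(mu, 2))   # ~ [0.5, -1.0, 2.0]
```

Each update conditions the Gaussian posterior over the weights on one new observation, so the fit improves online without revisiting past data.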
Abstract:
This paper reports on an assessment of an ongoing 6-Sigma program conducted within a UK-based (US-owned) automotive company. It gives an overview of the management of the 6-Sigma programme and the in-house methodology used. The analysis given in the paper focuses particularly on the financial impact that individual projects have had. Three projects are chosen from the hundreds that have been completed and are discussed in detail, including which specific techniques were used and how financially successful the projects were. Commentary is also given on the effectiveness of the overall program, along with a critique of how the implementation of 6-Sigma could be managed more effectively in the future. This discussion focuses particularly on issues such as project selection and scoping, financial evaluation and data availability, organisational awareness, commitment and involvement, middle management support, functional variation, and maintaining momentum during the rollout of a lengthy program.