948 results for "Mean Intensity of the Claim Process"
Abstract:
Aerodynamic design influences several aspects of high-speed train performance to a very significant degree.
In this situation, and considering that new aerodynamic problems have arisen with the increase in cruise speed and the lightness of the vehicle, the need for an optimization study of train aerodynamics is evident. Thus, the aerodynamic optimization of the nose shape of a high-speed train is presented in this thesis. This optimization is based on advanced optimization methods; among these, genetic algorithms and the adjoint method have been selected. A theoretical description of the basis, characteristics, and implementation of each method is given in this thesis. This introduction explains the reasons for their selection, as well as the advantages and drawbacks of their application. Genetic algorithms require the geometrical parameterization of every optimal candidate and the generation of a metamodel, or surrogate model, that complements the optimization process. These points are addressed with special attention in the first block of the thesis, which focuses on the methodology followed in this study. The second block concerns the use of these methods to optimize the aerodynamic performance of a high-speed train in several scenarios. These scenarios encompass the most representative operating conditions of high-speed trains, as well as some of the most demanding train aerodynamic problems: head-wind and cross-wind situations in open air, and the entrance of a high-speed train into a tunnel. Both genetic algorithms and the adjoint method have been applied to the minimization of the aerodynamic drag on a train subject to head wind in open air. The comparison of these methods allows the methodology and computational cost of each to be evaluated, as well as the resulting minimization of the aerodynamic drag. Simplicity and robustness, the straightforward handling of multi-objective optimization, and the capability of searching for a global optimum are the main attributes of genetic algorithms.
However, the requirement to geometrically parameterize every optimal candidate is a significant drawback that is avoided with the adjoint method. Its independence from the number of design variables leads to a relevant reduction of the pre-processing and computational cost. Regarding cross-wind stability, both methods are used again, here for the minimization of the side force. In this case, a simplified geometric parameterization of the train nose is adopted, which dramatically reduces the computational cost of the optimization process while still describing the most important geometrical characteristics. This analysis identifies and quantifies the influence of each design variable on the side force on the train. It is observed that the A-pillar roundness is the most influential design parameter, with a greater effect than the nose length or the train cross-section area. Finally, a third scenario is considered for the validation of these methods in the aerodynamic optimization of a high-speed train. The entrance of a train into a tunnel is one of the most demanding train aerodynamic problems. The aerodynamic consequences of a high-speed train running in a tunnel essentially reduce to two correlated phenomena: the generation of pressure waves and an increase in aerodynamic drag. This multi-objective optimization problem is solved with genetic algorithms, and the result is a Pareto front containing the set of optimal solutions that minimize both objectives.
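The multi-objective use of genetic algorithms described above can be illustrated with a minimal sketch (not the thesis's actual implementation): a population of design vectors is evolved against two competing toy objectives standing in for drag and pressure-wave peak, and the non-dominated set approximates the Pareto front. All function names and objective definitions below are illustrative assumptions.

```python
import random

def objectives(x):
    # Two competing toy objectives standing in for drag and pressure peak:
    # the first favors x[0] near 0, the second x[0] near 1.
    drag = x[0] ** 2 + 0.5 * x[1] ** 2
    peak = (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2
    return (drag, peak)

def dominates(fa, fb):
    # fa Pareto-dominates fb if it is no worse in every objective
    # and strictly better in at least one.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(pop):
    scores = [objectives(x) for x in pop]
    return [x for x, fx in zip(pop, scores)
            if not any(dominates(fy, fx) for fy in scores)]

def evolve(pop, generations=60, sigma=0.05):
    for _ in range(generations):
        front = pareto_front(pop)
        # Offspring: Gaussian mutations of randomly chosen non-dominated parents.
        children = [[xi + random.gauss(0.0, sigma) for xi in random.choice(front)]
                    for _ in range(len(pop))]
        pop = (front + children)[:len(pop)]
    return pareto_front(pop)

random.seed(1)
population = [[random.uniform(-1.0, 2.0), random.uniform(-1.0, 1.0)]
              for _ in range(40)]
front = evolve(population)
```

In a real shape optimization each `x` would be a vector of geometric design parameters and each objective a CFD evaluation, which is why surrogate models are used to keep the cost tractable.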
Abstract:
The N2+ ion yield of the N2 molecule has been measured at the N 1s → Rydberg excitations. It displays Fano-type line shapes due to interference between direct outer-valence photoionization and participator decay of the core-excited Rydberg states. The N2+ ion yield is compared with the total intensity of the outer-valence photoelectron lines obtained recently with electron spectroscopy (Kivimäki et al 2012 Phys. Rev. A 86 012516). The increasing difference between the two curves at the higher core-to-Rydberg excitations is most likely due to soft x-ray emission processes that are followed by autoionization. The results also suggest that resonant Auger decay from the core–valence doubly excited states contributes to the N2+ ion yield at photon energies located on both sides of the N 1s ionization limit.
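The Fano line shape mentioned above has the standard closed form σ(ε) ∝ (q + ε)² / (1 + ε²), with reduced energy ε = 2(E − E_r)/Γ and asymmetry parameter q. A small sketch follows; the parameter values are illustrative, not fitted to the N2 data.

```python
def fano(E, E_r, gamma, q):
    """Fano profile (q + eps)^2 / (1 + eps^2), with eps = 2*(E - E_r)/gamma."""
    eps = 2.0 * (E - E_r) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# Illustrative parameters: resonance at 401.0 eV, width 0.12 eV, q = 2
E_r, gamma, q = 401.0, 0.12, 2.0

zero = fano(E_r - q * gamma / 2.0, E_r, gamma, q)    # destructive interference, profile = 0
peak = fano(E_r + gamma / (2.0 * q), E_r, gamma, q)  # maximum, value 1 + q^2
```

The window minimum at ε = −q and the maximum 1 + q² at ε = 1/q are what give the characteristic asymmetry; fitting E_r, Γ and q to the measured ion yield is the usual way such interference is quantified.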
Abstract:
Endo-β-mannanases (MAN; EC 3.2.1.78) catalyze the cleavage of β-(1→4) bonds in mannan polymers and have been associated with the weakening of the tissues surrounding the embryo during seed germination. In germinating Arabidopsis thaliana seeds, the most highly expressed MAN gene is AtMAN7, and its transcripts are restricted to the micropylar endosperm and to the radicle tip just before radicle emergence. Mutants with a T-DNA insertion in AtMAN7 germinate more slowly than the wild type. To gain insight into the transcriptional regulation of the AtMAN7 gene, a bioinformatic search for conserved non-coding cis-elements (phylogenetic shadowing) within the Brassicaceae MAN7 gene promoters has been performed, and these conserved motifs have been used as bait to look for their interacting transcription factors (TFs), using as prey an arrayed yeast library from A. thaliana. The basic leucine zipper TF AtbZIP44, but not the closely related AtbZIP11, has thus been identified, and its transcriptional activation of AtMAN7 has been validated at the molecular level. In the knock-out lines of AtbZIP44, not only is the expression of the AtMAN7 gene drastically reduced, but these mutants also germinate significantly more slowly than the wild type, being affected in the two phases of the germination process: the rupture of the seed coat and the breakage of the micropylar endosperm cell walls. In the over-expression lines the opposite phenotype is observed.
Abstract:
Flows exist, of relevance to new-generation aerospace vehicles, that are weakly dependent on the streamwise direction and strongly dependent on the other two spatial directions, such as the flow around the (flattened) nose of a vehicle and the associated elliptic-cone model. Exploiting these characteristics, a parabolic integration of the Navier-Stokes equations is more appropriate than solution of the full equations, resulting in the so-called Parabolic Navier-Stokes (PNS) equations. This approach is not only the best candidate, in terms of computational efficiency and accuracy, for the computation of steady base flows with the stated properties, but also permits performing instability analysis and laminar-turbulent transition studies a posteriori to the base-flow computation. This is to be contrasted with the alternative approach of using order-of-magnitude more expensive spatial Direct Numerical Simulations (DNS) for the description of the transition process. The PNS equations used here have been formulated for an arbitrary coordinate transformation, and the spatial discretization is performed using a novel, stable, high-order finite-difference-based numerical scheme, ensuring the recovery of highly accurate solutions with modest computing resources. For verification purposes, the boundary-layer solution around a circular cone at zero angle of attack is compared in the incompressible limit with theoretical profiles. The recovered shock-wave angle at supersonic conditions is also compared with theoretical predictions for the same circular-base cone geometry. Finally, the entire flow field, including the shock position and the compressible boundary layer around a 2:1 elliptic cone, is recovered at Mach numbers 3 and 4.
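The accuracy claim for a high-order finite-difference discretization can be checked on a one-dimensional model problem. The sketch below is not the paper's scheme (which is not specified here); it simply verifies the fourth-order convergence of the standard five-point first-derivative stencil, the kind of check used to confirm a scheme's design order.

```python
import math

def d1_fourth(f, x, h):
    # Standard 4th-order central stencil for f'(x).
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

# Error of the stencil on f = sin at x = 1, for successively halved steps
errors = [abs(d1_fourth(math.sin, 1.0, h) - math.cos(1.0))
          for h in (0.1, 0.05, 0.025)]

# Observed convergence order between consecutive step sizes; should approach 4
orders = [math.log(errors[i] / errors[i + 1], 2) for i in range(2)]
```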
Abstract:
Wide research is nowadays available on the characterization of hydraulic fills in terms of either static or dynamic behavior. However, comprehensive analyses of these soils when used in port or mining works are scarce in the literature. Moreover, the semi-empirical procedures for assessing the silo effect on the cells of floating caissons, and the liquefaction potential of these soils during sudden loads or earthquakes, are based on studies in which the influence of the governing parameters is not well known, yielding results with significant scatter. This is the case, for instance, of the hazards reported by the Barcelona Liquefaction working group, with the failure of harbor caissons in the Port of Barcelona in 2007. For these reasons, an analysis of these problems has been undertaken through a combined theoretical-numerical and experimental methodology. Within the theoretical and numerical scope, the study focuses on the theoretical framework and the numerical tools capable of facing the challenges these problems present. The complexity is manifold: the highly non-linear behavior of loose, lightly confined soils consolidating under self-weight; their potentially liquefiable nature; the hydromechanical characterization of the soil-structure contacts, which act as preferential paths for water flow and lateral consolidation; and initial conditions with practically negligible effective stresses.
Within the experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interface, without the need for complex laboratory devices or cumbersome procedures. This study therefore includes a brief overview of hydraulic-fill execution, its main uses (land reclamation, filled cells, tailings dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.). It ranges from the evolution of the traditional consolidation equations (Terzaghi, 1943) (Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990) (Fredlund, Donaldson & Gitirana, 2009) to the contributions on the silo effect (Janssen, 1895) (Ravenet, 1977) and on liquefaction phenomena (Casagrande, 1936) (Castro, 1969) (Been & Jefferies, 1985) (Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code formulated exclusively for this problem. A theoretical (Biot, 1941) (Zienkiewicz & Shiomi, 1984) (Segura & Carol, 2004) and numerical approach (Zienkiewicz & Taylor, 1989) (Huerta & Rodríguez, 1992) (Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional contacts, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986) (Fu & Liu, 2011). An experimental methodology is presented for the laboratory tests and material characterization (Castro, 1969) (Bahda, 1997) (Been & Jefferies, 2006), using Hostun sand as the reference hydraulic fill. A series of new direct shear tests for the hydromechanical characterization of the soil-concrete interface, for different formwork types and roughnesses, is included. Finally, a specific algorithm for the solution of the set of governing differential equations of the problem is presented.
The simulation of consolidation and settlements involves a comprehensive treatment of the transient decantation process, the build-up of the silo effect in the cells, and related phenomena such as self-compaction and liquefaction. To this end, a 2D axisymmetric coupled u-p model with continuum and zero-thickness interface elements has been implemented, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This basically concerns a loose granular soil with a negligible initial effective-stress level at the onset of the process. The implementation requires a specific numerical algorithm as well as specific constitutive models for both the continuum and the interface elements. The simulation of different placement procedures for the fills has required modifying the algorithm so that these procedures can be represented numerically; a comparison of the results for the different procedures is of interest for the global analysis. Furthermore, the continuous updating of the model provides insightful profiles of variables such as density, void ratio, solid fraction, total and excess pore pressure, and stresses and strains. This leads to a better understanding of complex phenomena such as the transient gradient in lateral pressures due to the silo effect in saturated soils. Comparisons between the model and the literature are included both for self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006) (EuroCode, 2006) (Japan Tech. Stands., 2009). The study closes with the design of a decantation-column prototype with frictional walls as the main future line of research.
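In its simplest one-dimensional limit, the coupled consolidation problem above reduces to Terzaghi's equation ∂u/∂t = c_v ∂²u/∂z² for the excess pore pressure u. A minimal explicit finite-difference sketch (illustrative only; the thesis uses a full coupled u-p FEM formulation with interface elements) shows the characteristic dissipation from a drained top boundary over an impervious base.

```python
def terzaghi_1d(u0, cv, dz, dt, steps):
    """Explicit finite differences for du/dt = cv * d2u/dz2.
    Boundary conditions: drained top (u = 0), impervious base (zero gradient)."""
    lam = cv * dt / dz ** 2
    assert lam <= 0.5, "explicit scheme unstable for cv*dt/dz^2 > 0.5"
    u = list(u0)
    for _ in range(steps):
        nxt = u[:]
        nxt[0] = 0.0  # drained boundary: excess pressure dissipates instantly
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + lam * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        nxt[-1] = u[-1] + 2.0 * lam * (u[-2] - u[-1])  # mirror node at the base
        u = nxt
    return u

# Uniform initial excess pore pressure, as just after self-weight loading
profile = terzaghi_1d(u0=[1.0] * 21, cv=1.0, dz=0.05, dt=1.0e-3, steps=500)
```

The resulting profile decays monotonically from the impervious base toward the drained surface, the classical isochrone shape that the full coupled model generalizes to nonlinear, multidimensional conditions with frictional contacts.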
Abstract:
This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. These variations often cause the failure of computer-vision algorithms that use colour features to describe objects. Three research questions are formulated that define the framework of the thesis. The first question addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on an analysis of the image-formation model in dynamic situations, this thesis proposes a model that predicts the colour variations of a region of an image from the variations of the surrounding regions. The proposed model is called the Quotient Relational Model of Regions. The model is valid when the light sources illuminate all of the surfaces included in it; when these surfaces are placed close to each other and have similar orientations; and when they are primarily Lambertian. Under these circumstances, the photometric response of a region can be related to the others by a linear combination. No previous work proposing such a relational model was found in the scientific literature. The second question examines whether these similarities can be used to correct unknown photometric variations in an unknown region from known adjacent regions. A method called Linear Correction Mapping is proposed, which provides an affirmative answer under the circumstances previously characterised.
A training stage is required to determine the parameters of the model. The single-camera method is extended to cover non-overlapping multi-camera architectures; to this end, only several image samples of the same object acquired by all of the cameras are required. Furthermore, both the light variations and the changes in the camera exposure settings are covered by the correction mapping. Every image-correction method fails when the image of the object to be corrected is overexposed or when its signal-to-noise ratio is very low. Thus, the third question refers to the control of the acquisition process so as to obtain an optimal exposure under uncontrolled lighting conditions. A Camera Exposure Control method is proposed that is capable of maintaining a suitable exposure provided that the light variations fall within the dynamic range of the camera. Each of the proposed methods was evaluated individually. The experimental methodology consisted of first selecting scenarios that cover representative situations in which the methods are theoretically valid. Linear Correction Mapping was validated using three object re-identification applications (vehicles, faces and persons) based on the objects' colour distributions. Camera Exposure Control was tested in an outdoor parking scenario. In addition, several performance indicators were defined to compare the results objectively with other relevant state-of-the-art correction and auto-exposure methods. The evaluation demonstrated that the proposed methods outperform the compared ones in most situations. Based on the results obtained, the answers to the research questions above are affirmative in limited circumstances; that is, the hypotheses regarding prediction, the correction based on it, and the auto-exposure are feasible in the situations identified in the thesis, although they cannot be guaranteed in general.
Furthermore, the presented work raises new questions and scientific challenges, which are highlighted as future research work.
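The exposure-control idea can be sketched as a simple proportional feedback loop. This is an illustrative controller, not the thesis's Camera Exposure Control algorithm: the exposure time is scaled toward a target mean image level, clamped to the camera's admissible range, and converges whenever the required exposure lies inside that range (the "light variations within the dynamic range" condition above).

```python
def update_exposure(exposure, mean_level, target=0.5, gain=0.8,
                    e_min=1.0e-4, e_max=1.0e-1):
    """One step of a damped multiplicative exposure correction.
    Assumes the sensor response is roughly linear in exposure time."""
    if mean_level <= 0.0:
        return e_max  # fully dark frame: open up as far as allowed
    ratio = (target / mean_level) ** gain  # gain < 1 damps oscillation
    return min(e_max, max(e_min, exposure * ratio))

def measure(exposure, scene_gain=20.0):
    # Toy scene model: mean image level proportional to exposure, clipped at 1.
    return min(1.0, exposure * scene_gain)

e = 1.0e-3
for _ in range(25):
    e = update_exposure(e, measure(e))
# e settles near 0.025, where the simulated mean level equals the 0.5 target
```

In log-exposure terms the update is a contraction with factor (1 − gain), which is why the damped multiplicative form converges quickly without oscillating.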
Abstract:
The purpose of this paper is to show the importance of observing the cultural systems present in a territory as a reference for the design of urban infrastructure in new cities and regions of rapid development. If we accept the idea that architecture is an instrument or cultural system developed by man to mediate with the environment, it is necessary to understand the elemental interaction between man and his environment in order to achieve a satisfactory design. To illustrate this purpose, we present the case of the Eurasian Mediterranean region, where the architectural culture acts as a cultural system of adaptation to the environment, formed by an ancient process of selection. From simple observation of the architectural types, construction systems and environmental mechanisms treasured in the Mediterranean historical heritage, we can extract crucial information about this elemental interaction. Mediterranean architectural culture possesses environmental mechanisms that respond to the needs of basic habitability, ethnic identity and passive conditioning. These mechanisms can be the basis of an innovative design without compromising the diversity and lifestyles of the human groups in the region. The main foundation of our investigation is the identification of the historical heritage of domestic architecture as the holder of the formation process of these mechanisms. The result allows us to affirm that the successful introduction of new urban infrastructure in an area needs a reliable reference, and that this reference must be a cultural system that carries in essence the environmental conditioning of human existence. Urban infrastructure must be sustainable, and understood and accepted by its inhabitants. This last condition is all the more important when urban infrastructure is implemented in areas that are developing rapidly or where there is no established architectural culture.
Resumo:
In a general situation a non-uniform velocity field gives rise to a shift of the otherwise straight acoustic pulse trajectory between the transmitter and receiver transducers of a sonic anemometer. The aim of this paper is to determine the effects of trajectory shifts on the velocity as measured by the sonic anemometer. This determination has been accomplished by developing a mathematical model of the measuring process carried out by sonic anemometers, a model which includes the non-straight trajectory effect. The problem is solved by small-perturbation techniques, based on the relevant small parameter of the problem, the Mach number of the reference flow, M. As part of the solution, a general analytical expression for the deviations of the computed measured speed from the nominal speed has been obtained. The correction terms of both the transit time and the measured speed are of order M² in a rotational velocity field. The method has been applied to three simple, paradigmatic flows: one-directional horizontal and vertical shear flows, both mixed with a uniform horizontal flow.
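The O(M²) nature of such corrections can be checked against the elementary time-of-flight result for a straight path: a pure crosswind changes the transit time only at second order in the Mach number, while the standard two-way inversion recovers the along-path wind component exactly. The sketch below is a minimal numerical check of these textbook relations, not the paper's perturbation model; the path length and wind values are arbitrary.

```python
from math import sqrt

C = 340.0   # nominal speed of sound in air, m/s
L = 0.15    # acoustic path length, m (arbitrary)

def transit_time(L, along, cross):
    """Time of flight over a straight path of length L, with the wind split
    into components along and across the path (the sound front advances at
    along + sqrt(C^2 - cross^2) over the ground)."""
    return L / (along + sqrt(C**2 - cross**2))

# two-way inversion: recovers the along-path wind component exactly
t_fwd = transit_time(L, 5.0, 10.0)
t_bwd = transit_time(L, -5.0, 10.0)
u_measured = 0.5 * (L / t_fwd - L / t_bwd)

# a pure crosswind perturbs the transit time only at order M^2
M = 10.0 / C
rel_shift = transit_time(L, 0.0, 10.0) / (L / C) - 1.0   # ~ M^2 / 2
```

For a straight trajectory the relative transit-time shift is 1/sqrt(1 − M²) − 1 ≈ M²/2, which is why the first-order terms cancel and the leading corrections in the paper appear at order M².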
Resumo:
The measurement deviations of cup anemometers are studied by analyzing the rotational speed of the rotor at steady state (constant wind speed). The differences between the measured rotational speed and the average based on complete turns of the rotor are produced by the harmonic terms of the rotational speed. A cup anemometer sampling period includes a certain number of complete turns of the rotor plus one incomplete turn, and the residuals of the harmonic-term integration within that incomplete turn (as part of the averaging process) are responsible for the mentioned deviations. The errors in the rotational speed due to the harmonic terms are studied analytically and then experimentally, with data from more than 500 calibrations performed on commercial anemometers.
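The role of the incomplete turn can be reproduced numerically: averaging a rotational speed with a third-harmonic term (one oscillation per cup passage on a three-cup rotor) over an integer number of turns cancels the harmonic exactly, while a fractional extra turn leaves a residual a(1 − cos(kω₀T))/(kω₀T). The following sketch uses arbitrary illustrative values and is not the paper's analytical development.

```python
from math import sin, cos, pi

def mean_speed(omega0, a, k, T, n=200000):
    """Average of omega(t) = omega0 + a*sin(k*omega0*t) over a sampling
    period [0, T], computed by midpoint-rule integration."""
    dt = T / n
    total = sum(omega0 + a * sin(k * omega0 * (i + 0.5) * dt) for i in range(n))
    return total * dt / T

omega0 = 2 * pi * 10.0   # 10 turns per second (illustrative)
a = 0.05 * omega0        # 5% harmonic amplitude (illustrative)
k = 3                    # third harmonic: three oscillations per turn

full_turns = mean_speed(omega0, a, k, 1.00)   # exactly 10 complete turns
partial    = mean_speed(omega0, a, k, 1.05)   # 10.5 turns: incomplete last turn
# analytic residual left by the incomplete turn
residual = a * (1 - cos(k * omega0 * 1.05)) / (k * omega0 * 1.05)
```

Over the integer-turn period the measured mean equals ω₀; over the 10.5-turn period it deviates by the residual above, which is the mechanism the abstract identifies for the measurement deviations.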
Resumo:
For the energy valorization of alperujo, the residue of the two-phase olive oil extraction process, a drying process is necessary to reduce its moisture content from over 60% to less than 10% w/w (wet basis). In order to reduce primary energy consumption and obtain an economic return, gas turbine cogeneration (GT-CHP) is normally used as the heat source in this kind of drying facility. In recent years in Spain there have been several fires in such GT-CHP facilities, which have caused very high material losses. In some of these fires it has been suggested that the cause was the discharge of incandescent alperujo in the flue gases of the drying system. Therefore, the aim of this study is to determine, experimentally and analytically, the operating conditions under which a self-ignition process of the alperujo can begin during drying, and to establish the actual fire hazard in this type of GT-CHP system. For the analytical study, a mathematical model was formulated and validated that calculates the temperature and composition of the combustion gases at the inlet and outlet of the dryer as a function of the GT characteristic curves, the atmospheric conditions, and the flow rate and moisture content of the treated biomass. The model also provides the wet-bulb temperature, which is the maximum temperature the biomass can reach during the drying process, and determines the amount of biomass that can be dried completely as a function of the flow rate and inlet conditions of the combustion gases. Moreover, the layer and dust ignition temperatures of alperujo have been determined experimentally according to EN 50281-2-1:2000. With these results, it is shown that under normal operating conditions an alperujo drying process presents no self-ignition risk that could give rise to a fire, and the operating conditions under which a real risk of auto-ignition does exist have been established.
Resumo:
Polymers of N-substituted glycines (“peptoids”) containing chiral centers at the α position of their side chains can form stable structures in solution. We studied a prototypical peptoid, consisting of five para-substituted (S)-N-(1-phenylethyl)glycine residues, by NMR spectroscopy. Multiple configurational isomers were observed, but because of extensive signal overlap, only the major isomer containing all cis-amide bonds was examined in detail. The NMR data for this molecule, in conjunction with previous CD spectroscopic results, indicate that the major species in methanol is a right-handed helix with cis-amide bonds. The periodicity of the helix is three residues per turn, with a pitch of ≈6 Å. This conformation is similar to that anticipated by computational studies of a chiral peptoid octamer. The helical repeat orients the amide bond chromophores in a manner consistent with the intensity of the CD signal exhibited by this molecule. Many other chiral polypeptoids have similar CD spectra, suggesting that a whole family of peptoids containing chiral side chains is capable of adopting this secondary structure motif. Taken together, our experimental and theoretical studies of the structural properties of chiral peptoids lay the groundwork for the rational design of more complex polypeptoid molecules, with a variety of applications, ranging from nanostructures to nonviral gene delivery systems.
Resumo:
Maize (Zea mays ssp. mays) is genetically diverse, yet it is also morphologically distinct from its wild relatives. These two observations are somewhat contradictory: the first observation is consistent with a large historical population size for maize, but the latter observation is consistent with strong, diversity-limiting selection during maize domestication. In this study, we sampled sequence diversity, coupled with simulations of the coalescent process, to study the dynamics of a population bottleneck during the domestication of maize. To do this, we determined the DNA sequence of a 1,400-bp region of the Adh1 locus from 19 individuals representing maize, its presumed progenitor (Z. mays ssp. parviglumis), and a more distant relative (Zea luxurians). The sequence data were used to guide coalescent simulations of population bottlenecks associated with domestication. Our study confirms high genetic diversity in maize—maize contains 75% of the variation found in its progenitor and is more diverse than its wild relative, Z. luxurians—but it also suggests that sequence diversity in maize can be explained by a bottleneck of short duration and very small size. For example, the breadth of genetic diversity in maize is consistent with a founding population of only 20 individuals when the domestication event is 10 generations in length.
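The abstract's closing numbers can be checked with the classic neutral-drift result for the expected heterozygosity surviving a bottleneck, H_t = H₀(1 − 1/2N)^t. This deterministic formula is only a stand-in for the coalescent simulations actually used in the study, but it reproduces the reported figures.

```python
def retained_diversity(n_founders, generations):
    """Expected fraction of heterozygosity surviving a bottleneck of
    `n_founders` diploid individuals lasting `generations` generations
    (Wright-Fisher drift: H_t = H_0 * (1 - 1/(2N))^t)."""
    return (1 - 1 / (2 * n_founders)) ** generations

# the abstract's example: 20 founders for 10 generations
frac = retained_diversity(20, 10)
```

With 20 founders and a 10-generation domestication event the retained fraction is about 0.78, consistent with the ~75% of progenitor diversity the study finds in maize.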
Resumo:
Formation of the neuromuscular junction (NMJ) depends upon a nerve-derived protein, agrin, acting by means of a muscle-specific receptor tyrosine kinase, MuSK, as well as a required accessory receptor protein known as MASC. We report that MuSK does not merely play a structural role by demonstrating that MuSK kinase activity is required for inducing acetylcholine receptor (AChR) clustering. We also show that MuSK is necessary, and that MuSK kinase domain activation is sufficient, to mediate a key early event in NMJ formation—phosphorylation of the AChR. However, MuSK kinase domain activation and the resulting AChR phosphorylation are not sufficient for AChR clustering; thus we show that the MuSK ectodomain is also required. These results indicate that AChR phosphorylation is not the sole trigger of the clustering process. Moreover, our results suggest that, unlike the ectodomain of all other receptor tyrosine kinases, the MuSK ectodomain plays a required role in addition to simply mediating ligand binding and receptor dimerization, perhaps by helping to recruit NMJ components to a MuSK-based scaffold.
Resumo:
An evolutionary process is simulated with a simple spin-glass-like model of proteins to examine the origin of folding ability. At each generation, sequences are randomly mutated and subjected to a simulation of the folding process based on the model. According to the frequency of local configurations at the active sites, sequences are selected and passed to the next generation. After a few hundred generations, a sequence capable of folding globally into a native conformation emerges. Moreover, the selected sequence has a distinct energy minimum and an anisotropic funnel on the energy surface, which are the essential features for fast folding of proteins. The proposed model reveals that functional selection on the local configurations leads a sequence to fold globally into its native conformation at a faster rate.
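A minimal mutate-and-select loop of the kind the abstract describes can be sketched as follows. The bit-string "sequences", the mismatch "energy", and truncation selection are stand-ins for the paper's spin-glass model and its frequency-of-local-configurations selection criterion.

```python
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # hypothetical native pattern

def energy(seq):
    """Toy energy: number of mismatches with the native pattern (0 = folded)."""
    return sum(s != t for s, t in zip(seq, TARGET))

def evolve(pop_size=30, mut_rate=0.05, max_gen=2000):
    """Generational loop: random mutation of every sequence, then truncation
    selection keeping the lowest-energy half, which repopulates the rest."""
    n = len(TARGET)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for gen in range(max_gen):
        # each bit flips independently with probability mut_rate
        pop = [[1 - s if random.random() < mut_rate else s for s in ind]
               for ind in pop]
        pop.sort(key=energy)
        if energy(pop[0]) == 0:
            return gen, pop[0]
        pop = pop[:pop_size // 2] * 2   # survivors replace the culled half
    return max_gen, pop[0]

gen_found, best = evolve()
```

As in the abstract, a sequence reaching the zero-energy "native" state emerges after some number of generations; in the real model the selection acts only on local active-site configurations, yet global foldability emerges as a by-product.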
Resumo:
Applying a brief repolarizing pre-pulse to a depolarized frog skeletal muscle fiber restores a small fraction of the transverse tubule membrane voltage sensors from the inactivated state. During a subsequent depolarizing test pulse we detected brief, highly localized elevations of myoplasmic Ca2+ concentration (Ca2+ “sparks”) initiated by restored voltage sensors in individual triads at all test pulse voltages. The latency histogram of these events gives the gating pattern of the sarcoplasmic reticulum (SR) calcium release channels controlled by the restored voltage sensors. Both event frequency and clustering of events near the start of the test pulse increase with test pulse depolarization. The macroscopic SR calcium release waveform, obtained from the spark latency histogram and the estimated open time of the channel or channels underlying a spark, exhibits an early peak and a rapid, marked decline during large depolarizations. For smaller depolarizations, the release waveform exhibits a smaller peak and a slower decline. However, the mean rise time and mean amplitude of the individual sparks are quite similar at all test depolarizations and at all times during a given depolarization, indicating that the channel open times and conductances underlying sparks are essentially independent of voltage. Thus, the voltage dependence of SR Ca2+ release is due to changes in the frequency and pattern of occurrence of individual, voltage-independent, discrete release events.
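Reconstructing a macroscopic release waveform from a spark latency histogram amounts to convolving the histogram with a rectangular pulse of the estimated channel open time, so that each detected event contributes release over its open duration. A minimal sketch of that operation (the histogram values and the three-bin open time are invented for illustration):

```python
def release_waveform(latency_counts, open_time_bins):
    """Convolve the spark latency histogram with a rectangular pulse of
    `open_time_bins` bins: each spark contributes release for the estimated
    open duration of its underlying channel(s)."""
    n = len(latency_counts)
    out = [0.0] * (n + open_time_bins - 1)
    for i, c in enumerate(latency_counts):
        for j in range(open_time_bins):
            out[i + j] += c
    return out

# hypothetical latency histogram: events cluster early, then decline
hist = [8, 5, 3, 2, 1, 1, 0, 0]
wave = release_waveform(hist, 3)
```

Because the open time and spark amplitude are essentially voltage-independent, the shape of the macroscopic waveform (early peak, subsequent decline) is set entirely by the latency histogram, which is the abstract's central conclusion.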