31 results for Build-Up Back To Back LSB, Cold-Formed Steel Structures, Lateral Distortional Buckling

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Cold-drawn steel rods and wires retain significant residual stresses as a consequence of the manufacturing process. These residual stresses are known to be detrimental to the mechanical properties of the wires and to their durability in aggressive environments. Steel makers are aware of the problem and have developed post-drawing processes to try to reduce the residual stresses in the wires. The present authors have studied this problem for a number of years and have performed a detailed characterization of the residual stress state inside cold-drawn rods, using both experimental and numerical techniques. High-energy synchrotron sources have been particularly useful for this research. The results show how residual stresses evolve as a consequence of cold drawing and how they change with subsequent post-drawing treatments. The authors have been able to measure, for the first time, a complete residual strain profile along the diameter in both phases (ferrite and cementite) of a cold-drawn steel rod.

Relevance: 100.00%

Abstract:

In many university degrees, such as Building Engineering or Technical Architecture, the high density of content in the curriculum leaves students, after graduation, unable to deploy skills that were already acquired and assessed in the courses of the first years. From the Educational Innovation Group "Teaching of Structural Concrete" (GIEHE) at the Universidad Politécnica de Madrid (UPM), we have conducted a study assessing the specific skills that students retain after those first years. We worked with fourth-year UPM students and with Technical Architecture graduates who had also completed the Adaptation Course from Technical Architecture to Building Engineering. The work is part of the UPM-funded Educational Innovation Project "Integration of training and assessment of generic and specific skills in structural concrete". We evaluated specific skills learned in the areas of durability and quality control of concrete structures. The results show that, overall, students are not able to fully deploy skills acquired earlier, even those essential to their professional development. Possibly, the large amount of content taught in these degrees, together with "flat-profile" teaching and assessment, i.e., presenting and evaluating the fundamental and the accessory with the same intensity, is enough to explain these results.

Relevance: 100.00%

Abstract:

This paper develops an automatic procedure for the optimal numbering of members and nodes in tree structures. With it, the stiffness matrix is optimally conditioned whether a direct or a frontal algorithm is used to solve the system of equations. In spite of its effectiveness, the procedure is strikingly simple, and so is the computer program presented in the paper.
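The renumbering idea can be illustrated with a minimal sketch (not the paper's program, which is not reproduced here): breadth-first renumbering of a tree keeps the numbers at the two ends of every member close together, which narrows the band of the assembled stiffness matrix.

```python
from collections import deque

def renumber_tree(adj, root=0):
    """Breadth-first renumbering of a tree's nodes.

    adj: dict mapping node -> list of neighbouring nodes.
    Returns a dict mapping old node id -> new id. Numbering the
    nodes level by level keeps connected nodes close in number.
    """
    new_number, queue, seen = {}, deque([root]), {root}
    while queue:
        node = queue.popleft()
        new_number[node] = len(new_number)
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return new_number

def bandwidth(adj, numbering):
    """Half-bandwidth implied by a numbering: max |i - j| over members."""
    return max(abs(numbering[u] - numbering[v])
               for u in adj for v in adj[u])
```

For a chain of members whose original node ids are scrambled, the breadth-first numbering recovers the minimal bandwidth of 1.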

Relevance: 100.00%

Abstract:

We propose a new algorithm for the design of prediction structures with low delay and a limited penalty in rate-distortion performance for multiview video coding schemes. The algorithm is one element of a graph-theory-based framework for the analysis and optimization of delay in multiview coding schemes. Its objective is to find the best combination of prediction dependencies to prune from a multiview prediction structure, given a number of cuts. Exploiting the properties of the graph-based analysis of the encoding delay, the algorithm finds the best prediction dependencies to eliminate from an original prediction structure while limiting the number of cut combinations to evaluate. We show that it obtains optimal reductions of the encoding latency with a lower computational complexity than exhaustive-search alternatives.
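The underlying idea can be sketched as follows (an illustration, not the authors' algorithm): model the prediction structure as a directed graph, model encoding latency as the longest dependency chain under an assumed cost of one time unit per frame, and search for the cuts that minimise it. The paper's contribution is precisely to prune this search; it is shown brute-force here for clarity.

```python
from itertools import combinations

def latency(nodes, edges):
    """Encoding latency model: length of the longest dependency chain,
    assuming one time unit to encode each frame (illustrative)."""
    deps = {n: [u for (u, v) in edges if v == n] for n in nodes}
    memo = {}
    def finish(n):
        if n not in memo:
            memo[n] = 1 + max((finish(u) for u in deps[n]), default=0)
        return memo[n]
    return max(finish(n) for n in nodes)

def best_cuts(nodes, edges, n_cuts):
    """Pick n_cuts prediction dependencies whose removal minimises the
    latency model above, by exhaustive enumeration of combinations."""
    best = min(combinations(edges, n_cuts),
               key=lambda cut: latency(nodes, [e for e in edges
                                               if e not in cut]))
    return list(best)
```

On a four-frame chain 0→1→2→3, a single cut in the middle halves the dependency depth, which is the kind of trade-off the algorithm explores.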

Relevance: 100.00%

Abstract:

This work explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms have been designed with a view to their possible implementation on special-purpose processors of low complexity.

The first chapter reviews the techniques for the detection and measurement of gamma radiation used to construct the spectra treated throughout the work. The basic concepts related to the nature of hard electromagnetic radiation are re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, considered as a classification of the number of individual photon detections as a function of the energy associated with each photon. To that end, a brief description is given of the main phenomena of interaction between radiation and matter that condition the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with special emphasis on semiconductor detectors, as these are the most widely used today. Finally, the fundamental electronic subsystems for preconditioning and treating the signal delivered by the detector, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of main interest for this work is the multichannel analyzer, which carries out the quantitative treatment of the signal and builds a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum, and each radionuclide contributing to a non-pure source leaves its fingerprint in it.

The second chapter exhaustively reviews the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to determine their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints of the problem: the ability to treat low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms amenable to implementation on dedicated VLSI processors.

The analysis problem is formally stated in the third chapter following these guidelines, and it is shown to admit a solution within the theory of linear associative memories: an operator based on this type of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms for constructing the operator are proposed, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. Their adaptive nature gives the associative memory great flexibility regarding the progressive incorporation of new information.

The fourth chapter addresses an additional, highly complex problem: the spectral deformations introduced by instrumental drifts in the detector and in the preconditioning electronics, which invalidate the linear regression model used to describe the problem spectrum. A model is derived that includes these deformations as additional contributions to the composite spectrum; it implies a simple extension of the associative memory that tolerates drifts in the problem mixture and performs a robust analysis of contributions. The extension method is based on the small-perturbation hypothesis.

Laboratory practice shows that instrumental drifts can sometimes cause severe spectral distortions that the previous model cannot handle. The fifth chapter therefore treats measurements affected by strong drifts from the standpoint of non-linear optimization theory. This reformulation leads to a recursive algorithm, inspired by the Gauss-Newton method, that introduces the concept of a feedback linear memory: an operator with a markedly improved capability to decompose mixtures with strong drift, without the excessive computational load of classical non-linear optimization algorithms.

The work closes with a discussion of the results obtained at the three main levels of study (chapters three, four and five), the final statement of the main conclusions, and an outline of possible lines of future work.
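The core linear-algebra step can be sketched numerically (the reference spectra and activities below are invented for illustration; the thesis's adaptive, VLSI-oriented construction algorithms are not reproduced): a composite spectrum is decomposed by applying the Moore-Penrose pseudo-inverse of a library of pure-radionuclide spectra.

```python
import numpy as np

# Library of reference (pure-radionuclide) spectra, one per column.
# These channel counts are invented for illustration only.
A = np.array([[10.,  1.],
              [ 5.,  4.],
              [ 1.,  9.],
              [ 0.,  3.]])

# Composite spectrum: 2 parts of nuclide 1 plus 0.5 parts of nuclide 2.
y = A @ np.array([2.0, 0.5])

# Multiple linear regression / linear associative memory: the solving
# operator is the Moore-Penrose pseudo-inverse of the library matrix.
M = np.linalg.pinv(A)
activities = M @ y        # recovers the relative activities
```

With noisy real spectra this same operator yields the least-squares estimate of the activities rather than an exact recovery.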

Relevance: 100.00%

Abstract:

So-called creative processes in general, and those of architectural design in particular, approach their object mainly through procedure, that is, through the strategic, the methodological and/or the paradigmatic. In these approaches, moreover, the potential of information is usually neither complete nor explicitly contemplated or, when it is, it is handled unconsciously or referred back to the procedural. Likewise, their interest is focused either on the proposed object or result, or on the process, but without attending to their constituent, namely the information itself. Therefore, as physics now demands, the informational constituent basis of these approaches has so far neither been considered nor systematized. Along with this omission, such approaches do not allow each person to configure his or her own process autonomously, independently and in full, because the procedures in question rest on contextual, cultural and/or processual frameworks, and thus reflect an orientation limited in space-time.

A potency is therefore proposed ("those qualities possessed by things in virtue of which they are totally impassive or immutable, or are not easily changed...", as Aristotle defines it in the Metaphysics) as the possibility of simultaneously alluding to a complete informational range and pointing towards the full elaboration of a personal process of one's own. From the informational standpoint, which is in turn energetic depending on the scientific discipline considered, a minimum set of attributes or terms is distinguished first: potencies that condense the complete informational range, that is, minimal maxima. These attributes form the qualitative phase of information, here called accompaniment.

Secondly, drawing on brain function, on quantum space-time, and on recent experiments and research in biology, physics and especially neuroscience, new lines are opened for accessing information, previously contemplated linearly, locally and as a separate entity. This second approach thus presents a potency of data intensification, which allows the data available in the brain to increase and thereby makes it possible to avoid the "blank page" problem. This phase is named promotion.

Thirdly, both phases together constitute generation, the proposal of this thesis: a change of any kind in which someone is the agent of something, specifically when a human being mediates between events. Fusing the two phases simultaneously adds a global potential con-formation that is synergistically more than the sum of both. In order to materialize and systematize this generation or potency, an implementation is presented: an analytical-geometrical-parametrical model is developed and its application to a case study is described. The model also exhibits a self-referential or holographic functioning, mirroring both the scientific studies reviewed and the functioning of the attributes and of all the potencies presented in this research.

Relevance: 100.00%

Abstract:

The new-user cold-start issue is a serious problem in recommender systems, as it can lead to the loss of new users who stop using the system because of the poor accuracy of the recommendations received in that first stage, in which they have not yet cast enough votes to feed the recommender system's collaborative filtering core. It is therefore particularly important to design new similarity metrics that provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure, refined through optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and MovieLens databases, obtaining important improvements in accuracy, precision and recall when applied to new-user cold-start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross-validation.
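The leave-one-out evaluation mentioned at the end can be sketched as follows (a toy cosine-similarity predictor on an invented rating matrix, not the paper's neurally optimized metric): each known rating is hidden in turn, predicted from the remaining ones, and the absolute errors are accumulated.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity over co-rated items only (0 if none)."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

def loo_mae(R):
    """Leave-one-out: hide each rating, predict it as the similarity-
    weighted mean of the other users' ratings, return the mean error."""
    errors = []
    for u in range(R.shape[0]):
        for i in np.flatnonzero(R[u]):
            hidden = R.copy()
            hidden[u, i] = 0          # hide the rating under test
            sims = np.array([cosine_sim(hidden[u], hidden[v]) if v != u
                             else 0.0 for v in range(R.shape[0])])
            raters = np.flatnonzero((R[:, i] > 0) &
                                    (np.arange(R.shape[0]) != u))
            if raters.size and sims[raters].sum() > 0:
                pred = sims[raters] @ R[raters, i] / sims[raters].sum()
                errors.append(abs(pred - R[u, i]))
    return float(np.mean(errors))
```

A cold-start user is simply a row with very few non-zero entries; the sparser the row, the fewer co-rated items the similarity can rely on, which is what the paper's metric is designed to mitigate.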

Relevance: 100.00%

Abstract:

One of the main problems relief teams face after a natural or man-made disaster is how to plan rural road repair work to take maximum advantage of limited financial and human resources. Previous research focused on speeding up repair work or on selecting the locations of health centers so as to minimize transport times for injured citizens. In spite of its good results, that research does not take into account another key factor: survivors' accessibility to resources. In this paper we account for the accessibility issue: we maximize the number of survivors that reach the nearest regional center (a city where economic and social activity is concentrated) in minimum time, by planning which rural roads should be repaired given the available financial and human resources. This is a combinatorial problem, since the number of connections between cities and regional centers grows exponentially with problem size, and exact methods cannot reach an optimal solution in reasonable time. To solve the problem we propose an adaptation of the Ant Colony System, which is based on ants' foraging behavior. Ants stochastically build minimal paths to regional centers and decide whether damaged roads are repaired on the basis of pheromone levels, accessibility heuristic information and the available budget. The proposed algorithm is illustrated with an example based on the 2010 Haiti earthquake, and its performance is compared with that of another metaheuristic, GRASP.
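A minimal sketch of the repair-planning idea (the problem encoding, parameter values and data are invented; the paper's Ant Colony System is considerably richer): ants stochastically select damaged roads to repair under a budget, guided by pheromone levels, and each plan is scored by how many villages can reach the regional center.

```python
import random

def accessibility(villages, center, roads, repaired):
    """Number of villages that can reach the regional center through
    intact (cost 0) or repaired roads, via a graph search."""
    usable = {}
    for (a, b), cost in roads.items():
        if cost == 0 or (a, b) in repaired:
            usable.setdefault(a, []).append(b)
            usable.setdefault(b, []).append(a)
    seen, stack = {center}, [center]
    while stack:
        n = stack.pop()
        for nb in usable.get(n, []):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return sum(v in seen for v in villages)

def acs_plan(villages, center, roads, budget, ants=20, iters=40, rho=0.1):
    """Ant-Colony-style search: each ant greedily adds damaged roads in
    a pheromone-biased random order while the budget allows; the best
    plan found reinforces the pheromone on its roads."""
    damaged = [r for r, c in roads.items() if c > 0]
    tau = {r: 1.0 for r in damaged}
    best, best_score = set(), -1
    for _ in range(iters):
        for _ in range(ants):
            plan, spent = set(), 0.0
            for r in sorted(damaged, key=lambda r: -tau[r] * random.random()):
                if spent + roads[r] <= budget:
                    plan.add(r)
                    spent += roads[r]
            score = accessibility(villages, center, roads, plan)
            if score > best_score:
                best, best_score = plan, score
        for r in damaged:              # evaporate, then deposit on the best plan
            tau[r] = (1 - rho) * tau[r] + (rho if r in best else 0.0)
    return best, best_score
```

In a toy network where repairing two cheap roads connects all villages while one expensive road exceeds the budget, the search settles on the cheap pair.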

Relevance: 100.00%

Abstract:

This doctoral thesis presents a palaeobotanical study of six tufa deposits located near the villages of Tubilla del Agua, Sedano, Herrán, Tobera and Frías, in the province of Burgos, and Ocio, in the province of Álava. The fossil record found in these outcrops is analyzed as a whole in order to trace the evolution of the vegetation of the Castilian Cantabrian biogeographical sector, the territory for which the palaeobotanical findings are representative: the smallest floristically homogeneous region that includes all the prospected sites. The temporal frame of the study is the late Quaternary, from the Middle Pleistocene to the present. This interval is defined by the age of the deposits, which was determined (for the sites lacking reliable ages) by dating 20 samples taken from the 13 lithostratigraphic units identified, using radiocarbon, uranium-series disequilibrium and amino acid racemization techniques. The geochronological results, together with the geomorphological analysis of the sites, have made it possible to link the build-up of the 13 lithostratigraphic units to different climatic stages, ranging from the most extreme conditions of the Last Glacial Maximum to the milder ones of interglacial Marine Isotope Stages 1 and 5.

A total of 1,820 fossils were recovered from the tufa deposits, most of them leaf impressions, but also moulds of female cones, branches and bark, as well as 42 charcoal fragments and subfossil remains of Pinus sp. Taxonomic identification of these remains was based mainly on diagnostic morphological characters, and 28 taxa belonging to the subclasses Bryidae, Polypodiidae, Pinidae and Magnoliidae were described. The flora of the studied sites can be classified into three groups according to ecological requirements: (i) two species highly tolerant of continental climate (Pinus nigra and Quercus faginea), well represented in most of the deposits; (ii) a set of trees and shrubs that usually play the role of accompanying species in sub-Mediterranean and Eurosiberian Iberian forests; and (iii) hydrophytic or hygrophilous taxa associated with the active tufa ecosystem and the riparian vegetation.

The Discussion proposes and analyzes the hypothesis that P. nigra and Q. faginea were the leading species of the zonal vegetation of the Castilian Cantabrian sector during the late Quaternary. They could have persisted as such even during the coldest stages, owing to their wide ecological amplitude and, in the case of the Portuguese oak, to its capacity for vegetative reproduction. By contrast, the mesophytic and Eurosiberian taxa may have undergone expansions and retractions of their populations in step with climatic oscillations. The diverse orography of the Castilian Cantabrian sector, however, provides sites combining physiographic variables in ways that could have acted as microrefugia, sheltering some mesothermic and Eurosiberian taxa during glacial periods. Finally, the recent evolutionary history of the vegetation of this territory has been marked by human action, evident from the Neolithic onwards, which led to the degradation and reduction of the forest cover and to the extinction of the European black pine in the Castilian Cantabrian sector within the last two millennia.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This paper describes the participation of DAEDALUS at the LogCLEF lab in CLEF 2011. This year, the objectives of our participation are twofold. The first is to analyze whether there is any measurable effect on the success of search queries when the native language and the interface language chosen by the user differ. The idea is to determine whether this difference conditions the way in which the user interacts with the search application. The second is to analyze the user context and interaction with the system in the case of successful queries, to discover any relation among the user's native language, the language of the resource involved and the interaction strategy adopted by the user to find that resource. Only 6.89% of the 628,607 queries in the 320,001 sessions with at least one search query in the log are successful. The main conclusion that can be drawn is that, in general for all languages, whether or not the native language matches the interface language does not seem to affect the success rate of search queries. On the other hand, the analysis of the strategy adopted by users when looking for a particular resource shows that people tend to use the simple search tool, frequently first running short queries built up of just one specific term and then browsing through the results to locate the expected resource.
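The match-versus-mismatch comparison described above boils down to a grouped success-rate computation over the query log. A minimal sketch follows; the field names and sample records are hypothetical, not the actual LogCLEF log schema:

```python
from collections import defaultdict

def success_rates(queries):
    """Group queries by whether the user's native language matches the
    chosen interface language, and compute each group's success rate.
    (Illustrative field names; not the real LogCLEF schema.)"""
    stats = defaultdict(lambda: [0, 0])  # match? -> [successes, total]
    for q in queries:
        key = q["native_lang"] == q["interface_lang"]
        stats[key][0] += q["success"]
        stats[key][1] += 1
    return {k: s / t for k, (s, t) in stats.items()}

# Hypothetical log entries for illustration only.
log = [
    {"native_lang": "es", "interface_lang": "es", "success": 1},
    {"native_lang": "es", "interface_lang": "en", "success": 0},
    {"native_lang": "fr", "interface_lang": "en", "success": 0},
    {"native_lang": "de", "interface_lang": "de", "success": 0},
]
rates = success_rates(log)  # {True: rate for matching, False: for mismatching}
```

Comparing `rates[True]` against `rates[False]` over the full log is the kind of test behind the paper's conclusion that the match has little effect.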

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Aquaponics is the science of integrating intensive fish aquaculture with plant production in recirculating water systems. Although ion waste production by fish cannot satisfy all plant requirements, less is known about the relationship between the total feed provided to fish and the production of milliequivalents (mEq) of different macronutrients for plants, especially for the nutrient flow hydroponics used for strawberry production in Spain. That knowledge is essential to assess the amount of macronutrients available in aquaculture systems, so that farmers can estimate how much nutrient needs to be supplemented in the waste water from fish to produce viable plant growth. In the present experiment, tilapia (Oreochromis niloticus L.) were grown in a small-scale recirculating system at two different densities while growth and feed consumption were recorded every week for five weeks. At the same time points, water samples were taken to measure pH, EC25 and the build-up of HCO3−, Cl−, NH4+, NO2−, NO3−, H2PO4−, SO42−, Na+, K+, Ca2+ and Mg2+. The total increase in mEq of each ion per kg of feed provided to the fish was highest for NO3−, followed, in decreasing order, by Ca2+, H2PO4−, K+, Mg2+ and SO42−. The total amount of feed required per mEq ranged from 1.61 to 13.1 kg for the four most abundant ions (NO3−, Ca2+, H2PO4− and K+) at a density of 2 kg fish m−3, suggesting that it would be rather easy to maintain small populations of fish to reduce the cost of hydroponic solution supplementation for strawberries.
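The "feed required per mEq" figure is simply the reciprocal of the accumulation rate per kg of feed. A sketch of that arithmetic, using placeholder rates chosen only to land roughly in the reported 1.61–13.1 kg range (they are not the measured values from the paper):

```python
# Placeholder accumulation rates: mEq of each ion per kg of feed.
# Illustrative values only, NOT the paper's measured data.
meq_per_kg_feed = {"NO3-": 0.62, "Ca2+": 0.31, "H2PO4-": 0.12, "K+": 0.076}

# Feed (kg) needed to supply one mEq of each ion is the reciprocal rate.
feed_per_meq = {ion: 1.0 / rate for ion, rate in meq_per_kg_feed.items()}
```

A farmer could scale `feed_per_meq` by a crop's daily mEq demand to size the fish population needed to offset supplementation costs.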

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Dislocation mobility, the relation between applied stress and dislocation velocity, is an important property for modeling the mechanical behavior of structural materials. These mobilities reflect the interaction between the dislocation core and the host lattice and, thus, atomistic resolution is required to capture their details. Because the mobility function is multiparametric, its computation is often highly demanding computationally. Optimizing how tractions are applied can be greatly advantageous in accelerating convergence and reducing the overall computational cost of the simulations. In this paper we perform molecular dynamics simulations of ½ 〈1 1 1〉 screw dislocation motion in tungsten using step and linear time functions for applying the external stress. We find that linear ramps over time scales of the order of 10–20 ps reduce fluctuations and speed up convergence to the steady-state velocity by up to a factor of two.
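The two loading schemes compared above differ only in how the target stress is reached in time. A generic sketch of such a time function (not the paper's actual implementation, and the parameter values are illustrative):

```python
def applied_stress(t, sigma_max, ramp_time=0.0):
    """External stress at time t (ps). ramp_time = 0 gives a step load
    applied instantaneously; a finite ramp_time (e.g. 10-20 ps, the range
    the paper reports as effective) ramps the stress linearly up to
    sigma_max, damping velocity fluctuations in the MD run."""
    if ramp_time <= 0.0 or t >= ramp_time:
        return sigma_max
    return sigma_max * t / ramp_time
```

In an MD driver loop this function would set the traction applied to the boundary atoms at each timestep; the steady-state velocity is then extracted once the ramp is over and fluctuations have settled.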

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, moldability and production economy, among other advantages. 
As is well known, it has high compressive strength but low tensile strength, and for this reason it is reinforced with steel bars to form reinforced concrete, a material that has become the most important constructive solution of our time. Despite being such a widely used material, there are aspects of concrete behavior that are not yet fully understood, as is the case of its response to the effects of an explosion. This is a topic of particular relevance because the events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure during an explosive event is produced by the impact of the pressure shock wave generated in the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those produced by the loads usually considered in engineering. This thesis deepens the knowledge of the material behavior of concrete subjected to explosions. To that end, it is crucial to have experimental results of concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out in the military domain and the results obtained are not publicly available. Moreover, in experimental campaigns with explosives conducted by civil institutions, the high cost of access to explosives and to proper test fields does not allow for the testing of a large number of samples. For this reason, the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced. 
This is due, first, to the heterogeneity of concrete itself and, second, to the difficulty inherent in testing with explosions, for reasons such as difficulties in the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis we have designed a novel device that allows up to four concrete slabs to be tested under the same detonation, which, apart from providing a statistically representative number of samples, represents a significant cost saving. A total of 28 slabs, both reinforced and plain concrete and of two different mixes, were tested with this device. Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques nowadays represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loading. However, to obtain reliable results it is crucial to have material constitutive models that take into account the parameters governing the behavior for the load case under study. In this regard, it is noteworthy that most constitutive models developed for concrete at high strain rates come from the ballistic field, which is dominated by large compressive stresses in the local neighbourhood of the area affected by the impact. In the case of concrete elements subjected to an explosion, the compressive stresses are much more moderate, while tensile stresses are generally what cause material failure. This thesis discusses the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs under blast loading are the tensile strength and the post-failure softening. 
Based on these results, we have developed a constitutive model for concrete at high strain rates that only takes tensile failure into account. This model is based on the embedded cohesive crack model with strong discontinuity approach developed by Planas and Sancho, which has proven its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow its use in the dynamic regime. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile response, with the softening behavior of concrete being of particular relevance.
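The softening behavior highlighted above is, in cohesive crack models, a curve relating the transmitted stress to the crack opening. A common choice is the exponential softening law; the sketch below uses that standard form as an illustration, not necessarily the exact law adopted in the thesis:

```python
import math

def cohesive_stress(w, f_t, G_F):
    """Exponential cohesive softening law sigma(w) = f_t * exp(-f_t*w/G_F).
    w: crack opening, f_t: tensile strength, G_F: fracture energy (the
    area under the full curve). Illustrative standard form only; the
    thesis's actual softening curve may differ."""
    return f_t * math.exp(-f_t * w / G_F)
```

The stress equals the tensile strength at zero opening and decays monotonically, so the tensile strength sets the failure onset and G_F controls how fast the post-peak softening proceeds, the two parameters the thesis identifies as governing blast failure.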

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Irradiation with swift heavy ions (SHI), roughly defined as those having atomic masses larger than 15 and energies exceeding 1 MeV/amu, may lead to significant modification of the irradiated material in a nanometric region around the (straight) ion trajectory (latent tracks). In the case of amorphous silica, SHI irradiation creates nano-tracks of higher density than the virgin material (densification). As a result, the refractive index is increased with respect to that of the surroundings. Moreover, track overlapping leads to continuous amorphous layers that present a significant contrast with respect to the pristine substrate. We have recently demonstrated that SHI irradiation produces a large number of point defects, easily detectable by several experimental techniques (work presented in the parallel conference ICDIM). The mechanisms of energy transfer from the SHI to the target material originate in the high electronic excitation induced in the solid. A number of phenomenological approaches have been employed to describe these mechanisms: Coulomb explosion, thermal spike, non-radiative exciton decay, bond weakening. However, a detailed microscopic description is missing due to the difficulty of modeling the time evolution of the electronic excitation. In this work we have employed molecular dynamics (MD) calculations to determine whether the irradiation effects are related to the thermal phenomena described by MD (in the ps domain) or to electronic phenomena (sub-ps domain), e.g., exciton localization. We have carried out simulations of up to 100 ps with large boxes (30 × 30 × 8 nm³) using a home-modified version of MDCASK that allows us to define a central hot cylinder (the ion track) from which heat flows to the surrounding cold bath (the unirradiated sample). We observed that once the cylinder has cooled down, the Si and O coordination numbers are 4 and 2, respectively, as in virgin silica. 
On the other hand, the density of the (now cold) cylinder increases with respect to that of virgin silica and, furthermore, the ring size of the silica network decreases. Both effects are in agreement with the observed densification. In conclusion, purely thermal effects do not explain the generation of point defects upon irradiation, but they do account for the densification of silica.
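The coordination-number check reported above is a routine piece of MD post-processing: count the neighbours of each atom within a bond cutoff. A naive sketch follows; the cutoff and coordinates are arbitrary placeholders, and a real analysis would use cell lists and periodic boundary conditions rather than this O(N²) loop:

```python
import math

def coordination_numbers(positions, cutoff=2.0):
    """Count neighbours within `cutoff` (arbitrary length units) for each
    atom. Simplistic all-pairs version for illustration; not the actual
    MDCASK post-processing, which must handle periodic boundaries."""
    n = len(positions)
    coord = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < cutoff:
                coord[i] += 1
                coord[j] += 1
    return coord
```

Averaging the result separately over Si and O atoms after the track has cooled is what yields the 4 and 2 values quoted for recovered virgin silica.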

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Office automation is one of the fields where the complexity associated with technologies and working environments is best shown. This is the starting point we have chosen to build up a theoretical model that presents a picture quite different from the one traditionally considered. Through the development of the model, the levels of complexity associated with office automation and office environments have been identified, and a relationship between them has been established. Thus, the model allows us to state a general principle for the sociotechnical design of office automation systems, comprising the ontological distinctions needed to properly evaluate each particular technology and its potential contribution to office automation. From this comes the model's taxonomic ability to draw a global perspective of the state of the art in office automation technologies.