928 results for CALCULATED LEVELS, J, PI USING SHELL MODEL


Relevance:

100.00%

Publisher:

Abstract:

The Global Ocean Sampling (GOS) expedition is currently the largest and geographically most comprehensive metagenomic dataset, including samples from the Atlantic, Pacific, and Indian Oceans. This study makes use of the wide range of environmental conditions and habitats encompassed by the GOS sites to investigate the ecological structuring of bacterial and archaeal taxon ranks. Community structures based on taxonomically classified 16S ribosomal RNA (rRNA) gene fragments at the phylum, class, order, family, and genus rank levels were examined using multivariate statistical analysis, and the results were inspected in the context of oceanographic environmental variables and structured habitat classifications. At all taxon rank levels, the community structures of neritic, oceanic, and estuarine biomes, as well as of more exotic biomes (salt marsh, lake, mangrove), were readily distinguishable from one another. A strong structuring of the communities with chlorophyll a concentration, and a weaker yet significant structuring with temperature and salinity, were observed. Furthermore, there were significant correlations between community structures and habitat classification. These results were used to further investigate one-to-one relationships between taxa and environment, and provided indications of ecological preferences shaped by primary production for both cultured and uncultured bacterial and archaeal clades.
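
As a toy illustration of the kind of analysis described (not the authors' pipeline; the abundance table and chlorophyll values below are invented), one can relate pairwise community dissimilarity to an environmental gradient:

```python
# Minimal sketch: does community composition track chlorophyll a?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 10 samples x 6 genus-level relative abundances,
# plus one chlorophyll a value per sample (mg/m^3, invented).
abundances = rng.dirichlet(np.ones(6), size=10)
chlorophyll = rng.uniform(0.05, 2.0, size=10)

# Pairwise Bray-Curtis community dissimilarity and the matching
# pairwise differences in chlorophyll a.
community_dist = pdist(abundances, metric="braycurtis")
env_dist = pdist(chlorophyll[:, None], metric="euclidean")

# Mantel-style rank correlation; a real analysis would assess
# significance with a permutation test, since pairwise distances
# are not independent observations.
rho, p = spearmanr(community_dist, env_dist)
print(f"Spearman rho = {rho:.3f}")
```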

Relevance:

100.00%

Publisher:

Abstract:

Question: How do tree species identity, microhabitat and water availability affect inter- and intraspecific interactions between juvenile and adult woody plants? Location: Continental Mediterranean forests in Alto Tajo Natural Park, Guadalajara, Spain. Methods: A total of 2066 juveniles and adults of four co-occurring tree species were mapped in 17 plots. The frequency of juveniles in different microhabitats and at different water availability levels was analysed using log-linear models. We used nearest-neighbour contingency table analysis of spatial segregation and J-functions to describe the spatial patterns. Results: We found a complex spatial pattern that varied according to species identity and microhabitat. Recruitment was more frequent in gaps for Quercus ilex, while the other three species recruited preferentially under shrubs or trees depending on the water availability level. Juveniles were not spatially associated with conspecific adults and were segregated from them in many cases. Spatial associations, both positive and negative, were more common at higher water availability levels. Conclusions: Our results do not agree with expectations from the stress-gradient hypothesis, suggesting that positive interactions do not increase in importance with increasing aridity in the study ecosystem. Regeneration patterns are species-specific and depend on microhabitat characteristics and dispersal strategies. In general, juveniles do not appear to seek the protection of conspecific adults. This work contributes to the understanding of species coexistence, demonstrating the importance of a multispecies approach across several plots to overcome the limitations of simple pairwise comparisons in a limited number of sites.
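
A minimal sketch of the nearest-neighbour contingency table underlying the segregation analysis (invented coordinates; the study's actual tests also involve log-linear models and J-functions):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical mapped plot: 0 = adult, 1 = juvenile.
xy = rng.uniform(0, 50, size=(200, 2))    # coordinates in metres
label = rng.integers(0, 2, size=200)

# Nearest neighbour of every plant (k=2 because the closest point
# returned by the query is the point itself).
tree = cKDTree(xy)
_, idx = tree.query(xy, k=2)
nn = idx[:, 1]

# 2x2 table: rows = focal type, columns = nearest-neighbour type.
table = np.zeros((2, 2), dtype=int)
for i in range(len(xy)):
    table[label[i], label[nn[i]]] += 1
print(table)
# An excess on the diagonal relative to random labelling indicates
# spatial segregation of juveniles from adults (testable by permutation).
```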

Relevance:

100.00%

Publisher:

Abstract:

This paper presents an initiative for monitoring the acquisition of competences by a team of students with different backgrounds facing the experience of working by projects and in a project. These students are bachelor engineering graduates, inexperienced in the project management field, who take this course on a time-shared basis alongside other activities. The goal of the experience is to increase the competence levels acquired by using a structured web-based portfolio tool, helping to reinforce how much different project management approaches matter for the final product and how important it is to maintain integration throughout the project. Monitoring is carried out by observing how the work is being done and by measuring different technical parameters per participant. This information could make it possible to give the students involved additional feedback on their individual competences and to identify new opportunities for personal improvement. These capabilities are strongly requested by companies in their daily work, and they can also be very convenient for students when organizing their PhD work.

Relevance:

100.00%

Publisher:

Abstract:

An equivalent circuit model is applied to describe the operating characteristics of quantum dot intermediate band solar cells (QD-IBSCs); it accounts for the recombination paths from the conduction band (CB) to the intermediate band (IB), from the IB to the valence band (VB), and for the direct CB-VB transition. In this work, the measured dark J-V curves of QD-IBSCs (with the QD region either non-doped or directly Si-doped to n-type) and of a reference GaAs p-i-n solar cell (no QDs) were fitted with this model in order to extract the diode parameters. Simulations were then performed with the extracted diode parameters to evaluate the solar cell characteristics under concentration. In the case of the QDSC with Si-doped (hence partially filled) QDs, a fast recovery of the open-circuit voltage (Voc) was observed in the low-concentration range due to the IB effect. Furthermore, at around 100X concentration, the Si-doped QDSC could outperform the reference GaAs p-i-n solar cell if the IB current source were increased roughly sixteen-fold relative to our present cell, to about 10 mA/cm2.
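
For orientation, the parameter-extraction step can be pictured with a generic two-diode dark J-V fit (this is not the authors' three-path IB equivalent circuit, and all numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

kT = 0.02585  # eV, thermal energy at ~300 K

def dark_jv(v, j01, j02):
    # Generic two-diode dark current density (ideality factors 1 and 2).
    return j01 * (np.exp(v / kT) - 1.0) + j02 * (np.exp(v / (2 * kT)) - 1.0)

def log_dark_jv(v, lg_j01, lg_j02):
    # Fitting in log space weights low- and high-bias points evenly.
    return np.log10(dark_jv(v, 10.0**lg_j01, 10.0**lg_j02))

# Synthetic stand-in for a measured dark J-V curve (A/cm^2).
v = np.linspace(0.2, 1.0, 40)
j_meas = dark_jv(v, 1e-19, 1e-11)

popt, _ = curve_fit(log_dark_jv, v, np.log10(j_meas), p0=(-18.0, -10.0))
print("extracted log10(J01), log10(J02):", popt)
```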

Relevance:

100.00%

Publisher:

Abstract:

A clear statement, quoted verbatim here (Byers et al., 1938), defines the framework of this special issue: “True soil is the product of the action of climate and living organism upon the parent material, as conditioned by the local relief. The length of time during which these forces are operative is of great importance in determining the character of the ultimate product. Drainage conditions are also important and are controlled by local relief, by the nature of the parent material or underlying rock strata, or by the amount of precipitation in relation to rate of percolation and runoff water. There are, therefore, five principal factors of soil formation: Parent material, climate, biological activity, relief and time. These soil forming factors are interdependent, each modifying the effectiveness of the others.” Owing to the various processes associated with its formation and genesis, soil dynamics exhibits a high complexity that creates several levels of structure, using this term in a broad sense.

Relevance:

100.00%

Publisher:

Abstract:

An optimization method for conceptual design in Aeronautics is presented that is based on the use of reduced models, also called surrogate models. The various ingredients of the target function are calculated for each individual using surrogates of the associated technical disciplines, which are constructed via high-order singular value decomposition (HOSVD) and one-dimensional interpolation. These surrogates result from a limited number of CFD-calculated snapshots. The surrogates can be combined with an optimization method that is either global, such as a genetic algorithm, or local, such as a gradient-like method. The resulting method is both flexible and much more computationally efficient than the conventional approach based on direct calculation of the target function, especially when a large number of free design parameters and/or tunable modeling parameters are present. The method is illustrated considering a simplified version of the conceptual design of an aircraft empennage.
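
The two ingredients named in the abstract can be sketched for the one-parameter case, where the high-order SVD reduces to an ordinary SVD of a snapshot matrix (all data below are invented):

```python
import numpy as np

# Snapshots of a scalar field (100 grid points) computed by an
# expensive solver at 8 values of one design parameter.
params = np.linspace(0.0, 1.0, 8)
x = np.linspace(0.0, 1.0, 100)
snapshots = np.array([np.sin(2 * np.pi * (1 + p) * x) for p in params])

# Truncated SVD: spatial modes and per-snapshot modal amplitudes.
u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                   # retained modes
amps = u[:, :r] * s[:r]                 # shape (8, r)

# Surrogate evaluation at a new parameter value: 1D interpolation of
# each modal amplitude, then recombination with the spatial modes.
p_new = 0.37
a_new = np.array([np.interp(p_new, params, amps[:, k]) for k in range(r)])
field_new = a_new @ vt[:r]              # cheap approximation of the solver
print(field_new.shape)                  # (100,)
```

With several design parameters, the snapshot array becomes a tensor and the SVD is replaced by the HOSVD of its mode unfoldings; the interpolation remains one-dimensional along each parameter axis.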

Relevance:

100.00%

Publisher:

Abstract:

ATM, SDH or satellite links were used in the last century as broadcasters' contribution networks. However, the attractive price of IP networks has been changing the infrastructure of these networks over the last decade. Nowadays IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to line errors. In streaming, correction is attempted at the destination, while in file transfer, information is retransmitted until a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed and line errors result in a loss of quality at the destination, in file copying the difference between coding speed and line speed, together with transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is to optimize the workflow and maximize the image quality; for that reason a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transfer with those of streaming while setting aside the disadvantages of both models. The method is based on two patents and consists of the reliable transfer of the headers and of the data considered vital for reproduction; the rest of the data is sent by streaming, allowing recovery operations and error concealment. Using this model, image quality is maximized for the available time window. In this paper we first give a brief overview of broadcasters' requirements and the solutions offered by IP networks. We then focus on a different solution for video file transfer, taking the example of a broadcast centre with mobile units (unidirectional video link) and regional headends (bidirectional link), and we present a video file transfer method that satisfies the broadcasters' requirements.
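
The split between vital and streamed data can be pictured with a toy model (the real method is patented and JPEG2000-aware; the split point and the channels below are stand-ins, not the actual protocol):

```python
import random

def transfer(data: bytes, vital_len: int, loss_rate: float = 0.05) -> bytes:
    """Ship a vital prefix reliably; stream the rest over a lossy path."""
    vital, rest = data[:vital_len], data[vital_len:]
    received = bytearray(vital)             # reliable path: always intact
    rng = random.Random(3)
    chunk = 1024
    for off in range(0, len(rest), chunk):  # lossy streaming path
        block = rest[off:off + chunk]
        if rng.random() < loss_rate:
            block = bytes(len(block))       # concealment: zero-fill the gap
        received.extend(block)
    return bytes(received)

payload = bytes(range(256)) * 64            # invented stand-in for a video file
out = transfer(payload, vital_len=512)
assert out[:512] == payload[:512]           # vital part guaranteed intact
```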

Relevance:

100.00%

Publisher:

Abstract:

The main purpose of this study is to analyse a complex system, the Western La Mancha aquifer, by means of a numerical model that simulates as accurately as possible the evolution of Aquifer System 23. The model is built in steady state with a rigorous configuration of the system from both the geological and the geometric point of view. It thus provides an initial, planned model and structure that will serve as a real basis for future transient-flow simulations, which in turn will be the foundation of subsequent exploitation and prediction analyses. The different situations that Aquifer 23 has experienced during the last decades, the solutions that have been given for them, and the possible action plans that could be implemented in the future to deal with changing climate and exploitation conditions can all be studied with this numerical simulation model.
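
As a building-block illustration (not the Aquifer 23 model itself, whose geometry and parameters are far richer), a steady-state head distribution can be computed with a finite-difference relaxation:

```python
import numpy as np

# Uniform grid, homogeneous aquifer, no recharge: Laplace's equation
# for hydraulic head with fixed-head boundaries (all values invented).
h = np.zeros((50, 50))
h[:, 0] = 100.0                      # west boundary head (m)
h[:, -1] = 90.0                      # east boundary head (m)
h[0, :] = np.linspace(100.0, 90.0, 50)
h[-1, :] = np.linspace(100.0, 90.0, 50)

for _ in range(5000):                # Jacobi relaxation
    interior = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                       + h[1:-1, :-2] + h[1:-1, 2:])
    if np.max(np.abs(interior - h[1:-1, 1:-1])) < 1e-6:
        h[1:-1, 1:-1] = interior
        break
    h[1:-1, 1:-1] = interior

print(h[25, ::10])                   # head profile along the mid row
```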

Relevance:

100.00%

Publisher:

Abstract:

Background: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not provide a clear path when one wants to shape a new command line tool from a prototype shell script. Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion: In this article we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data arising in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
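
The prototyping pattern is easy to emulate: single-task tools chained through files on disk. The tool names and options below are hypothetical placeholders, not actual MIA executables:

```python
import subprocess
import tempfile
from pathlib import Path

def run_pipeline(src: Path) -> Path:
    """Chain hypothetical single-task tools; the disk holds the
    intermediate results, so working memory is a non-issue."""
    work = Path(tempfile.mkdtemp())
    denoised = work / "denoised.png"
    segmented = work / "segmented.png"
    # Hypothetical tools taking string-based filter descriptions.
    subprocess.run(["denoise_tool", "-i", str(src), "-o", str(denoised),
                    "-f", "gauss:sigma=2"], check=True)
    subprocess.run(["segment_tool", "-i", str(denoised), "-o", str(segmented),
                    "-m", "watershed"], check=True)
    return segmented
```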

Relevance:

100.00%

Publisher:

Abstract:

Reducing the energy consumption for computation and cooling in servers is a major challenge considering the data center energy costs today. To ensure energy-efficient operation of servers in data centers, the relationship among computational power, temperature, leakage, and cooling power needs to be analyzed. By means of an innovative setup that enables monitoring and controlling the computing and cooling power consumption separately on a commercial enterprise server, this paper studies temperature-leakage-energy tradeoffs, obtaining an empirical model for the leakage component. Using this model, we design a controller that continuously seeks and settles at the optimal fan speed to minimize the energy consumption for a given workload. We run a customized dynamic load-synthesis tool to stress the system. Our proposed cooling controller achieves up to 9% energy savings and 30W reduction in peak power in comparison to the default cooling control scheme.
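
The tradeoff has a simple shape: leakage grows with temperature (so slow fans waste energy), while fan power grows roughly with the cube of speed (so fast fans waste energy too). A toy model with invented coefficients, not the paper's empirical one, shows the interior optimum such a controller seeks:

```python
import numpy as np

def cpu_temp(fan_rpm, workload_w):
    # Toy thermal model: hotter under load, cooler with more airflow.
    return 35.0 + workload_w * (0.5 - 0.00004 * fan_rpm)

def leakage_w(temp_c):
    # Exponential temperature dependence, typical of leakage models.
    return 5.0 * np.exp(0.05 * (temp_c - 40.0))

def fan_w(fan_rpm):
    return 60.0 * (fan_rpm / 10000.0) ** 3   # cubic fan law

workload = 120.0                             # W of compute power (fixed)
speeds = np.arange(2000, 10001, 100)
total = np.array([workload + leakage_w(cpu_temp(s, workload)) + fan_w(s)
                  for s in speeds])
best = speeds[np.argmin(total)]
print(f"optimal fan speed ~ {best} rpm, total power {total.min():.1f} W")
```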

Relevance:

100.00%

Publisher:

Abstract:

Examples of global solutions of the shell equations are presented, such as those based on the well-known Levy series expansion. Also discussed are some natural extensions of the Levy method, as well as the inherent limitations of these methods regarding the shell model assumptions, boundary conditions and geometric regularity. Finally, some additional open design questions are noted, mainly related to the simultaneous use in analysis of these global techniques and local methods (such as finite elements), to finding the optimal shell shape, and to determining the reinforcement layout.
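
For orientation, the Levy ansatz is easiest to state for the analogous rectangular-plate problem (the shell case adds curvature terms): with two opposite edges simply supported,

```latex
\[
  w(x,y) \;=\; \sum_{m=1}^{\infty} Y_m(y)\,\sin\frac{m\pi x}{a},
\]
% and the biharmonic plate equation D\,\nabla^4 w = q reduces, mode by
% mode, to an ordinary differential equation for Y_m:
\[
  Y_m''''(y) \;-\; 2\Big(\frac{m\pi}{a}\Big)^{2} Y_m''(y)
  \;+\; \Big(\frac{m\pi}{a}\Big)^{4} Y_m(y) \;=\; \frac{q_m(y)}{D}.
\]
```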

Relevance:

100.00%

Publisher:

Abstract:

Sight distance plays an important role in road traffic safety. Two types of Digital Elevation Models (DEMs) are utilized for the estimation of available sight distance on roads: Digital Terrain Models (DTMs) and Digital Surface Models (DSMs). DTMs, which represent the bare ground surface, are commonly used to determine available sight distance at the design stage. Additionally, the use of DSMs provides further information about elements by the roadside, such as trees, buildings, walls or even traffic signals, which may reduce the available sight distance. This document analyses the influence of three classes of DEMs on available sight distance estimation. For this purpose, several roads within the Region of Madrid (Spain) have been studied using software based on geographic information systems. The study evidences the influence of the DEM used on the outcome, as well as the pros and cons of each model.
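
The core computation can be sketched with a profile-based line-of-sight test (invented elevations, not the GIS software used in the study); a DSM profile that adds a roadside obstacle can only shorten the available sight distance relative to the DTM:

```python
import numpy as np

def visible(profile, eye_h=1.1, target_h=0.5):
    """profile: elevations (m) at equal spacing from the observer
    (index 0) to the target (last index)."""
    eye = profile[0] + eye_h
    target = profile[-1] + target_h
    n = len(profile)
    # Elevation of the straight sight line at each station.
    line = eye + (target - eye) * np.arange(n) / (n - 1)
    return bool(np.all(profile[1:-1] <= line[1:-1]))

dtm = np.array([100.0, 100.0, 100.1, 100.0, 99.9, 99.9])
dsm = dtm.copy()
dsm[3] += 3.0                       # a roadside tree present only in the DSM
print(visible(dtm), visible(dsm))   # True False
```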

Relevance:

100.00%

Publisher:

Abstract:

Precast concrete quality is determined by compressive strength tests on specimens after 28 days of curing, as established by the EHE-08 standard. However, precast plants also need to know when the concrete is ready to be processed (de-tensioned, cut, moved), so compressive strength tests are performed between 48 and 72 hours; this time is determined from prior experience and depends on the conditions of each plant. If the specimens have not reached the required value, usually because of changes in the weather conditions or in the materials used (for example the type of cement or aggregates), the usual solution is to let the material cure on the line for more hours until it reaches the strength required for processing. If it still does not reach that strength, which happens very occasionally, the cause is analysed, and the entire production of that day may be discarded if the problem is traced to a failure in the manufacturing line rather than a failure of the specimen. This quality control methodology, based on destructive techniques, therefore poses two kinds of problems: cost and representativeness. The most widely applied non-destructive methods for characterizing the curing process of concrete are ultrasonic measurements and temperature measurements, as reported in the literature. Different models exist to establish a relationship between temperature and curing time in order to estimate the compressive strength of the material, and between the ultrasonic propagation velocity and the strength. Although these relationships are not general, very good results have been obtained; one example is the temperature-based Maturity Method, which is part of the ASTM C 1074 standard, and commercial equipment (maturity meters) is available on the market to monitor concrete curing. Furthermore, it is possible to design inexpensive and robust systems to measure these two parameters, so a quality control methodology for curing that can be deployed in precast production plants is feasible. In this work, a methodology has been developed to estimate the compressive strength of concrete during curing; it consists of a procedure for quality control of the precast product and a wireless sensor network for measuring temperature and ultrasonic velocity. The quality control procedure predicts the compressive strength from one model based on the curing temperature and two models based on ultrasonic velocity: the equivalent-time method and the linear method. The wireless sensor system developed, WilTempUS, integrates temperature, relative humidity, and ultrasonic sensors in the same device. Experimental validation was carried out by monitoring specimens and precast production lines. The results obtained with the estimation models and the measurement system show that it is possible to predict the strength of precast concrete in the plant with errors comparable to those accepted by the standards for compressive strength tests on specimens.
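
A minimal sketch of the temperature-based model family (the Nurse-Saul maturity index of ASTM C 1074); the datum temperature and the calibration constants a and b below are invented and would come from each plant's own specimen tests:

```python
import numpy as np

def maturity(temps_c, dt_h=1.0, t0=-10.0):
    """Nurse-Saul maturity index in degC-hours: sum of (T - T0) * dt."""
    temps = np.asarray(temps_c, dtype=float)
    return float(np.sum(np.maximum(temps - t0, 0.0) * dt_h))

def strength_mpa(m, a=-25.0, b=8.0):
    # Common logarithmic form S = a + b ln(M); a, b fitted per mix.
    return a + b * np.log(m)

# Hourly curing temperatures over the first 48 hours (invented).
temps = np.concatenate([np.linspace(20.0, 55.0, 12), np.full(36, 55.0)])
m = maturity(temps)
print(f"M = {m:.0f} degC-h -> estimated strength {strength_mpa(m):.1f} MPa")
```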

Relevance:

100.00%

Publisher:

Abstract:

Civil buildings are not specifically designed to support blast loads, but it is important to take these potential scenarios into account because of their catastrophic effects on persons and structures. A practical way to consider explosions in reinforced concrete structures is necessary. With this objective, we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code for calculation, with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using continuum 3D element models subjected to blast loads. In these models, reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model and segregated rebars constrained within the continuum mesh. Regrettably, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements are developed, using shell and beam elements. In these models, concrete is represented using the CONCRETE_EC2 model and segregated rebars with offset formulation, and they are calibrated against the continuum element models from step one so as to obtain the same structural response: displacement, velocity, acceleration, damage and erosion. Third, models based on structural elements are used to develop large models of complete buildings, which are employed to study the global response of buildings subjected to blast loads and progressive collapse. This article describes the techniques needed to properly calibrate the models based on structural elements, using shell and beam elements, so that they provide results of sufficient accuracy at moderate computational cost.

Relevance:

100.00%

Publisher:

Abstract:

During pressure testing of a large distributor, the weld securing the bulkhead failed, which triggered large pressure transients and cavitation phenomena. The problem has been studied by explicit integration, using shell elements for the structural parts and acoustic elements for the water. Although the calculations had to be carried out in the absence of any information about the outcome of the accident, very good agreement was achieved between the predictions and the actual observations.