801 results for Computational Thinking
Abstract:
In a crosswind scenario, the risk of high-speed trains overturning increases when they run on viaducts, since the aerodynamic loads are higher than on the ground. To increase safety, vehicles are sheltered by fences installed on the viaduct to reduce the loads experienced by the train. Windbreaks can be designed with different heights, and with or without eaves on the top. In this paper, a parametric study with a total of 12 fence designs was carried out using a two-dimensional model of a train standing on a viaduct. To assess the relative effectiveness of the sheltering devices, tests were done in a wind tunnel with a scaled model at a Reynolds number of 1 × 10^5, and the train's aerodynamic coefficients were measured. Experimental results were compared with those predicted by Unsteady Reynolds-averaged Navier-Stokes (URANS) flow simulations, showing that the computational model satisfactorily predicts the trend of the aerodynamic coefficients. In a second set of tests, the Reynolds number was increased to 12 × 10^6 (at a free-stream air velocity of 30 m/s) in order to simulate strong wind conditions. The aerodynamic coefficients showed a similar trend for both Reynolds numbers; however, their numerical values changed enough to indicate that simulations at the lower Reynolds number do not provide all the required information. Furthermore, the variation of the coefficients in the simulations made it possible to propose an explanation of how the fences modify the flow around the vehicle, clarifying why increasing the fence height reduced all the coefficients whereas adding an eave mainly affected the lift force coefficient. Finally, by analysing the time signals it was possible to clarify the influence of the Reynolds number on the peak-to-peak amplitude, the time period and the Strouhal number.
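The two dimensionless numbers the abstract relies on are straightforward to compute; a minimal sketch follows, where the characteristic length L = 6 m and the shedding frequency are illustrative assumptions, not values from the paper (only the 30 m/s free-stream velocity appears in the abstract).

```python
# Sketch of the dimensionless numbers discussed above. L and f below are
# invented for illustration; only U = 30 m/s comes from the abstract.
NU_AIR = 1.5e-5  # kinematic viscosity of air at ~20 C [m^2/s]

def reynolds(u, length, nu=NU_AIR):
    """Reynolds number Re = U * L / nu."""
    return u * length / nu

def strouhal(freq, length, u):
    """Strouhal number St = f * L / U."""
    return freq * length / u

# Assumed characteristic length of 6 m at the abstract's 30 m/s free stream.
re = reynolds(30.0, 6.0)
print(f"Re = {re:.2e}")  # on the order of 10^7, i.e. the strong-wind regime
```

With these assumed values the Reynolds number lands at 1.2 × 10^7, the same order as the 12 × 10^6 regime tested in the paper.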
Abstract:
In the intricate maturation process of [NiFe]-hydrogenases, the Fe(CN)2CO cofactor is first assembled in a HypCD complex with iron coordinated by cysteines from both proteins and CO is added after ligation of cyanides. The small accessory protein HypC is known to play a role in delivering the cofactor needed for assembling the hydrogenase active site. However, the chemical nature of the Fe(CN)2CO moiety and the stability of the cofactor–HypC complex are open questions. In this work, we address geometries, properties, and the nature of bonding of all chemical species involved in formation and binding of the cofactor by means of quantum calculations. We also study the influence of environmental effects and binding to cysteines on vibrational frequencies of stretching modes of CO and CN used to detect the presence of Fe(CN)2CO. Carbon monoxide is found to be much more sensitive to sulfur binding and the polarity of the medium than cyanides. The stability of the HypC–cofactor complex is analyzed by means of molecular dynamics simulation of cofactor-free and cofactor-bound forms of HypC. The results show that HypC is stable enough to carry the cofactor, but since its binding cysteine is located at the N-terminal unstructured tail, it presents large motions in solution, which suggests the need for a guiding interaction to achieve delivery of the cofactor.
Abstract:
Computational homogenization, by means of finite element analysis of a representative volume element of the microstructure, is used to simulate the deformation of nanostructured Ti. The behavior of each grain is taken into account using a single-crystal elasto-viscoplastic model which includes the microscopic mechanisms of plastic deformation by slip along basal, prismatic and pyramidal systems. Two different representations of the polycrystal were used. In the first, each grain was modeled with one cubic finite element, while in the second many cubic elements were used to represent each grain, leading to a model which includes the effect of grain shape and size in a limited number of grains due to the computational cost. Both representations were used to simulate the tensile deformation of nanostructured Ti processed by ECAP-C as well as the drawing process of nanostructured Ti billets. It was found that the first representation, based on one finite element per grain, led to a stiffer response in tension and was not able to predict the texture evolution during drawing because the strain gradient within each grain could not be captured. On the contrary, the second representation of the polycrystal microstructure, with many finite elements per grain, was able to predict accurately the deformation of nanostructured Ti.
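The averaging step at the core of computational homogenization can be sketched in a few lines: the macroscopic stress is the volume-weighted average of the per-element microscopic stresses over the RVE. The element data below are synthetic placeholders, not results from the paper.

```python
import numpy as np

# Minimal sketch of RVE volume averaging in computational homogenization:
# sigma_macro = (1/V) * sum_e sigma_e * V_e, with stresses as Voigt 6-vectors.
def macro_stress(element_stresses, element_volumes):
    v = np.asarray(element_volumes, dtype=float)
    s = np.asarray(element_stresses, dtype=float)
    return (s * v[:, None]).sum(axis=0) / v.sum()

# Two equal-volume "grains": 100 MPa and 300 MPa axial stress (invented data).
sigma = macro_stress([[100, 0, 0, 0, 0, 0],
                      [300, 0, 0, 0, 0, 0]], [1.0, 1.0])
print(sigma[0])  # 200.0
```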
Abstract:
The potential shown by Lean in different domains has aroused interest in the software industry. However, it remains unclear how Lean can be effectively applied in a domain such as software development that is fundamentally different from manufacturing. This study explores how Lean principles are implemented in software development companies and the challenges that arise when applying Lean Software Development. For that purpose, a case study was conducted at Ericsson R&D Finland, which successfully adopted Scrum in 2009 and subsequently started a comprehensive transition to Lean in 2010. Focus groups were conducted with company representatives to help devise a questionnaire supporting the creation of a Lean mindset in the company (Team Amplifier). Afterwards, the questionnaire was used in 16 teams based in Finland, Hungary and China to evaluate the status of the transformation. By using Lean thinking, Ericsson R&D Finland has made important improvements to the quality of its products, customer satisfaction and transparency within the organization. Moreover, build times have been reduced more than tenfold and the number of commits per day has increased roughly five times. The study makes two main contributions to research. First, the main factors that have enabled Ericsson R&D's achievements are analysed. Elements such as the 'network of product owners', 'continuous integration', 'work in progress limits' and 'communities of practice' have been identified as being of fundamental importance. Second, three categories of challenges in using Lean Software Development were identified: 'achieving flow', 'transparency' and 'creating a learning culture'.
Abstract:
In this paper, the foundations of the beta method, widely used in today's ship appendage extrapolations, are explored. The present work aims to validate the beta method using experimental and computational tools. The ship used is a rounded-bow tugboat with two significant appendages, namely a midship protective structure for the propulsion system and a stern keel. The experimental and computational data were obtained through towing tank trials and a RANSE CFD code, respectively.
Abstract:
Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on four domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
Abstract:
This paper is devoted to the numerical analysis of bidimensional bonded lap joints. For this purpose, the stress singularities occurring at the intersections of the adherend-adhesive interfaces with the free edges are first investigated and a method for computing both the order and the intensity factor of these singularities is described briefly. After that, a simplified model, in which the adhesive domain is reduced to a line, is derived by using an asymptotic expansion method. Then, assuming that the assembly debonding is produced by a macro-crack propagation in the adhesive, the associated energy release rate is computed. Finally, a homogenization technique is used in order to take into account a preliminary adhesive damage consisting of periodic micro-cracks. Some numerical results are presented.
Abstract:
The study developed in this thesis focuses on the numerical modelling of the propagation phase of fast landslides using the Smoothed Particle Hydrodynamics (SPH) meshless method, which has the great advantage of handling large-deformation problems while avoiding the expensive remeshing operations required by mesh-based methods such as the Finite Element Method. Special attention is given to the role played by rheology and pore water pressure during these natural hazards. The mathematical framework is based on the v - pw Biot-Zienkiewicz formulation, which represents the behaviour, formulated in terms of soil skeleton velocity and pore water pressure, of the mixture of solid particles and pore water in a saturated medium. The governing equations are: • the mass balance equation for the pore water phase, • the momentum balance equations for the pore water phase and the mixture, • the constitutive equation and • a kinematic equation. Landslides, due to their shape and geometrical properties, have small depths in comparison with their length or width; the mathematical model can therefore be simplified by depth-integrating the equations, switching from a 3D to a 2D model, which presents an excellent combination of accuracy, simplicity and computational cost. The proposed model differs from previous depth-integrated models by including a sub-model able to provide information on the pore water pressure profile at each computational step of the landslide's propagation: the evolution of the pore water pressure profiles is solved numerically through a set of explicit 1D Finite Difference schemes, one at each SPH node. This new approach is able to take into account the variation of the pore water pressure due to changes of height, vertical consolidation or changes of total stress. Concerning the constitutive behaviour, one of the main issues when modelling fast landslides is the difficulty of simulating, with a single constitutive or rheological model, the transition from the triggering phase, where the landslide behaves like a solid, to the propagation phase, where it behaves in a fluid-like manner. In this thesis, a new rheological model is proposed, based on the Perzyna viscoplastic model, with viscoplasticity as the key to bridging the triggering and propagation phases. In order to validate the mathematical model and the numerical approach, benchmarks and laboratory experiments are reproduced and compared to analytical solutions where possible. Finally, applications to real cases are studied, with particular attention paid to the Aberfan flowslide of 1966, showing that the model accurately and successfully simulates these kinds of natural hazards.
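The explicit 1D finite-difference update for the pore-pressure profile at an SPH node can be sketched as a Terzaghi-type vertical consolidation step (dp/dt = cv * d²p/dz²). This is a generic sketch, not the thesis's actual sub-model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of an explicit FTCS update for a vertical pore-pressure profile
# (Terzaghi-type consolidation): drained top, impervious base. Illustrative
# only; boundary conditions and parameters are assumptions, not the thesis's.
def consolidate(p, cv, dz, dt, steps):
    r = cv * dt / dz**2
    assert r <= 0.5, "explicit scheme unstable: require cv*dt/dz^2 <= 0.5"
    p = p.copy()
    for _ in range(steps):
        p[1:-1] += r * (p[2:] - 2.0 * p[1:-1] + p[:-2])
        p[0] = 0.0     # drained surface: excess pressure dissipates
        p[-1] = p[-2]  # zero-flux (impervious) base
    return p

# Uniform 100 kPa excess pore pressure over a 1 m column (21 nodes, assumed).
p0 = np.full(21, 100.0)
p = consolidate(p0, cv=1e-3, dz=0.05, dt=1.0, steps=500)
```

The stability bound r ≤ 0.5 is the usual constraint on explicit diffusion schemes; an implicit scheme would remove it at the cost of a tridiagonal solve per node.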
Abstract:
This paper focuses on examples of educational tools for learning chemistry, aimed at engineering students and built around different daily-life cases. These tools were developed over the past few years to enhance the active role of students. They refer to cases about mineral water, medicines, dentifrices and informative panels about solar power, where an adequate quantitative treatment through stoichiometric calculations allows the interpretation of the data and values announced by manufacturers. These cases were developed in the context of an inquiry-guided instruction model. By bringing tangible chemistry examples into the classroom, we provide an opportunity for engineering students to apply this science to familiar products, in the hope that they will appreciate chemistry more, will be motivated to study concepts in greater detail, and will connect the relevance of chemistry to everyday life.
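The kind of quantitative check described above can be sketched for the mineral water case: converting a declared calcium content into molar units so it can be compared with other label data. The 80 mg/L figure is an invented example, not a value from the paper.

```python
# Sketch of a stoichiometric label check: convert a declared calcium
# concentration from mg/L to mmol/L. The label value is invented.
M_CA = 40.08  # molar mass of calcium [g/mol]

def mg_per_l_to_mmol_per_l(mg_per_l, molar_mass):
    # mg/L divided by g/mol gives mmol/L directly (mg/g = mmol/mol).
    return mg_per_l / molar_mass

ca = mg_per_l_to_mmol_per_l(80.0, M_CA)
print(f"{ca:.2f} mmol/L")  # about 2.00 mmol/L
```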
Abstract:
The study of granular systems is of great interest to many fields of science and technology. The packing of particles affects the physical properties of the granular system. In particular, the crucial influence of the particle size distribution (PSD) on the random packing structure increases the interest in relating the two, either theoretically or by computational methods. A packing computational method is developed in order to estimate the void fraction corresponding to a fractal-like particle size distribution.
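A deliberately simplified 2D toy of such a void-fraction estimate can be sketched with random sequential addition: disks with a power-law-biased (fractal-like) size distribution are placed without overlap in a unit square, and the void fraction is one minus the covered area. This is a stand-in for the paper's method, whose details the abstract does not give; all parameters are assumptions.

```python
import math
import random

# Toy RSA packing: place non-overlapping disks with a size distribution
# biased toward small diameters, then report the void (uncovered) fraction.
def rsa_void_fraction(n_disks=200, d_min=0.01, d_max=0.05, seed=1):
    random.seed(seed)
    placed, area, attempts = [], 0.0, 0
    while len(placed) < n_disks and attempts < 100_000:
        attempts += 1
        # Crude fractal-like PSD: log-uniform sampling biased toward d_min.
        d = d_min * (d_max / d_min) ** (random.random() ** 2)
        r = d / 2.0
        x = random.uniform(r, 1.0 - r)
        y = random.uniform(r, 1.0 - r)
        if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
            area += math.pi * r * r
    return 1.0 - area

vf = rsa_void_fraction()
print(round(vf, 3))
```

Real packing codes refine this in many ways (3D geometry, periodic boundaries, densification moves); the sketch only shows where the PSD enters the estimate.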
Abstract:
Experimental diffusion data were critically assessed to develop the atomic mobility for the bcc phase of the Ti–Al–Fe system using the DICTRA software. Good agreement was obtained from comprehensive comparisons between the calculated and the experimental diffusion coefficients. The developed atomic mobility was then validated by its accurate prediction of the interdiffusion behavior observed in diffusion-couple experiments available in the literature.
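The temperature dependence underlying such assessments is an Arrhenius form; a minimal sketch follows, where the prefactor and activation energy are illustrative placeholders, not the assessed Ti–Al–Fe parameters.

```python
import math

# Sketch of an Arrhenius diffusivity, D = D0 * exp(-Q / (R*T)).
# D0 and Q below are invented placeholders for illustration only.
R = 8.314  # gas constant [J/(mol*K)]

def diffusivity(d0, q, temperature):
    return d0 * math.exp(-q / (R * temperature))

d = diffusivity(d0=1e-4, q=150e3, temperature=1200.0)  # [m^2/s], assumed values
```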
Abstract:
Services in smart environments aim to increase the quality of people's lives. One of the most important issues when developing this kind of environment is testing and validating such services. These tasks usually imply high costs and annoying or unfeasible real-world testing. In such cases, artificial societies may be used to simulate the smart environment (i.e. physical environment, equipment and humans). With this aim, the CHROMUBE methodology guides test engineers when modeling human beings. Such models reproduce behaviors which are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables. The automaton's structure and the probability distribution functions of each random variable are determined by a manual trial-and-error process. In this paper, an alternative extension of this methodology is presented which avoids this manual process. It is based on learning human behavior patterns automatically from sensor data using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has produced highly accurate human behavior models.
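The simplest instance of learning such an automaton from sensor data is estimating transition probabilities from an observed state sequence; the sketch below shows that step with invented state names, and is not the paper's actual learning pipeline.

```python
from collections import Counter, defaultdict

# Sketch: estimate the transition probabilities of a behaviour automaton
# from a sequence of sensor-detected states (a first-order Markov estimate).
def transition_probs(events):
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

# Invented sensor log of room-level states.
log = ["sleep", "kitchen", "sleep", "kitchen", "out", "kitchen", "sleep"]
p = transition_probs(log)
print(p["kitchen"])  # relative frequencies of what follows "kitchen"
```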
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To overcome this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme which fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was tested with three different physiological databases (Speech Pathologies, Epilepsy and Heart Murmurs).
In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears applicable to the problems under study.
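The classic uniform time-delay embedding that the paper takes as a baseline stacks delayed copies of the signal, x(t), x(t + τ), ..., x(t + (m-1)τ), into state vectors. A minimal sketch follows; the test signal and the parameter values m = 3, τ = 5 are illustrative, not the paper's settings.

```python
import numpy as np

# Sketch of uniform time-delay embedding: each row of the output is one
# reconstructed state vector [x[i], x[i+tau], ..., x[i+(m-1)*tau]].
def delay_embed(x, m, tau):
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau  # number of complete state vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # illustrative signal
emb = delay_embed(x, m=3, tau=5)
print(emb.shape)  # (190, 3)
```

Non-uniform embedding generalizes this by allowing a different lag per coordinate, which is where the feature-selection fusion described above enters.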
Abstract:
Traumatic brain injury and spinal cord injury have recently been put under the spotlight as major causes of death and disability in the developed world. Despite the important ongoing experimental and modeling campaigns aimed at understanding the mechanics of tissue and cell damage typically observed in such events, the differentiated roles of strain, stress and their corresponding loading rates on the damage level itself remain unclear. More specifically, the direct relations between brain and spinal cord tissue or cell damage and electrophysiological function are still to be unraveled. Whereas mechanical modeling efforts focus mainly on stress distribution and mechanistic damage criteria, simulated function-based damage criteria are still missing. Here, we propose a new multiscale model of the myelinated axon associating electrophysiological impairment with structural damage as a function of strain and strain rate. This multiscale approach provides a new framework for damage evaluation directly relating neuron mechanics and electrophysiological properties, thus providing a link between mechanical trauma and subsequent functional deficits.