49 results for computational algebra
Abstract:
Multilayered, counterflow, parallel-plate heat exchangers are analyzed numerically and theoretically. The analysis, carried out for constant property fluids, considers a hydrodynamically developed laminar flow and neglects longitudinal conduction both in the fluid and in the plates. The solution for the temperature field involves eigenfunction expansions that can be solved in terms of Whittaker functions using standard symbolic algebra packages, leading to analytical expressions from which the eigenvalues are computed numerically. It is seen that the approximate solution obtained by retaining the first two modes in the eigenfunction expansion provides an accurate representation of the temperature away from the entrance regions, especially for long heat exchangers, thereby enabling simplified expressions for the wall and bulk temperatures, local heat-transfer rate, overall heat-transfer coefficient, and outlet bulk temperatures. The agreement between the numerical and theoretical results suggests the possibility of using the analytical solutions presented herein as benchmark problems for computational heat-transfer codes.
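The abstract does not give the eigencondition explicitly; the sketch below only illustrates the kind of computation involved, combining mpmath's Whittaker functions into a hypothetical transcendental determinant whose sign changes are refined into eigenvalues.

```python
import mpmath as mp

MU = mp.mpf('0.25')  # illustrative second Whittaker parameter (assumption)

def eigencondition(k):
    """Hypothetical characteristic determinant D(k); its zeros play the role of eigenvalues."""
    k = mp.mpf(k)
    return (mp.whitm(k, MU, 1) * mp.whitw(k, MU, 2)
            - mp.whitm(k, MU, 2) * mp.whitw(k, MU, 1))

# Coarse scan for sign changes, then refinement of each bracket with a bracketing solver.
grid = mp.linspace(1, 80, 160)
values = [eigencondition(k) for k in grid]
eigenvalues = []
for a, b, fa, fb in zip(grid[:-1], grid[1:], values[:-1], values[1:]):
    if mp.sign(fa) != mp.sign(fb):
        eigenvalues.append(mp.findroot(eigencondition, (a, b), solver='anderson'))

print("first modes found:", [mp.nstr(ev, 6) for ev in eigenvalues[:2]])
```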
Abstract:
This paper is devoted to the numerical analysis of two-dimensional bonded lap joints. For this purpose, the stress singularities occurring at the intersections of the adherend-adhesive interfaces with the free edges are first investigated, and a method for computing both the order and the intensity factor of these singularities is briefly described. After that, a simplified model, in which the adhesive domain is reduced to a line, is derived by using an asymptotic expansion method. Then, assuming that the debonding of the assembly is produced by macro-crack propagation in the adhesive, the associated energy release rate is computed. Finally, a homogenization technique is used in order to take into account a preliminary adhesive damage consisting of periodic micro-cracks. Some numerical results are presented.
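As a rough illustration of the energy-release-rate step only (not the authors' asymptotic or homogenized model), the sketch below applies the standard compliance method, G = (P^2 / 2b) dC/da, to a toy double-cantilever-beam stand-in with hypothetical dimensions and load.

```python
# Rough illustration only: energy release rate from a finite difference of the compliance.
E = 70e9        # Pa, adherend Young's modulus (hypothetical)
b = 0.025       # m, specimen width (hypothetical)
h = 0.003       # m, adherend thickness (hypothetical)
I = b * h**3 / 12
P = 100.0       # N, applied load (hypothetical)

def compliance(a):
    """Beam-theory compliance of a double cantilever beam with crack length a."""
    return 2.0 * a**3 / (3.0 * E * I)

a, da = 0.05, 1e-5  # crack length and finite-difference step (m)
dC_da = (compliance(a + da) - compliance(a - da)) / (2.0 * da)
G = P**2 / (2.0 * b) * dC_da
print(f"estimated energy release rate G = {G:.1f} J/m^2")
```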
Abstract:
This work describes an experience with a competence-based learning methodology in Linear Algebra for engineering students. The experience is based on the autonomous team work of the students. DERIVE tutorials for Linear Algebra topics are provided to the students, who work through the tutorials as homework. Afterwards, worksheets with exercises are prepared to be solved by the students, organized in teams, using the DERIVE functions previously defined in the tutorials. The students send the solutions of the proposed exercises to the instructor and fill in a survey with their impressions on the following items: ease of use of the files, usefulness of the tutorials for understanding the mathematical topics, and the time spent on the experience. As a final task, we have designed an activity aimed at the interested students: they prepare a project related to a real problem in Science and Engineering. The students are free to choose the topic and to develop it, but they have to use DERIVE in the solution and are guided by the instructor. Some examples of activities related to Orthogonal Transformations will be presented.
Abstract:
A toolbox is a set of procedures taking advantage of the computing power and graphical capabilities of a CAS. With these procedures the students can solve math problems, apply mathematics to engineering or simply reinforce the learning of certain mathematical concepts. From the point of view of their construction, we can consider two types of toolboxes: (i) the closed box, built by the teacher, in which the utility files are provided to the students together with the respective tutorials and several worksheets with proposed exercises and problems,
Abstract:
The study of granular systems is of great interest to many fields of science and technology. The packing of the particles affects the physical properties of the granular system. In particular, the crucial influence of the particle size distribution (PSD) on the random packing structure increases the interest in relating the two, either theoretically or by computational methods. A computational packing method is developed to estimate the void fraction corresponding to a fractal-like particle size distribution.
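A sketch of the general idea only (not the paper's algorithm): particle diameters are drawn from a fractal-like power-law size distribution and placed by random sequential addition in a unit box, after which the void fraction is read off directly. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Df, d_min, d_max = 2.5, 0.05, 0.2   # fractal dimension and size range (hypothetical)
box = 1.0                           # unit cube

def sample_diameters(n):
    # Inverse-transform sampling for a cumulative distribution N(>d) ~ d^-Df,
    # i.e. pdf ~ d^-(Df + 1), truncated to [d_min, d_max].
    u = rng.random(n)
    lo, hi = d_min**-Df, d_max**-Df
    return (lo - u * (lo - hi)) ** (-1.0 / Df)

centers, radii, failures = [], [], 0
for d in sample_diameters(5000):
    r = d / 2.0
    c = rng.uniform(r, box - r, size=3)                 # keep the sphere inside the box
    if centers:
        dist = np.linalg.norm(np.array(centers) - c, axis=1)
        if np.any(dist < np.array(radii) + r):          # overlap -> reject
            failures += 1
            if failures > 500:                          # crude saturation criterion
                break
            continue
    centers.append(c)
    radii.append(r)

solid = np.sum(4.0 / 3.0 * np.pi * np.array(radii) ** 3)
print(f"placed {len(radii)} particles, void fraction ~ {1.0 - solid / box**3:.3f}")
```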
Abstract:
Experimental diffusion data were critically assessed to develop the atomic mobility for the bcc phase of the Ti–Al–Fe system using the DICTRA software. Good agreement was obtained in comprehensive comparisons between the calculated and the experimental diffusion coefficients. The developed atomic mobility was then validated by successfully predicting the interdiffusion behavior observed in the diffusion-couple experiments available in the literature.
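A minimal sketch of the standard CALPHAD-type mobility formalism that DICTRA assessments rely on: a composition-dependent mobility parameter built from end-member values plus Redlich–Kister interaction terms, with the tracer diffusivity following as D* = R T M. Every parameter value below is a hypothetical placeholder, not an assessed Ti–Al–Fe quantity.

```python
import math

R = 8.314  # J/(mol K)

def phi_fe(x_ti, x_al, x_fe, T):
    """Hypothetical mobility parameter Phi_Fe = -Q + R*T*ln(M0), composition dependent."""
    phi_end = {'Ti': -250e3 + R * T * math.log(1e-4),
               'Al': -240e3 + R * T * math.log(2e-4),
               'Fe': -230e3 + R * T * math.log(1e-4)}
    phi = x_ti * phi_end['Ti'] + x_al * phi_end['Al'] + x_fe * phi_end['Fe']
    # zeroth-order Redlich-Kister binary interaction terms (hypothetical values)
    phi += x_ti * x_al * (-30e3)
    phi += x_ti * x_fe * (+10e3)
    return phi

def tracer_diffusivity_fe(x_ti, x_al, x_fe, T):
    M = math.exp(phi_fe(x_ti, x_al, x_fe, T) / (R * T)) / (R * T)  # atomic mobility
    return R * T * M                                               # D* = R T M

print(f"D*_Fe ~ {tracer_diffusivity_fe(0.8, 0.15, 0.05, 1273.0):.2e} m^2/s")
```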
Abstract:
Services in smart environments aim to increase the quality of people's lives. Among the most important issues when developing this kind of environment are testing and validating such services. These tasks usually imply high costs and annoying or unfeasible real-world testing. In such cases, artificial societies may be used to simulate the smart environment (i.e. physical environment, equipment and humans). With this aim, the CHROMUBE methodology guides test engineers when modeling human beings. Such models reproduce behaviors that are highly similar to the real ones. Originally, these models are based on automata whose transitions are governed by random variables. The automaton's structure and the probability distribution functions of each random variable are determined by a manual trial-and-error process. This paper presents an alternative extension of this methodology that avoids this manual process. It is based on learning human behavior patterns automatically from sensor data by using machine learning techniques. The presented approach has been tested on a real scenario, where this extension has produced highly accurate human behavior models.
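The abstract does not detail the learning step; as a small illustration of replacing hand-tuned automata with models learned from sensor data, the sketch below estimates the transition probabilities of a behavior automaton from a toy sequence of sensor-derived states (state names are invented).

```python
from collections import Counter, defaultdict

# Toy sensor-derived activity log (hypothetical states).
observed = ["sleep", "sleep", "kitchen", "kitchen", "living_room",
            "kitchen", "living_room", "sleep", "sleep", "kitchen"]

# Count observed transitions between consecutive states.
counts = defaultdict(Counter)
for current, nxt in zip(observed[:-1], observed[1:]):
    counts[current][nxt] += 1

# Normalize counts into transition probabilities of the behavior automaton.
transition_probs = {
    state: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
    for state, nexts in counts.items()
}
print(transition_probs)
```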
Abstract:
Nonlinear analysis tools for studying and characterizing the dynamics of physiological signals have gained popularity, mainly because tracking sudden alterations of the inherent complexity of biological processes might be an indicator of altered physiological states. Typically, in order to perform an analysis with such tools, the physiological variables that describe the biological process under study are used to reconstruct the underlying dynamics of the process. For that goal, a procedure called time-delay or uniform embedding is usually employed. Nonetheless, there is evidence of its inability to deal with non-stationary signals, such as those recorded from many physiological processes. To handle this drawback, this paper evaluates the utility of non-conventional time series reconstruction procedures based on non-uniform embedding, applying them to automatic pattern recognition tasks. The paper compares a state-of-the-art non-uniform approach with a novel scheme that fuses embedding and feature selection at once, searching for better reconstructions of the dynamics of the system. Moreover, results are also compared with two classic uniform embedding techniques. Thus, the goal is to compare uniform and non-uniform reconstruction techniques, including the one proposed in this work, for pattern recognition in biomedical signal processing tasks. Once the state space is reconstructed, the scheme characterizes it with three classic nonlinear dynamic features (Largest Lyapunov Exponent, Correlation Dimension and Recurrence Period Density Entropy), while classification is carried out by means of a simple k-nn classifier. In order to test its generalization capabilities, the approach was tested on three different physiological databases (Speech Pathologies, Epilepsy and Heart Murmurs). In terms of the accuracy obtained in automatically detecting the presence of pathologies, and for the three types of biosignals analyzed, the non-uniform techniques used in this work slightly outperformed the uniform methods, suggesting their usefulness for characterizing non-stationary biomedical signals in pattern recognition applications. Moreover, in view of the results obtained and its low computational load, the proposed technique appears well suited to the applications under study.
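A sketch of the reconstruction and classification stages only: uniform embedding repeats one delay across dimensions, while a non-uniform embedding picks an arbitrary set of delays. The delay choices, feature values and labels below are invented, not the paper's feature-selection scheme or data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def embed(x, delays):
    """Reconstruct the state space of signal x using the given list of delays."""
    max_d = max(delays)
    return np.column_stack([x[max_d - d: len(x) - d] for d in delays])

x = np.sin(0.1 * np.arange(1000)) + 0.05 * np.random.default_rng(0).standard_normal(1000)

uniform_states = embed(x, delays=[0, 5, 10])       # m = 3, tau = 5
nonuniform_states = embed(x, delays=[0, 3, 11])    # hand-picked, unequal delays
print(uniform_states.shape, nonuniform_states.shape)

# Toy pattern-recognition stage: k-NN on per-recording feature vectors, e.g.
# [largest Lyapunov exponent, correlation dimension, RPDE] computed elsewhere (values invented).
features = np.array([[0.12, 2.1, 0.60], [0.30, 2.8, 0.40],
                     [0.10, 2.0, 0.65], [0.28, 2.9, 0.45]])
labels = np.array([0, 1, 0, 1])                    # 0 = normal, 1 = pathological (toy)
clf = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(clf.predict([[0.11, 2.05, 0.62]]))
```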
Abstract:
Traumatic brain injury and spinal cord injury have recently been put under the spotlight as major causes of death and disability in the developed world. Despite the important ongoing experimental and modeling campaigns aimed at understanding the mechanics of tissue and cell damage typically observed in such events, the differentiated roles of strain, stress and their corresponding loading rates on the damage level itself remain unclear. More specifically, the direct relations between brain and spinal cord tissue or cell damage, and electrophysiological functions are still to be unraveled. Whereas mechanical modeling efforts are focusing mainly on stress distribution and mechanistic-based damage criteria, simulated function-based damage criteria are still missing. Here, we propose a new multiscale model of the myelinated axon that associates electrophysiological impairment with structural damage as a function of strain and strain rate. This multiscale approach provides a new framework for damage evaluation that directly relates neuron mechanics and electrophysiological properties, thus providing a link between mechanical trauma and subsequent functional deficits.
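Purely as a hypothetical illustration of the coupling idea (not the authors' multiscale axon model): a damage scalar that grows with strain and strain rate degrades a membrane parameter of a simple leaky integrate-and-fire neuron, reducing its firing rate and thereby linking mechanical loading to an electrophysiological deficit.

```python
import math

def damage(strain, strain_rate, eps0=0.2, rate0=50.0):
    """Hypothetical damage law in [0, 1), increasing with strain and strain rate."""
    return 1.0 - math.exp(-(strain / eps0) ** 2 - (strain_rate / rate0))

def firing_rate(damage_level, I=2.0, T=500.0, dt=0.05):
    """Leaky integrate-and-fire neuron (ms, mV); damage lowers the input resistance."""
    tau, R0, v_rest, v_th, v_reset = 10.0, 15.0, -65.0, -50.0, -65.0
    R = R0 * (1.0 - damage_level)          # degraded (leakier) membrane
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (-(v - v_rest) + R * I)
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes / (T / 1000.0)           # spikes per second

for strain, rate in [(0.0, 0.0), (0.1, 10.0), (0.3, 100.0)]:
    d = damage(strain, rate)
    print(f"strain={strain:.2f}, rate={rate:5.1f} 1/s -> damage={d:.2f}, "
          f"firing rate={firing_rate(d):.1f} Hz")
```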
Abstract:
When applying computational mathematics in practical applications, even though one may be dealing with a problem that can be solved algorithmically, and even though one has good algorithms to approach the solution, it can happen, and often is the case, that the problem has to be reformulated and analyzed from a different computational point of view. This is the case in the development of approximate algorithms. This paper falls within the research area of approximate algebraic geometry and commutative algebra and, more precisely, addresses the problem of approximate parametrization.
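A small illustration of the underlying notion (not the paper's method): when the input polynomial is a small perturbation of a rational curve, the exact parametrization of the nearby curve leaves only an O(eps) residual on the perturbed one, which is what makes it an "approximate" parametrization.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
eps = sp.Rational(1, 1000)

f_exact = x**2 + y**2 - 1              # rational (exactly parametrizable) circle
f_perturbed = f_exact + eps * x * y    # perturbed input curve

# Standard rational parametrization of the unperturbed circle.
px = (1 - t**2) / (1 + t**2)
py = 2 * t / (1 + t**2)

residual = sp.simplify(f_perturbed.subs({x: px, y: py}))
print(residual)                                     # O(eps) residual on the perturbed curve
print(float(residual.subs(t, sp.Rational(1, 2))))   # small numeric error at a sample parameter
```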
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. In robotics a similar role has been played by modules that fit point cloud data to the superquadric family of shapes and its various extensions. We developed a model of shape tuning in AIP based on cosine tuning to superquadric parameters. However, the model did not fit the data well, and we also found that it was difficult to accurately reproduce these parameters using neural networks with the appropriate inputs (modelled on the caudal intraparietal area, CIP). The latter difficulty was related to the fact that there are large discontinuities in the superquadric parameters between very similar shapes. To address these limitations we adopted an alternative shape parameterization based on an Isomap nonlinear dimension reduction. The Isomap was built using gradients and curvatures of object surface depth. This alternative parameterization was low-dimensional (like superquadrics), but data-driven (similar to an alternative clustering approach that is also sometimes used in robotics) and lacked large discontinuities. Isomaps with 16 or more dimensions reproduced the AIP data fairly well. Moreover, we found that the Isomap parameters could be approximated from CIP-like input much more accurately than the superquadric parameters. We conclude that Isomaps, or perhaps alternative dimension reductions of CIP signals, provide a promising model of AIP tuning. We have now started to integrate our model with a robot hand, to explore the efficacy of Isomap shape reductions in grasp planning. Future work will consider dynamics of spike responses and integration with related visual and motor area models.
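A minimal sketch of the parameterization step with scikit-learn's Isomap; the input feature matrix is a random stand-in for the per-object depth-gradient and curvature descriptors, not the paper's data.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Hypothetical stand-in for per-object surface descriptors
# (e.g. binned depth gradients and curvatures): 200 objects x 60 features.
shape_features = rng.standard_normal((200, 60))

embedding = Isomap(n_neighbors=10, n_components=16)
shape_params = embedding.fit_transform(shape_features)
print(shape_params.shape)   # (200, 16): low-dimensional, data-driven shape parameters
```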
Abstract:
Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
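A sketch of the general approach using rdflib; the vocabulary namespace, resource names and software versions below are invented placeholders, not the authors' four domain ontologies.

```python
from rdflib import Graph, Namespace, Literal, RDF

WF = Namespace("http://example.org/workflow-environment#")   # hypothetical vocabulary
g = Graph()
g.bind("wf", WF)

# Describe the execution environment as RDF triples so that an equivalent
# virtual machine can later be reconstructed on another platform.
vm = WF["vm-image-01"]
g.add((vm, RDF.type, WF.VirtualMachineImage))
g.add((vm, WF.operatingSystem, Literal("Ubuntu 14.04")))
g.add((vm, WF.hasSoftware, WF["workflow-engine-1.0"]))       # workflow management system
g.add((vm, WF.hasSoftware, WF["job-scheduler-2.3"]))         # execution dependency

print(g.serialize(format="turtle"))
```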
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and then the errors between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit-rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 while being more computationally efficient.
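A simplified sketch of the core mechanism (not the actual LHE codec): each pixel is predicted from the previously coded one and the prediction error is mapped to the nearest logarithmically spaced "hop", so small errors get fine steps and large errors coarse ones. The hop table and scaling are illustrative assumptions.

```python
import numpy as np

# Hypothetical logarithmic hop levels for the prediction error (8-bit luminance).
HOPS = np.array([-64, -16, -4, 0, 4, 16, 64])

def encode_row(row):
    codes, prediction = [], int(row[0])
    for pixel in row[1:].astype(int):
        error = pixel - prediction
        code = int(np.argmin(np.abs(HOPS - error)))   # nearest logarithmic hop
        codes.append(code)
        # Update the prediction exactly as a decoder would, to stay in sync.
        prediction = int(np.clip(prediction + HOPS[code], 0, 255))
    return codes

row = np.array([100, 102, 103, 120, 200, 180, 90], dtype=np.uint8)
print(encode_row(row))
```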
Abstract:
The initial step in most facial age estimation systems consists of accurately aligning a model to the output of a face detector (e.g. an Active Appearance Model). This fitting process is very expensive in terms of computational resources and prone to getting stuck in local minima, which makes it impractical for analysing faces on resource-limited computing devices. In this paper we build a face age regressor that is able to work directly on faces cropped by a state-of-the-art face detector. Our procedure uses K nearest neighbours (K-NN) regression with a metric based on a properly tuned Fisher Linear Discriminant Analysis (LDA) projection matrix. On FG-NET we achieve a state-of-the-art Mean Absolute Error (MAE) of 5.72 years with manually aligned faces. Using face images cropped by a face detector we obtain an MAE of 6.87 years on the same database. Moreover, most of the algorithms presented in the literature have been evaluated in single-database experiments and therefore report optimistically biased results. In our cross-database experiments we obtain an MAE of roughly 12 years, which would be the expected performance in a real-world application.
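A minimal sketch of this kind of regression scheme on synthetic data (not the paper's tuned pipeline): ages are binned so that an LDA projection can be fitted, and K-NN regression of the continuous age is then performed in the projected (LDA-metric) space. Feature dimensions, bin width and neighbour count are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n, d = 500, 100
ages = rng.uniform(0, 70, n)
features = rng.standard_normal((n, d)) + ages[:, None] * 0.05   # toy face descriptors

age_bins = np.digitize(ages, bins=np.arange(5, 70, 5))           # discrete classes for LDA
lda = LinearDiscriminantAnalysis(n_components=10).fit(features, age_bins)

projected = lda.transform(features)                              # LDA-based metric space
knn = KNeighborsRegressor(n_neighbors=15).fit(projected, ages)

test_face = rng.standard_normal((1, d)) + 40 * 0.05
print(f"predicted age: {knn.predict(lda.transform(test_face))[0]:.1f} years")
```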