932 results for Iteration graphics
Abstract:
This paper analyzes one of the main problems currently faced by the cold rolling industry: mechanical vibration. Factors such as higher strip speeds, adopted to increase productivity, and ever-thinner gauges mean that vibration is present at all times during rolling. These market requirements also drive technological development in the industry and bring the challenges of operating new, more modern and more powerful equipment. The initial goal is to analyze the forces that cause vibration in a four-high rolling mill with two stands, identifying the origins of these vibrational forces so that they can be eliminated, or at least have their intensity controlled, in order to prevent damage to the rolling mill and ensure product quality for the customer. To this end, instruments are used to record and store the vibrations that occur during the rolling process. With these data it is possible to analyze the characteristics of the vibrations and act to eliminate them. The work ultimately aims to demonstrate how important the engineer's critical eye is when analyzing the vibration graphs in combination with calculations of the natural vibration and gear-meshing frequencies of the mill's key components. With these two tools at hand, it becomes possible to increase the productivity of the rolling mill and act preventively in maintenance, thereby reducing downtime and increasing performance and efficiency.
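As a rough illustration of the kind of calculation this abstract alludes to, here is a minimal sketch assuming a single-degree-of-freedom spring-mass idealization of a mill stand; the stiffness and mass values are invented placeholders, not mill data:

```python
import math

# Hedged illustration (not the thesis' model): single-degree-of-freedom
# estimate of a natural frequency, f_n = sqrt(k/m) / (2*pi).
k = 2.5e9   # assumed equivalent stand stiffness [N/m]
m = 1.2e4   # assumed equivalent roll/chock mass [kg]

f_n = math.sqrt(k / m) / (2 * math.pi)
print(f"estimated natural frequency: {f_n:.1f} Hz")
```

Comparing such estimates against peaks in the recorded vibration spectra is the standard way to tell structural resonances from forced (e.g., gear-mesh) excitations.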
Abstract:
The theme of learning to write in early childhood education has gained space in academic work concerned with analyzing and understanding this process in meaningful contexts, rather than through the mechanics of learning letters. The general objective of this work is thus to investigate whether and how the teaching of oral and written expression in the final year of early childhood education promotes the development of the symbol (sign) in children, which is necessary for learning writing from the inside out. The questions that guided the study were the following: is the teaching of writing in the final year of early childhood education organized so as to promote the development of the symbol (sign) in children? Which graphic representations made by the children demarcate the stage of development of their writing, and how? To carry out the study, bibliographic research was performed, using the perspective of socio-historical psychology as theoretical support, especially authors such as Vygotsky (1984), Luria (2001), Oliveira (1995) and Mello (2005). In addition, empirical research with a qualitative approach was performed, in keeping with its purpose. Data were collected through six instruments: systematic, direct classroom observation; audio recording; field-diary records; the school's annual teaching plan; the weekly lesson plan; and material produced by the children as a result of writing activities. Through the analysis of these data, we note that writing has been worked from the outside in, as an imposition, a practice that contradicts Vygotsky's theory, since writing has been approached mechanically, in non-meaningful contexts, with practices focused on tracing letters and copying words, to the exclusion of the notion of symbol. This result points to the urgency of changes in the teaching of writing in early childhood education, without which the search for a conscious pedagogy of writing becomes inglorious and...
Abstract:
The broad goals of verifiable visualization rely on correct algorithmic implementations. We extend a framework for verification of isosurfacing implementations to check topological properties. Specifically, we use stratified Morse theory and digital topology to design algorithms which verify topological invariants. Our extended framework reveals unexpected behavior and coding mistakes in popular publicly available isosurface codes.
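For readers unfamiliar with invariant-based verification, the following is a minimal sketch of one such check, not the paper's stratified-Morse-theory framework: the Euler characteristic chi = V - E + F of an extracted isosurface must equal 2 - 2g for a closed orientable surface of genus g, so a violation flags a coding mistake.

```python
# Minimal illustration of a topological sanity check on a triangle mesh.

def euler_characteristic(vertices, faces):
    """vertices: list of 3D points; faces: list of vertex-index triples."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))  # undirected edge, counted once
    return len(vertices) - len(edges) + len(faces)

# A tetrahedron is topologically a sphere, so chi should be 2.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
assert euler_characteristic(verts, faces) == 2
print("chi =", euler_characteristic(verts, faces))
```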
Abstract:
Scorpion toxins targeting voltage-gated sodium (NaV) channels are peptides that comprise 60-76 amino acid residues cross-linked by four disulfide bridges. These toxins can be divided into two groups (α and β toxins) according to their binding properties and mode of action. The scorpion α-toxin Ts2, previously described as a β-toxin, was purified from the venom of Tityus serrulatus, the most dangerous Brazilian scorpion. In this study, seven mammalian NaV channel isoforms (rNaV1.2, rNaV1.3, rNaV1.4, hNaV1.5, mNaV1.6, rNaV1.7 and rNaV1.8) and one insect NaV channel isoform (DmNaV1) were used to investigate the subtype specificity and selectivity of Ts2. Electrophysiology assays showed that Ts2 inhibits rapid inactivation of NaV1.2, NaV1.3, NaV1.5, NaV1.6 and NaV1.7, but does not affect NaV1.4, NaV1.8 or DmNaV1. Interestingly, Ts2 significantly shifts the voltage dependence of activation of NaV1.3 channels. The 3D structure of this toxin was modeled based on the high sequence identity (72%) shared with Ts1, another T. serrulatus toxin. The overall fold of the Ts2 model consists of three β-strands and one α-helix, arranged in a triangular shape forming a cysteine-stabilized α-helix/β-sheet (CSαβ) motif.
Abstract:
Electrical impedance tomography (EIT) is an imaging technique that attempts to reconstruct the impedance distribution inside an object from the impedances measured between electrodes placed on the object's surface. The EIT reconstruction problem can be approached as a nonlinear, nonconvex optimization problem in which one tries to maximize the matching between a simulated impedance problem and the observed data. This nonlinear optimization problem is often ill-posed and not well suited to methods that evaluate derivatives of the objective function. It may be approached by simulated annealing (SA), but at a large computational cost, since evaluating the objective function requires a full simulation of the impedance problem at each iteration. A variation of SA is proposed in which the objective function is evaluated only partially, while ensuring bounds on the behavior of the modified algorithm.
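A hedged sketch of the partial-evaluation idea this abstract describes (generic simulated annealing, not the authors' algorithm; the toy objective, step sizes and thresholds are invented): when the cost is a sum of nonnegative terms, the running partial sum is a lower bound, so a candidate can be provably rejected before the full evaluation completes.

```python
import math, random

def partial_cost(x, terms, reject_above):
    # Accumulate nonnegative terms; stop as soon as the partial sum (a
    # lower bound on the full cost) already guarantees rejection.
    total = 0.0
    for term in terms:
        total += term(x)
        if total > reject_above:
            break
    return total

def anneal(x0, terms, t0=1.0, cooling=0.999, steps=20000):
    x = list(x0)
    fx = partial_cost(x, terms, float("inf"))
    t = t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.1) for xi in x]
        margin = -t * math.log(1e-12)  # beyond this, acceptance is ~impossible
        fc = partial_cost(cand, terms, fx + margin)
        # Only fully evaluated candidates (fc <= fx + margin) can be accepted,
        # so fx never stores a truncated value.
        if fc <= fx + margin and (
            fc <= fx or random.random() < math.exp((fx - fc) / t)
        ):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy stand-in for the expensive EIT mismatch: a sum of squared residuals.
terms = [lambda x, k=k: (x[k % 2] - k) ** 2 for k in range(4)]
print(anneal([0.0, 0.0], terms))
```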
Abstract:
Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows for explicit control of the subjective quality criteria on the output quad mesh at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model that captures the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows for additional extraordinary vertex specification and explicit feature alignment to capture high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements.
Abstract:
This paper studies the average-cost control problem for discrete-time Markov Decision Processes (MDPs for short) with general state space, Feller transition probabilities, and possibly non-compact control constraint sets A(x). Two hypotheses are considered: either the cost function c is strictly unbounded, or the multifunctions A_r(x) = {a ∈ A(x) : c(x, a) ≤ r} are upper semicontinuous and compact-valued for each real r. For these two cases we provide new results on the existence of a solution to the average-cost optimality equality and inequality using the vanishing discount approach. We also study the convergence of the policy iteration approach under these conditions. It should be pointed out that we make no assumptions regarding the convergence and continuity of the limit function generated by the sequence of relative differences of the α-discounted value functions and the Poisson equations, as is often encountered in the literature.
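For context, the policy iteration approach referred to alternates policy evaluation and greedy improvement. Below is a minimal sketch in the much simpler finite-state, discounted setting — not the paper's general state spaces or average-cost analysis — with random toy data:

```python
import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] = next-state dist.
c = rng.random((n_s, n_a))                        # c[s, a] = one-step cost

policy = np.zeros(n_s, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = c_pi.
    P_pi = P[np.arange(n_s), policy]
    c_pi = c[np.arange(n_s), policy]
    v = np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)
    # Policy improvement: act greedily with respect to the evaluated v.
    q = c + gamma * np.einsum("sat,t->sa", P, v)
    new_policy = q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy:", policy, "values:", np.round(v, 3))
```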
Abstract:
Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independently of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating-point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors, which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure that adheres to a user-defined error bound. Finally, we introduce new visualizations that use the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures.
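A hedged toy sketch of the boundary-to-boundary map concept (one triangle, an invented linear field, brute-force nearest-sample lookup; the paper's actual construction and error quantification are far more careful):

```python
import numpy as np

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # triangle vertices

def field(p):
    # Illustrative linear vector field; real input would be mesh data.
    return np.array([-0.3 * p[0] + 1.0, 0.5 * p[1] - 0.2])

def boundary_point(t):
    # Parameter t in [0, 3) walks the boundary, one unit per edge.
    e, s = int(t) % 3, t - int(t)
    return (1 - s) * V[e] + s * V[(e + 1) % 3]

def inside(p, eps=1e-9):
    # Barycentric membership test.
    M = np.column_stack([V[1] - V[0], V[2] - V[0]])
    l1, l2 = np.linalg.solve(M, p - V[0])
    return l1 >= -eps and l2 >= -eps and l1 + l2 <= 1 + eps

def param_of(p):
    # Snap a point back to the nearest sampled boundary parameter.
    ts = np.linspace(0, 3, 3000, endpoint=False)
    pts = np.array([boundary_point(t) for t in ts])
    return ts[np.argmin(np.linalg.norm(pts - p, axis=1))]

def edge_map(t, h=1e-3, max_steps=100000):
    # Integrate a streamline from boundary parameter t until it exits,
    # returning the exit parameter: one entry of the discrete edge map.
    p = boundary_point(t)
    for _ in range(max_steps):
        k1 = field(p); k2 = field(p + 0.5 * h * k1)        # classical RK4
        k3 = field(p + 0.5 * h * k2); k4 = field(p + h * k3)
        q = p + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        if not inside(q):
            return param_of(p)
        p = q
    return param_of(p)

for t in np.linspace(0.05, 2.95, 12):
    print(f"enter at t={t:.2f} -> exit near t={edge_map(t):.2f}")
```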
Abstract:
At each outer iteration of standard Augmented Lagrangian methods, one tries to solve a box-constrained optimization subproblem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in usual convergence theories. However, in practice, one might not be able to solve the subproblem up to the required precision. This may happen for different reasons; one of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper, a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which takes numerical difficulties into account in a satisfactory way while preserving a suitable convergence theory. Numerical experiments involving all the test problems of the CUTEr collection are presented.
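To make the setting concrete, here is a minimal generic augmented Lagrangian outer loop — a sketch, not the authors' method — with a comment marking the spot where a penalty-decrease strategy like the one discussed would intervene. Problem data, update factors and tolerances are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) subject to h(x) = 0 and box bounds on x.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
h = lambda x: x[0] + x[1] - 2.0          # single equality constraint

def auglag(x0, bounds, rho=10.0, lam=0.0, tol=1e-8):
    x = np.asarray(x0, float)
    for _ in range(30):
        # Box-constrained subproblem: minimize the augmented Lagrangian.
        L = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
        res = minimize(L, x, method="L-BFGS-B", bounds=bounds)
        x = res.x
        if not res.success:
            # The paper's theme: when the inner solver struggles (often due
            # to a huge penalty), one option is to *decrease* rho and retry.
            rho *= 0.5
            continue
        if abs(h(x)) < tol:
            return x, lam
        lam += rho * h(x)                # first-order multiplier update
        rho *= 10.0                      # classical penalty increase
    return x, lam

x, lam = auglag([0.0, 0.0], [(-5, 5), (-5, 5)])
print("x =", np.round(x, 6), "multiplier =", round(lam, 4))
```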
Abstract:
Facial reconstruction is a method that seeks to recreate a person's facial appearance from his or her skull. This technique can be the last resort in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft-tissue thickness are necessary. The scientific literature has described differences in facial soft-tissue thickness between ethnic groups, and several databases of soft-tissue thickness have been published. However, there are no literature records of facial reconstruction carried out with soft-tissue data obtained from samples of Brazilian subjects, nor any reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft-tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (Fresh Cadavers pattern) and another from measurements using magnetic resonance imaging (Magnetic Resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject, based on an international pattern and the two Brazilian patterns for facial soft-tissue thickness, and to evaluate the reconstructions by comparing them to photographs of the individual and of nine other subjects. The DICOM data of a computed tomography (CT) scan donated by a volunteer were converted into stereolithography (STL) files and used to create the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject whose face was reconstructed and of nine other subjects; thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction performed with the Brazilian Magnetic Resonance pattern, by 23.33% with the Brazilian Fresh Cadavers pattern and by 20.00% with the international pattern, the target subject being the most recognized individual under the first two patterns. The rate of correct recognitions of the target subject indicates that digital forensic facial reconstruction, conducted with the parameters used in this study, may be a useful tool.
Abstract:
Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool for retrieving latent spaces, focusing on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be displayed using various existing visualization techniques. The training data are important for coding the user's knowledge into the loop. However, this work also devises a strategy for calculating PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
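A hedged sketch of the train-on-few, project-many workflow the abstract outlines, using scikit-learn's PLSRegression as a stand-in (synthetic data and labels; not the paper's implementation):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(60, 50))                  # small 50-D training set
y_train = (X_train[:, 0] + X_train[:, 1] > 0) * 1.0  # user-supplied labels
X_all = rng.normal(size=(5000, 50))                  # much larger data set

# Fit PLS on the labeled set, then project everything into its latent space.
pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)
coords = pls.transform(X_all)   # 2-D coordinates for any visualization tool
print(coords.shape)             # (5000, 2)
```

The labels are where the user's knowledge enters the loop: changing them re-orients the latent axes toward the distinctions the user cares about.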
Abstract:
Selective modulation of liver X receptor beta (LXRβ) has been recognized as an important approach to preventing or reversing the atherosclerotic process. In the present work, we have developed robust, conformation-independent, fragment-based quantitative structure-activity and structure-selectivity relationship models for a series of quinolines and cinnolines as potent modulators of the two LXR subtypes. The generated models were then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results, indicating the potential of the models for untested compounds. The final 2D molecular recognition patterns obtained were integrated with 3D structure-based molecular modeling studies to provide useful insights into the chemical and structural determinants of increased LXRβ binding affinity and selectivity.
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that under more natural assumptions than the ones employed until now, penalty parameters are bounded. For proving the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
Abstract:
A portable energy-dispersive X-ray fluorescence system was used to determine the elemental composition of 68 pottery fragments from Sambaqui do Bacanga, an archaeological site in Sao Luis, Maranhao, Brazil, occupied from 6600 BP until 900 BP. By determining the elemental chemical composition of these fragments, it was possible to verify the existence of engobe on 43 of them. From two-dimensional graphs and hierarchical cluster analysis performed on fragments from the surface and 113-cm stratigraphic levels, and from the 10-20, 132 and 144-cm levels, it was possible to group the fragments into five distinct groups according to their stratigraphies. The groupings obtained from the two-dimensional graphs agree with the hierarchical cluster analysis using Ward's method.
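A hedged sketch of this style of analysis with synthetic data (not the sherd measurements): Ward-linkage hierarchical clustering of per-fragment elemental concentrations, cut into five groups:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Rows = pottery fragments, columns = element concentrations (e.g. Fe, Ca, K).
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(12, 3)) for m in range(5)])

Z = linkage(X, method="ward")               # agglomerative merge tree
groups = fcluster(Z, t=5, criterion="maxclust")
print(groups)                               # five-group label per fragment
```

Cross-checking such cluster labels against simple two-dimensional concentration plots, as the abstract describes, is a common consistency check in archaeometric provenance studies.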
Abstract:
Objective: to develop a virtual learning environment (VLE) on genetic syndromes for elementary school students. Method: the VLE, known as Cybertutor, enables students to learn over the internet interactively. The methodology of this study comprised two stages: development and deployment of the VLE. The development of Cybertutor's educational, graphic and audiovisual content had the assistance of a geneticist from HRAC/USP and drew on scientific information available in national and international books, articles, theses and dissertations. Cybertutor was made available on the Projeto Jovem Doutor platform (http://www.jovemdoutor.org.br/jdr/) by the technical team of DTM/FMUSP. Results: the Cybertutor developed made it possible to structure the educational, graphic and audiovisual content into topics, insert reinforcement questions and a discussion list, and track student performance. Conclusion: the VLE developed can be an important health education tool on genetic syndromes, reaching the most diverse regions of the country.