967 results for graphic computation
Abstract:
The employment of flexibility in the design of façades makes them adaptable to adverse weather conditions, resulting in both minimization of environmental discomfort and improvement of energy efficiency. The present study highlights the potential of flexible façades as a resource to reduce the rigidity and formal repetition usually found in condominiums of standardized houses; as such, the work presented herein contributes to the field of architectural project strategies for adapting and integrating buildings within the local climate context. Two façade options were designed using bionics and kinetics, and their applications to architectural constructions, as references. This resulted in two lightweight and dynamic structures that meet comfort constraints through combinations of movements controlling the impact of solar radiation and cooling on the environment. The efficacy and technical functionality of the façades were tested with comfort analysis and graphic computation software, as well as with physical models. Thus, the current research contributes to the improvement of architectural solutions based on passive energy strategies, offering both better quality for the users and greater sustainability for the planet.
Abstract:
Currently there is still a high demand for quality control in the manufacturing of mechanical parts. This keeps alive the need to inspect final products, from dimensional analysis to the chemical composition of products. This task is usually performed through various non-destructive and destructive methods that verify the integrity of the parts. The results generated by modern inspection tools often cannot geometrically define the real damage and, therefore, cannot be properly displayed on a computer screen. Virtual 3D visualization may help identify damage that would hardly be detected by other methods. Some commercial software packages address the stages of design and simulation of mechanical parts in order to predict possible damage and reduce undesirable events. However, the challenge of developing software capable of integrating the various design activities, product inspection, non-destructive testing results, and damage simulation still needs the attention of researchers. This was the motivation for a methodological study on the implementation of a versatile CAD/CAE computational kernel capable of helping programmers develop software for the design and simulation of mechanical parts under stress. This research presents results obtained with the developed kernel, showing that it was successfully applied to design case studies involving parts with specific geometries, namely: mechanical prostheses, heat exchangers, and oil and gas piping. Finally, conclusions are presented regarding the experience of merging CAD and CAE theories to develop the kernel as a tool adaptable to various applications in the metalworking industry.
Abstract:
Purpose: The aim of this study was to compare splinting techniques for impression copings of osseointegrated implants with different angulations. Materials and Methods: Replicas (N = 24) of a metal matrix (control) containing two implants at 90 degrees and 65 degrees in relation to the horizontal surface were obtained using four impression techniques: Technique 1 (T1), direct technique with unsplinted square copings in open trays; Technique 2 (T2), square copings splinted with dental floss and autopolymerizing acrylic resin; Technique 3 (T3), square copings splinted with dental floss and autopolymerizing acrylic resin, sectioned, and splinted again with autopolymerizing acrylic resin; Technique 4 (T4), square copings splinted with a prefabricated acrylic resin bar. The impression material was polyether. The replicas were individually scanned to capture images, which were assessed in a graphic computation program. The program allowed the angulation between the bases of the replicas and the reading screws to be measured. The images of the replicas were compared with the matrix image (control), and the differences in angulation from the control image were calculated. Analysis of variance and the Tukey test for comparisons (p < 0.05) were used for statistical analysis. Results: All groups showed significant differences in implant angulation compared with the control group (p < 0.05). Group T1 showed the largest difference (1.019 degrees), followed by groups T2 (0.747 degrees), T3 (0.516 degrees), and T4 (0.325 degrees), which showed the lowest angular alteration compared with the control group. There were significant differences between inclined and straight implants in all groups except group T4. Conclusions: Based on the results, splinting of pick-up impression copings is indicated for osseointegrated implant impressions. The square copings splinted with a prefabricated acrylic resin bar presented the best results among the pick-up impression techniques evaluated in this study.
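The angular-deviation measurement described above reduces to comparing two axis vectors. The function below is a minimal, hypothetical sketch of that computation, not the actual routine used in the study's graphic computation program:

```python
import numpy as np

def angular_deviation(v_control, v_replica):
    """Unsigned angle (degrees) between a control axis and a replica axis."""
    a = np.asarray(v_control, dtype=float)
    b = np.asarray(v_replica, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # clip guards against round-off pushing the dot product outside [-1, 1]
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
```

A replica axis tilted 45 degrees away from a vertical control axis, for instance, yields a deviation of 45 degrees.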
Abstract:
Purpose: The objective of this study was to evaluate and compare 3 impression techniques for osseointegrated implant transfer procedures. Materials and Methods: (1) Group Splinted with Acrylic Resin (SAR), impression with square copings splinted with a prefabricated autopolymerizing acrylic resin bar; (2) Group Splinted with Light-Curing Resin (SLR), impression with square copings splinted with a prefabricated light-curing composite resin bar; (3) Group Independent Air-abraded (IAA), impression with independent, aluminum-oxide air-abraded square copings. Impression procedures were performed with polyether material, and the data obtained were compared with a control group, characterized by metal matrix (MM) measurements of implant inclinations at 90 and 65 degrees in relation to the matrix surface. Readings of analog and implant inclinations were assessed randomly using the AutoCAD graphic computation software. Angular deviations of the experimental groups from the MM were submitted to analysis of variance, and means were compared using Tukey's test (P < 0.05). Results: There was no statistically significant difference between the SAR and SLR experimental groups and the MM for vertical and angulated implants. Group IAA presented a statistically significant difference for angulated implants. Conclusion: Within the limitations of this study, it was concluded that SAR and SLR produced more accurate casts than the IAA technique, which presented inferior results.
Abstract:
This work focuses on the application of a computational method to obtain clinographic maps using C.K. Wentworth's method. The program developed to construct the clinographic maps is written entirely in the BASIC language and runs on PC-line microcomputers. As a practical application exercise, a region was defined on the 1:250,000-scale Marilia sheet of the topographic base map of Brazil, with the study area near the city of Assis (SP).
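A clinographic (slope) map can also be computed directly from gridded elevations. The sketch below uses finite differences on a digital elevation model, a modern substitute for Wentworth's contour-crossing method rather than his formula; the grid and cell size are illustrative assumptions:

```python
import numpy as np

def slope_map(dem, cell):
    """Slope in degrees at each grid cell of an elevation model.

    dem  : 2-D array of elevations
    cell : grid spacing in the same units as the elevations
    """
    # finite-difference gradients along rows and columns
    dz_dy, dz_dx = np.gradient(dem, cell)
    # slope angle from the gradient magnitude
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

A plane rising one unit of elevation per unit of distance, for example, yields a uniform 45-degree slope map.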
Abstract:
Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are useful only for standard machining. The algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be added cleanly to these systems while keeping everything else intact: it is completely transparent to the user, its cost is much lower, and its development time is much shorter than replacing the computers with faster ones. This paper presents an optimisation that exploits the power of multi-core Graphic Processing Units (GPUs) to improve tool path computation. The improvement is applied to a highly accurate and robust tool path generation algorithm. As a case study, the paper presents a fully implemented algorithm for turning lathe machining of shoe lasts. A comparative study shows the gain achieved in total computing time: execution is almost two orders of magnitude faster than on modern PCs.
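The paper's algorithm is not reproduced here, but the reason tool path computation maps well to GPUs can be illustrated with the classic drop-cutter (z-map) idea, in which every XY grid cell is processed independently. The CPU sketch below is a hypothetical illustration of that data-parallel structure, not the paper's method or GPU code; the ball-end cutter model and periodic grid treatment are simplifying assumptions:

```python
import numpy as np

def drop_cutter(heightmap, cell, radius):
    """Lowest safe tool-tip height over each cell for a ball-end cutter.

    heightmap : 2-D array of surface heights on a regular XY grid
    cell      : grid spacing
    radius    : ball-end cutter radius
    Each cell is independent, which is what makes the method GPU-friendly.
    """
    n = int(np.ceil(radius / cell))
    tip = np.full_like(heightmap, -np.inf)
    for di in range(-n, n + 1):
        for dj in range(-n, n + 1):
            d2 = (di * cell) ** 2 + (dj * cell) ** 2
            if d2 > radius ** 2:
                continue
            # height of the ball surface above its tip at this lateral offset
            bump = np.sqrt(radius ** 2 - d2) - radius  # always <= 0
            # np.roll treats the grid as periodic; fine for this sketch
            shifted = np.roll(np.roll(heightmap, -di, axis=0), -dj, axis=1)
            tip = np.maximum(tip, shifted + bump)
    return tip
```

Over a flat surface the tool tip simply rests on the surface; near a raised feature, neighbouring cells lift the tip so the cutter clears it.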
Abstract:
"Embodies a course given by the writer for a number of years in the mathematical laboratory of the Massachusetts Institute of Technology."
Abstract:
Several numerical methods for boundary value problems use integral and differential operational matrices, expressed in polynomial bases in a Hilbert space of functions. This work presents a sequence of matrix operations allowing a direct computation of operational matrices for polynomial bases, orthogonal or not, starting with any previously known reference matrix. Furthermore, it shows how to obtain the reference matrix for a chosen polynomial base. The results presented here can be applied not only for integration and differentiation, but also for any linear operation.
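The idea of obtaining an operational matrix in an arbitrary polynomial basis from a known reference matrix can be sketched with a similarity transform. The example below is a minimal illustration under assumed conventions (column j of A holds the monomial coefficients of the j-th basis polynomial), not the paper's algorithm; the Chebyshev basis is chosen only because its derivative relations are easy to check:

```python
import numpy as np

N = 4  # basis size: polynomial degrees 0..3

# Reference: differentiation operational matrix in the monomial basis
# 1, x, x^2, x^3 (coefficient c_{j+1} of x^{j+1} contributes (j+1)*c_{j+1}
# to the derivative's x^j coefficient)
Dm = np.diag(np.arange(1.0, N), k=1)

# Change of basis: column j of A = monomial coefficients of Chebyshev T_j
# T0 = 1, T1 = x, T2 = 2x^2 - 1, T3 = 4x^3 - 3x
A = np.array([[1.0, 0.0, -1.0,  0.0],
              [0.0, 1.0,  0.0, -3.0],
              [0.0, 0.0,  2.0,  0.0],
              [0.0, 0.0,  0.0,  4.0]])

# Operational matrix in the new basis: Db = A^{-1} Dm A
Db = np.linalg.solve(A, Dm @ A)
```

The column of Db for T3 recovers the known identity T3' = 3 T0 + 6 T2, confirming that the transform carries the reference matrix into the new basis.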
Abstract:
The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is two powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
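The role of beam splitters and phase shifters in this scheme can be sketched numerically for a single dual-rail qubit (one photon shared between two optical modes), where each element is just a 2x2 mode transformation. The conventions below (real beam-splitter matrix, phase on the second mode) are illustrative assumptions, not the paper's formalism:

```python
import numpy as np

def beam_splitter(theta):
    """2x2 mode transformation of a lossless beam splitter (real convention)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second optical mode."""
    return np.array([[1.0, 0.0],
                     [0.0, np.exp(1j * phi)]], dtype=complex)

# Dual-rail qubit: |0> = photon in mode a, |1> = photon in mode b.
# A pi phase shifter after a 45-degree beam splitter realizes a
# Hadamard-like gate on the dual-rail qubit (in this convention).
H = phase_shifter(np.pi) @ beam_splitter(np.pi / 4)
```

Composing such beam splitters and phase shifters generates arbitrary single-qubit rotations; the nontrivial part of the proposal is achieving the entangling gates with photodetection and feedback.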
Abstract:
This paper is devoted to the problems of finding the load flow feasibility, saddle node, and Hopf bifurcation boundaries in the space of power system parameters. The first part contains a review of the existing relevant approaches, including not-so-well-known contributions from Russia. The second part presents a new robust method for finding the power system load flow feasibility boundary on the plane defined by any three vectors of dependent variables (nodal voltages), called the Delta plane. The method exploits some quadratic and linear properties of the load flow equations and state matrices written in rectangular coordinates. An advantage of the method is that it does not require an iterative solution of nonlinear equations (except the eigenvalue problem). In addition to benefits for visualization, the method is a useful tool for topological studies of power system multiple solution structures and stability domains. Although the power system application is developed, the method can be equally efficient for any quadratic algebraic problem.
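What a load flow feasibility boundary means can be seen in the textbook two-bus case, where the quadratic nature of the load flow equations is explicit. The sketch below is a minimal illustration of that standard example, not the paper's Delta-plane method; the source voltage E and line reactance X are assumed values:

```python
import numpy as np

def load_voltages(P, Q, E=1.0, X=0.5):
    """Voltage magnitudes of a 2-bus system (source E behind reactance X
    feeding a P + jQ load), from the quadratic-in-V^2 load flow equation
    V^4 + (2*Q*X - E^2)*V^2 + X^2*(P^2 + Q^2) = 0.
    Returns the (high, low) solutions, or None past the feasibility boundary."""
    b = 2.0 * Q * X - E ** 2
    c = X ** 2 * (P ** 2 + Q ** 2)
    disc = b ** 2 - 4.0 * c
    if disc < 0:
        return None  # outside the load flow feasibility region
    v2_hi = (-b + np.sqrt(disc)) / 2.0
    v2_lo = (-b - np.sqrt(disc)) / 2.0
    return np.sqrt(v2_hi), np.sqrt(v2_lo)

# Saddle-node (nose) point for Q = 0: the discriminant vanishes at
# P_max = E^2 / (2*X), where the two voltage solutions coalesce.
P_max = 1.0 ** 2 / (2.0 * 0.5)
```

Below P_max the high- and low-voltage solutions coexist; at the boundary they merge, which is exactly the saddle-node bifurcation the paper's method traces in higher dimensions.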
Abstract:
Many models exist in the literature to explain the success of technological innovation. However, no studies have examined the graphic formats used to represent technological innovation models, their impact, or how non-specialists in technology management understand them. Thus, the main objective of this paper is to propose a new graphic configuration to represent technological innovation management. Based on the literature, the innovation model is first presented in the traditional format. Next, the same model is designed in a new graphic format, named `the see-saw of competitiveness`, showing the interfaces among the identified factors. The two graphic formats were compared by a group of graduate students in terms of ease of understanding the conceptual model of innovation. The statistical analysis shows that the see-saw of competitiveness is preferred.