829 results for computer-aided engineering tool
Abstract:
Ion implantation modifies the surface composition and properties of materials by bombardment with high-energy ions. The low temperature of the process avoids distortion and degradation of the surface or bulk mechanical properties of components. In the present work, nitrogen ion implantation at 90 keV and doses above 10¹⁷ ions/cm² has been carried out on AISI M2, D2 and 420 steels and on engineering coatings such as hard chromium, electroless Ni-P and a brush-plated Co-W alloy. Evaluation of the wear and frictional properties of these materials was performed with a lubricated Falex wear test at high loads up to 900 N and a dry pin-on-disc apparatus at loads up to 40 N. It was found that nitrogen implantation reduced the wear of AISI 420 stainless steel by a factor of 2.5 under high-load lubricated conditions and by a factor of 5.5 in low-load dry testing. Lower but significant reductions in wear were achieved for AISI M2 and D2 steels. The wear resistance of coating materials was improved by up to 4 times in lubricated wear of hard Cr coatings implanted at the optimum dose, but lower improvements were obtained for the Co-W alloy coating. However, hardened electroless Ni-P coatings showed no enhancement in wear properties. The benefits obtained in wear behaviour for the above materials were generally accompanied by a significant decrease in the running-in friction. Nitrogen implantation hardened the surface of the steels and of the Cr and Co-W coatings. An ultra-microhardness technique showed that the true hardness of implanted layers was greater than the values obtained by conventional micro-hardness methods, which often result in penetration below the implanted depth. Scanning electron microscopy revealed that implantation reduced the ploughing effect during wear, and a change in wear mechanism from an abrasive-adhesive type to a mild oxidative mode was evident.
Retention of nitrogen after implantation was studied by Nuclear Reaction Analysis and Auger Electron Spectroscopy. It was shown that maximum nitrogen retention occurs in hard Cr coatings and AISI 420 stainless steel, which explains the improvements obtained in wear resistance and hardness. X-ray photoelectron spectroscopy on these materials revealed that nitrogen is almost entirely bound to Cr, forming chromium nitrides. It was concluded that nitrogen implantation at 90 keV and doses above 3×10¹⁷ ions/cm² produced the most significant improvements in mechanical properties in materials containing nitride formers by precipitation strengthening, improving the load-bearing capacity of the surface and changing the wear mechanism from adhesive-abrasive to oxidative.
Abstract:
With the demand for engineering graduates at what may be defined as an unprecedented high, many universities find themselves facing significant levels of student attrition, with high "drop-out levels" being a major issue in engineering education. In order to address this, Aston University in the UK has radically changed its undergraduate engineering education curriculum, introducing capstone CDIO (Conceive, Design, Implement, Operate) modules for all first-year students studying Mechanical Engineering and Design. The introduction of CDIO is aimed at making project/problem-based learning the norm. Utilising this approach, the learning and teaching in engineering purposefully aims to promote innovative thinking, thus equipping students with high-level problem-solving skills in a way that builds on theory whilst enhancing practical competencies and abilities. This chapter provides an overview of an Action Research study undertaken contemporaneously with the development, introduction, and administration of the first two semesters of CDIO. It identifies the challenges and benefits of the approach and concludes by arguing that whilst CDIO is hard work for staff, it can make a real difference to students' learning experiences, thereby positively impacting retention. © 2012, IGI Global.
Abstract:
* The research work reviewed in this paper has been carried out in the context of the Russian Foundation for Basic Research funded project “Adaptable Intelligent Interfaces Research and Development for Distance Learning Systems”(grant N 02-01-81019). The authors wish to acknowledge the co-operation with the Byelorussian partners of this project.
Abstract:
This research examines evolving issues in applied computer science and applies economic and business analyses as well. There are two main areas. The first is internetwork communications as embodied by the Internet. The goal of the research is to devise an efficient pricing, prioritization, and incentivization plan that could be realistically implemented on the existing infrastructure. Criteria include practical and economic efficiency, and proper incentives for both users and providers. Background information on the evolution and functional operation of the Internet is given, and relevant literature is surveyed and analyzed. Economic analysis is performed on the incentive implications of the current pricing structure and organization. The problems are identified, and minimally disruptive solutions are proposed at all levels of implementation, down to the lowest-level protocol. Practical issues are considered and performance analyses are done. The second area of research is mass-market software engineering, and how this differs from classical software engineering. Software life-cycle revenues are analyzed and software pricing and timing implications are derived. A profit-maximizing methodology is developed to select or defer the development of software features for inclusion in a given release. An iterative model of the stages of the software development process is developed, taking into account new communications capabilities as well as profitability.
Abstract:
This dissertation introduces a novel automated book reader as an assistive technology tool for persons with blindness. The literature shows extensive work in the area of optical character recognition, but the current methodologies available for the automated reading of books or bound volumes remain inadequate and are severely constrained during document scanning or image acquisition processes. The goal of the book reader design is to automate and simplify the task of reading a book while providing a user-friendly environment with a realistic but affordable system design. This design responds to the main concerns of (a) providing a method of image acquisition that maintains the integrity of the source, (b) overcoming optical character recognition errors created by inherent imaging issues such as curvature effects and barrel distortion, and (c) determining a suitable method for accurate recognition of characters that yields an interface with the ability to read from any open book with a high reading accuracy nearing 98%. This research endeavor focuses initially on the development of an assistive technology tool to help persons with blindness in the reading of books and other bound volumes. Its secondary and broader aim is to provide in this design a platform for the digitization of bound documentation, in line with the mission of the Open Content Alliance (OCA), a nonprofit alliance aimed at making reading materials available in digital form. The theoretical perspective of this research relates to the mathematical developments made in order to resolve both the inherent distortions due to the properties of the camera lens and the anticipated distortions of the changing page curvature as one leafs through the book. This is evidenced by the significant increase in the character recognition rate and a high-accuracy read-out through text-to-speech processing.
This reasonably priced interface, with its high-performance results and its compatibility with any computer or laptop through universal serial bus connectors, greatly extends the prospects for universal accessibility to documentation.
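The lens-distortion correction described in the abstract can be sketched in code. The snippet below is not the dissertation's actual mathematics; it assumes a single-coefficient radial (barrel) distortion model with an illustrative coefficient `k1` and optical centre, and inverts it by fixed-point iteration, a standard way such lens models are undone.

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Correct barrel distortion for 2D pixel points.

    Assumed model (single radial coefficient):
        distorted = center + (pts - center) * (1 + k1 * r^2)
    The inverse has no closed form, so we solve it by
    fixed-point iteration, which converges for small k1.
    """
    pts = np.asarray(pts, dtype=float)
    d = pts - center              # distorted offsets from the optical centre
    u = d.copy()                  # initial guess: undistorted == distorted
    for _ in range(20):
        r2 = np.sum(u * u, axis=-1, keepdims=True)
        u = d / (1.0 + k1 * r2)   # refine the undistorted offsets
    return center + u

# Round-trip check: distort a point with the model, then recover it.
center = np.array([320.0, 240.0])   # illustrative optical centre
k1 = 1e-7                           # illustrative distortion coefficient
p = np.array([[400.0, 300.0]])
r2 = np.sum((p - center) ** 2)
distorted = center + (p - center) * (1 + k1 * r2)
recovered = undistort_points(distorted, k1, center)
```

In a full pipeline, `k1` and the optical centre would be estimated by camera calibration before correcting scanned page images, and a separate model would handle the page-curvature distortion.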
Abstract:
Software Engineering is one of the most widely researched areas of Computer Science. The ability to reuse software, much like the reuse of hardware components, is one of the key issues in software development. The object-oriented programming methodology is revolutionary in that it promotes software reusability. This thesis describes the development of a tool that helps programmers to design and implement software from within the Smalltalk environment (an object-oriented programming environment). The ASDN tool is part of the PEREAM (Programming Environment for the Reuse and Evolution of Abstract Models) system, which advocates incremental development of software. The ASDN tool, along with the PEREAM system, seeks to enhance the Smalltalk programming environment by providing facilities for the structured development of abstractions (concepts). It produces a document that describes the abstractions developed using the tool. The features of the ASDN tool are illustrated by an example.
Abstract:
Drawing upon critical, communications, and educational theories, this thesis develops a novel framing of the problem of social risk in the extractive sector, as it relates to the building of respectful relationships with indigenous peoples. Building upon Bakhtin’s dialogism, the thesis demonstrates the linkage of this aspect of social risk to professional education, and specifically, to the undergraduate mining engineering curriculum, and develops a framework for the development of skills related to intercultural competence in the education of mining engineers. The knowledge of social risk, as well as the level of intercultural competence, of students in the mining engineering program, is investigated through a mixture of surveys and focus groups – as is the impact of specific learning interventions. One aspect of this investigation is whether development of these attributes alters graduates’ conception of their identity as mining engineers, i.e. the range and scope of responsibilities, and understanding of to whom responsibilities are owed, and their role in building trusting relationships with communities. Survey results demonstrate that student openness to the perspectives of other cultures increases with exposure to the second year curriculum. Students became more knowledgeable about social dimensions of responsible mining, but not about cultural dimensions. Analysis of focus group data shows that students are highly motivated to improve community perspectives and acceptance. It is observed that students want to show respect for diverse peoples and communities where they will work, but they are hampered by their inability to appreciate the viewpoints of people who do not share their values. They embrace benefit sharing and environmental protection as norms, but they mistakenly conclude that opposition to mining is rooted in a lack of education rather than in cultural values. 
Three sequential threshold concepts are identified as impeding the development of intercultural competence: Awareness and Acknowledgement of Different Forms of Knowledge; Recognition that Value Systems are a Function of Culture; and Respect for Varied Perceptions of Social Wellbeing and Quality of Life. Future curriculum development in the undergraduate mining engineering program, as well as in other educational programs relevant to the extractive sector, can be effectively targeted by focusing on these threshold concepts.
Abstract:
This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surface(s) (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-Splines / Bezier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated by adopting flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated with the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
Abstract:
This research paper presents work on feature recognition, tool path data generation and integration with STEP-NC (AP-238 format) for features having Free form / Irregular Contoured Surface(s) (FICS). Initially, the FICS features are modelled / imported in the UG CAD package and a closeness index is generated. This is done by comparing the FICS features with basic B-Splines / Bezier curves / surfaces. Then blending functions are calculated by adopting the convolution theorem. Based on the blending functions, contour offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated with the STEP-NC (AP-238) format. The tool path algorithm and STEP-NC data are tested with various industrial parts through an automated UFUNC plugin.
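As an illustration of the reference geometry involved, a Bezier curve of the kind the FICS features are compared against can be evaluated with De Casteljau's algorithm. This is a minimal sketch only; the papers' actual closeness-index and convolution-based blending computations are not reproduced here, and the function name `bezier_point` is illustrative.

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) using
    De Casteljau's algorithm: repeatedly interpolate between
    consecutive control points until one point remains."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Quadratic Bezier: at t = 0.5 the curve passes through the midpoint
# of the two chord midpoints of the control polygon.
ctrl = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
p = bezier_point(ctrl, 0.5)
```

Sampling such reference curves densely and measuring their deviation from a recognized feature is one simple way a closeness index could be formed.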
Abstract:
Much of the bridge stock on major transport links in North America and Europe was constructed in the 1950s and 1960s and has since deteriorated or is carrying loads far in excess of the original design loads. Structural Health Monitoring (SHM) systems can provide valuable information on bridge capacity, but the application of such systems is currently limited by access and bridge type. This paper investigates the use of computer vision systems for SHM. A series of field tests was carried out to test the accuracy of displacement measurements using contactless methods. A video image of each test was processed using a modified version of the optical flow tracking method to track displacement. These results were validated against an established measurement method using linear variable differential transformers (LVDTs). The displacements calculated by the algorithm agree to within 2% of the LVDT measurements; a number of post-processing methods were then applied to attempt to reduce this error further.
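Contactless displacement measurement of this kind can be sketched with a simple area-based tracker. This is not the paper's modified optical-flow code; it is a minimal normalised cross-correlation search (the name `track_displacement` is illustrative) that recovers the pixel displacement of a target patch between two video frames.

```python
import numpy as np

def track_displacement(frame0, frame1, top_left, size):
    """Track a rectangular target patch between two frames by exhaustive
    normalised cross-correlation; returns the (dy, dx) displacement of
    the patch from frame0 to frame1."""
    y0, x0 = top_left
    h, w = size
    tmpl = frame0[y0:y0 + h, x0:x0 + w].astype(float)
    tmpl -= tmpl.mean()
    best, best_score = (0, 0), -np.inf
    H, W = frame1.shape
    for dy in range(-y0, H - h - y0 + 1):
        for dx in range(-x0, W - w - x0 + 1):
            win = frame1[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w].astype(float)
            win -= win.mean()
            denom = np.sqrt((tmpl ** 2).sum() * (win ** 2).sum())
            score = (tmpl * win).sum() / denom if denom else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic test: a textured 5x5 target moves 3 px down and 2 px right.
pattern = np.arange(25.0).reshape(5, 5)
frame0 = np.zeros((40, 40)); frame0[10:15, 10:15] = pattern
frame1 = np.zeros((40, 40)); frame1[13:18, 12:17] = pattern
dy, dx = track_displacement(frame0, frame1, (10, 10), (5, 5))
```

In practice the pixel displacement would be converted to engineering units using the known target size, and sub-pixel interpolation of the correlation peak would reduce quantisation error.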
Abstract:
Using product and system design to influence user behaviour offers potential for improving performance and reducing user error, yet little guidance is available at the concept generation stage for design teams briefed with influencing user behaviour. This article presents the Design with Intent Method, an innovation tool for designers working in this area, illustrated via application to an everyday human–technology interaction problem: reducing the likelihood of a customer leaving his or her card in an automatic teller machine. The example application results in a range of feasible design concepts which are comparable to existing developments in ATM design, demonstrating that the method has potential for development and application as part of a user-centred design process.
Abstract:
This thesis covers the challenges of creating and maintaining an introductory engineering laboratory. The history of the University of Illinois Electrical and Computer Engineering department's introductory course, ECE 110, is recounted. The current state of the course, as of Fall 2008, is discussed, along with current challenges arising from the use of a hand-wired prototyping board with logic gates. A plan for overcoming these issues using a new microcontroller-based board with a pseudo hardware description language is presented. The new microcontroller-based system implementation is extensively detailed, along with its new accompanying description language. This new system was tried in several sections of the Fall 2008 semester alongside the old system; the students' final performances with the two approaches are compared in terms of design, performance, complexity, and enjoyment. In its first run, the system shows great promise, increasing the students' enjoyment and improving the performance of their designs.
Reservoir system analysis, conservation: Hydrologic Engineering Center computer program 23-J2-L253.
Abstract:
At head of cover title: Generalized computer program.
Abstract:
Strawberries harvested for processing as frozen fruits are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it necessitate the need to maintain cutting tool sanitation, but it also increases labor time and exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is broken up into five sections. The first two sections include a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through RGB real-time feature extraction. 
The high-speed multi-waterjet knife system uses direct drive actuation to locate 30,000 psi cutting streams to precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
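The RGB feature-extraction step for calyx identification can be illustrated with a toy segmentation. The rule used here (a pixel is "calyx" where its green channel exceeds its red channel) is an assumption for illustration only, not the AVID machine's actual classifier, and `calyx_cut_row` is a hypothetical helper name.

```python
import numpy as np

def calyx_cut_row(rgb):
    """Estimate the cut line separating green calyx pixels from red
    berry pixels in an RGB image (illustrative rule: a pixel is
    'calyx' where green dominates red). Returns the first image row
    below the calyx region."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    calyx_mask = g > r                          # assumed green-vs-red rule
    rows = np.where(calyx_mask.any(axis=1))[0]  # rows containing calyx pixels
    return int(rows.max()) + 1 if rows.size else 0

# Synthetic berry: green calyx in rows 0-4, red fruit in rows 5-19.
img = np.zeros((20, 10, 3), dtype=np.uint8)
img[:5, :, 1] = 200   # green calyx band
img[5:, :, 0] = 200   # red berry body
cut = calyx_cut_row(img)
```

A real-time system would run a rule of this kind (or a trained classifier) per frame and pass the resulting cut coordinates to the waterjet actuation stage.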
Abstract:
The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA and IPCA algorithms in compression, recognition and detection; and to compare the performance of the digital and optical models in recognition and detection. The MATLAB® software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in data of high dimension (more than three dimensions) is difficult because graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
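The PCA-based compression evaluated in the thesis can be sketched in a few lines of linear algebra: project mean-centred data onto the top-k principal components and reconstruct. This is a generic PCA sketch under assumed synthetic data, not the thesis's IPCA algorithm.

```python
import numpy as np

def pca_compress(X, k):
    """Compress the rows of X by projecting onto the top-k principal
    components (obtained via SVD of the mean-centred data), then
    reconstruct an approximation of X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]                # top-k principal directions
    scores = Xc @ comps.T         # compressed representation (n x k)
    return scores @ comps + mean  # reconstruction in the original space

rng = np.random.default_rng(0)
# Synthetic low-rank data: 100 samples in 20 dims, intrinsic rank 3,
# plus a small amount of noise.
basis = rng.normal(size=(3, 20))
X = rng.normal(size=(100, 3)) @ basis + 0.01 * rng.normal(size=(100, 20))
err3 = np.linalg.norm(X - pca_compress(X, 3))
err1 = np.linalg.norm(X - pca_compress(X, 1))
```

Keeping more components never increases the reconstruction error; the compression trade-off is choosing the smallest k whose error is acceptable for the application.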