960 results for Technicians in industry
Abstract:
The study is entitled “HUMAN RESOURCES DEVELOPMENT IN HIGHER EDUCATION IN KERALA”. The concept of “Human Resource Development” is highly valued in business and industry, where it has been applied for years. In industry and business the ‘human’ element is considered a resource, and hence its development and protection are essential. Of all the factors of production, human resource is the only one with a rational faculty, and it must therefore be handled with the utmost care. Right recruitment, right training and right induction, followed by careful monitoring and welfare measures, are decisive factors in business and industry; altogether, there is constant attention to the human factor there. This is not the practice in education. So far there has been no comparable regime of care, close monitoring and performance analysis of human resources on the education front, and this may be the main reason for the lack of accountability in the sphere of education. The present study reveals the importance of introducing HRD practices in higher educational institutions in Kerala; this is a basic requirement for ensuring human capital formation through education. Higher educational institutions should follow the methods of industry and commerce, because education can be treated as an industry in the service sector: there too we can apply right recruitment, right training and promotion, delegation, performance analysis and accountability checks for human resources. HRD is a powerful idea for transforming human beings into highly productive, contributing factors. The HRD of students is the sum total of the HRD of teachers; recalling the old saying ‘Yatha Raja Thadha Praja’, the quality of the faculty is reflected in the students. The quality of the administrative staff in colleges also affects the quality of higher education. Hence, it is high time to introduce the managerial method of HRD, with all its paraphernalia, in higher educational institutions so as to assure proper human capital formation in higher education in India.
Abstract:
Consider the statement "this project should cost X and has risk of Y". Such statements are used daily in industry as the basis for making decisions. The work reported here is part of a study aimed at providing a rational and pragmatic basis for such statements. Of particular interest are predictions made during the requirements and early phases of projects. A preliminary model has been constructed using Bayesian Belief Networks, and in support of this a programme to collect and study data during the execution of various software development projects commenced in May 2002. The data collection programme is undertaken under the constraints of a commercial industrial regime of multiple concurrent small to medium scale software development projects. Guided by pragmatism, the work is predicated on the use of data that can be collected readily by project managers, including expert judgements, effort, elapsed times and metrics collected within each project.
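As a rough illustration of the modelling approach this abstract names, the sketch below builds a toy Bayesian Belief Network with pgmpy; the variables (ReqQuality, Effort, Risk) and all probabilities are invented for illustration and are not the authors' model.

```python
# Toy BBN sketch: requirements quality influences both effort and risk.
# All structure and numbers are assumed, purely for illustration.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("ReqQuality", "Effort"), ("ReqQuality", "Risk")])

cpd_req = TabularCPD("ReqQuality", 2, [[0.7], [0.3]])  # states: good / poor
cpd_effort = TabularCPD(
    "Effort", 2,                      # states: low / high
    [[0.8, 0.3],                      # P(Effort=low | ReqQuality=good, poor)
     [0.2, 0.7]],
    evidence=["ReqQuality"], evidence_card=[2],
)
cpd_risk = TabularCPD(
    "Risk", 2,                        # states: low / high
    [[0.9, 0.4],
     [0.1, 0.6]],
    evidence=["ReqQuality"], evidence_card=[2],
)
model.add_cpds(cpd_req, cpd_effort, cpd_risk)

# "This project should cost X and has risk of Y", given observed evidence:
inference = VariableElimination(model)
print(inference.query(["Effort", "Risk"], evidence={"ReqQuality": 1}))
```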
Abstract:
Interdisciplinary research presents particular challenges for unambiguous communication. Frequently, the meanings of words differ markedly between disciplines, leading to apparent consensus masking fundamental misunderstandings. Researchers can agree on the need for models, but conceive of models fundamentally differently. While mathematics is frequently seen as an elitist language reinforcing disciplinary distinctions, both mathematics and modelling can also offer scope to bridge disciplinary epistemological divisions and create common ground on which very different disciplines can meet. This paper reflects on the role and scope for mathematics and modelling to present a common epistemological space in interdisciplinary research spanning the social, natural and engineering sciences.
Abstract:
Transport and deposition of charged inhaled aerosols in a double planar bifurcation representing generations three to five of the human respiratory system has been studied under a light-activity breathing condition. Both steady and oscillatory laminar inhalation airflows are considered. Particle trajectories are calculated in a Lagrangian reference frame, with particle motion governed by the fluid force driven by the airflow, the gravity force and electrostatic forces (both space and image charge forces). The particle-mesh method is selected to calculate the space charge force. This numerical study investigates the deposition efficiency in the three-dimensional model for various particle sizes, charge values and inlet particle distributions. Numerical results indicate that particles carrying an adequate level of charge can improve deposition efficiency in the airway model.
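A minimal sketch of the kind of Lagrangian tracking described here, under assumed illustrative parameters: one charged particle near a conducting wall, moved by Stokes drag, gravity and the image-charge force. This is not the authors' solver, and the particle-mesh space-charge computation is omitted.

```python
# Single charged particle near a wall at y = 0 (assumed geometry and values).
import numpy as np

rho_p, d_p = 1000.0, 1e-6                 # particle density [kg/m^3], diameter [m]
m_p = rho_p * np.pi * d_p**3 / 6.0        # particle mass [kg]
mu = 1.8e-5                               # air dynamic viscosity [Pa s]
tau = rho_p * d_p**2 / (18.0 * mu)        # particle relaxation time [s]
q = 5000 * 1.602e-19                      # assumed charge: 5000 elementary charges
eps0 = 8.854e-12                          # vacuum permittivity [F/m]
g = np.array([0.0, -9.81])                # gravity [m/s^2]
u_air = np.array([0.05, 0.0])             # assumed local axial air velocity [m/s]

def step(x, v, dt):
    """Semi-implicit Euler: the drag term is treated implicitly for stability."""
    h = max(x[1], 1e-9)                   # distance to the wall
    a_image = -q**2 / (16.0 * np.pi * eps0 * h**2) / m_p  # attraction to wall
    a_ext = g + np.array([0.0, a_image])
    v = (v + dt * (u_air / tau + a_ext)) / (1.0 + dt / tau)
    return x + dt * v, v

x, v = np.array([0.0, 1e-4]), np.array([0.05, 0.0])  # start 0.1 mm off the wall
dt = 1e-5
for n in range(20000):
    x, v = step(x, v, dt)
    if x[1] <= 0.0:                       # particle reaches the wall: deposited
        print(f"deposited after {n * dt:.4f} s at axial position {x[0]:.5f} m")
        break
else:
    print("not deposited within the simulated window")
```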
Abstract:
This study analyzes the regional spatial dynamics of the New York region for a period of roughly twenty years and places the effects of the 9/11 terrorist attacks in the context of longer-term regional dynamics. The analysis reveals that office-using industries are still heavily concentrated in Manhattan despite ongoing decentralization in many of these industries over the last twenty years. Financial services tend to be highly concentrated in Manhattan whereas administrative and support services are the least concentrated of the six major office-using industry groups. Although office employment has been by and large stagnant in Manhattan for at least two decades, growth of output per worker has outpaced the CMSA as well as the national average. This productivity differential is mainly attributable to competitive advantages of office-using industries in Manhattan and not to differences in industry composition. Finally, the zip-code level analysis of the Manhattan core area yielded further evidence of the existence of significant spillover effects at the small-scale level.
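The attribution of the productivity differential to competitive advantage rather than industry composition suggests a shift-share style decomposition; the sketch below illustrates that arithmetic with made-up shares and productivities, not the study's data.

```python
# Shift-share style split of a regional productivity gap (illustrative numbers).
import numpy as np

industries = ["finance", "professional", "information", "admin_support"]
share_region = np.array([0.40, 0.30, 0.15, 0.15])  # employment shares, region
share_nation = np.array([0.20, 0.30, 0.20, 0.30])  # employment shares, nation
prod_region = np.array([250., 180., 190., 90.])    # output per worker, $1000s
prod_nation = np.array([180., 150., 170., 85.])

gap = share_region @ prod_region - share_nation @ prod_nation
mix_effect = (share_region - share_nation) @ prod_nation         # industry composition
competitive_effect = share_region @ (prod_region - prod_nation)  # within-industry advantage

# The two effects sum exactly to the total gap by construction.
assert np.isclose(gap, mix_effect + competitive_effect)
print(f"gap={gap:.1f}, mix={mix_effect:.1f}, competitive={competitive_effect:.1f}")
```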
Abstract:
In this article, we make two important contributions to the literature on clusters. First, we broaden the theory of cluster connectivity, which has hitherto focused on organization-based pipelines and MNE subsidiaries, by including linkages in the form of personal relationships. Second, we use the lens of social network theory to derive a number of testable propositions. We propose that global linkages with decentralized network structures have the highest potential for local spillovers. In the emerging economy context, our theory implies that clusters linked to the global economy by decentralized pipelines have potential for in-depth catch-up, focused in industry and technology scope. In contrast, clusters linked through decentralized personal relationships have potential for in-breadth catch-up over a range of related industries and technologies. We illustrate our theoretical propositions by contrasting two emerging economy case studies: Bollywood, the Indian filmed entertainment cluster in Mumbai, and the Indian software cluster in Bangalore.
Abstract:
Academic writing has a tendency to be turgid and impenetrable. This is not only anathema to communication between academics, but also a major barrier to advancing construction industry development. Clarity in our communication is a prerequisite to effective collaboration with industry. An exploration of what it means to be an academic in a university is presented in order to provide a context for a discussion on how academics might collaborate with industry to advance development. There are conflicting agendas that pull the academic in different directions: peer-group recognition, institutional success and industry development. None can be achieved without the others, which results in the need for a careful balancing act. While academics search for better understandings and provisional explanations within the context of conceptual models, industry seeks the practical application of new ideas, whether those ideas come from research or experience. Universities have a key role to play in industry development and in economic development.
Abstract:
Modular product architectures have generated numerous benefits for companies in terms of cost, lead-time and quality. The defined interfaces and the modules' properties decrease the effort to develop new product variants, and provide an opportunity to perform parallel tasks in design, manufacturing and assembly. The background of this thesis is that companies perform verifications (tests, inspections and controls) of products late, when most of the parts have been assembled. This extends the lead-time to delivery and erodes the benefits of a modular product architecture, particularly when the verifications are extensive and the frequency of detected defects is high. Owing to the number of product variants obtained from the modular product architecture, verifications must handle a wide range of equipment, instructions and goal values to ensure that high-quality products can be delivered. As a result, the total benefits of a modular product architecture are difficult to achieve. This thesis describes a method for planning and performing verifications within a modular product architecture. The method supports companies by utilizing the defined modules for verification already at module level, so-called MPV (Module Property Verification). With MPV, defects are detected earlier than with verification of a complete product, and the number of verifications is decreased. The MPV method is built up of three phases. In Phase A, candidate modules are evaluated on the basis of the costs and lead-time of the verifications and the repair of defects. An MPV-index is obtained which quantifies the module and indicates whether the module should be verified at product level or by MPV. In Phase B, the interface interaction between the modules is evaluated, as well as the distribution of properties among the modules; the purpose is to evaluate the extent to which supplementary verifications at product level are needed. Phase C supports the selection of the final verification strategy: the cost and lead-time of the supplementary verifications are considered together with the results from Phases A and B. The MPV method is based on a set of qualitative and quantitative measures and tools which provide an overview and support the achievement of cost- and time-efficient, company-specific verifications. A practical application in industry shows how the MPV method can be used, and the subsequent benefits.
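The abstract does not give the MPV-index formula, so the sketch below only illustrates the spirit of the Phase A screening with an assumed ratio: expected savings from catching a module's defects at module level versus the cost of running a module-level verification. All names and numbers are hypothetical.

```python
# Hypothetical Phase A screening in the spirit of MPV (assumed formula).
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    defect_rate: float           # expected defects per unit
    repair_cost_product: float   # cost to find/repair a defect after assembly
    repair_cost_module: float    # cost to find/repair the same defect at module level
    mpv_setup_cost: float        # cost of running a module-level verification

def mpv_index(m: Module) -> float:
    """> 1 suggests module-level verification pays off (assumed criterion)."""
    saved = m.defect_rate * (m.repair_cost_product - m.repair_cost_module)
    return saved / m.mpv_setup_cost

for m in [Module("drive unit", 0.08, 900.0, 120.0, 25.0),
          Module("cover plate", 0.01, 150.0, 60.0, 25.0)]:
    verdict = "MPV" if mpv_index(m) > 1.0 else "product-level"
    print(f"{m.name}: index={mpv_index(m):.2f} -> {verdict}")
```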
Abstract:
In a global economy, manufacturers mainly compete on the cost efficiency of production, as the prices of raw materials are similar worldwide. Heavy industry has two big issues to deal with: on the one hand there is a lot of data which needs to be analyzed effectively, and on the other hand making big improvements via investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both of these problems, as they are in an excellent position to employ learning techniques on their massive resource of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and this is the objective of this article. The article investigates the characteristics of the most popular classifiers used in industry today: Support Vector Machines, Multilayer Perceptrons, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting are the main methods investigated in this work. Lessons from real-world implementations of these learners are also provided, together with guidance on when different learners can be expected to perform well. The importance of feature selection, and relevant selection methods in an industrial setting, are further investigated. Performance metrics are also discussed for the sake of completeness.
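As a hedged illustration of the comparison the article describes, the sketch below benchmarks the named classifiers on synthetic data with a simple univariate feature-selection step; the data, feature count and hyperparameters are stand-ins, not the article's setup.

```python
# Cross-validated comparison of the surveyed classifiers (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for historical production data: many features, few informative.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "Boosting": AdaBoostClassifier(random_state=0),
}

for name, model in models.items():
    # Keep only the k most relevant features, as an industrial pipeline might.
    pipe = make_pipeline(SelectKBest(f_classif, k=10), model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```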
Abstract:
Tests on printed circuit boards and integrated circuits are widely used in industry, resulting in reduced design time and project cost. The functional and connectivity tests for this type of circuit soon became a concern for manufacturers, leading to research into solutions that would be reliable, quick, cheap and universal. Initially, test schemes were based on a set of needles connected to the inputs and outputs of the integrated circuit board (bed-of-nails), to which signals were applied in order to verify whether the circuit met the specifications and could be assembled on the production line. With the development of projects, circuit miniaturization, improvements in production processes and materials, as well as the increase in the number of circuits, it became necessary to search for another solution. Thus Boundary-Scan Testing was developed, which operates at the boundary of integrated circuits and allows the connectivity of the input and output ports of a circuit to be tested. The Boundary-Scan Testing method was standardized by the IEEE in 1990 as the IEEE 1149.1 Standard, and since then a large number of manufacturers have adopted it in their products. The main objective of this master's thesis is the design of Boundary-Scan Testing in an image sensor in CMOS technology: analyzing the standard's requirements and the process used in prototype production, developing the design and layout of the Boundary-Scan, and analyzing the results obtained after production. Chapter 1 briefly presents the evolution of testing procedures used in industry, developments and applications of image sensors, and the motivation for using the Boundary-Scan Testing architecture. Chapter 2 explores the fundamentals of Boundary-Scan Testing and image sensors, starting with the Boundary-Scan architecture defined in the Standard, whose functional blocks are analyzed; this understanding is necessary to implement the design on an image sensor. It also explains the architecture of image sensors currently used, focusing on sensors with a large number of inputs and outputs. Chapter 3 describes the design of the implemented Boundary-Scan, analyzing the design and functions of the prototype, the software used, and the designs and simulations of the functional blocks of the implemented Boundary-Scan. Chapter 4 presents the layout process based on the design developed in Chapter 3, describing the software used for this purpose, the planning of the layout location (floorplan) and its dimensions, the layout of individual blocks, layout rule checks, the comparison with the final design and, finally, the simulation. Chapter 5 describes how the functional tests were performed to verify the design's compliance with the specifications of Standard IEEE 1149.1; these tests focused on the application of signals to the input and output ports of the produced prototype. Chapter 6 presents the conclusions drawn throughout the execution of the work.
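As a toy illustration of the IEEE 1149.1 principle described above (not the thesis design), the sketch below models a boundary-scan register as a serial shift chain: a test vector is shifted in bit by bit at TDI, sits at the pins, and is shifted back out at TDO so board-level connectivity can be checked.

```python
# Toy model of a boundary-scan register: one cell per I/O pin, shifted serially.
class BoundaryScanChain:
    def __init__(self, n_cells: int):
        self.cells = [0] * n_cells          # one boundary-scan cell per pin

    def shift(self, tdi_bit: int) -> int:
        """One TCK shift: a bit enters at TDI, the last cell falls out at TDO."""
        tdo_bit = self.cells[-1]
        self.cells = [tdi_bit] + self.cells[:-1]
        return tdo_bit

    def load_vector(self, bits):
        for b in bits:
            self.shift(b)

chain = BoundaryScanChain(8)
chain.load_vector([1, 0, 1, 1, 0, 0, 1, 0])   # shift a test pattern in
print(chain.cells)                            # the pattern now sits at the pins
# In EXTEST the pattern would drive the pins; capturing and shifting the pin
# states back out reveals opens and shorts in the board interconnect.
captured = [chain.shift(0) for _ in range(8)]
print(captured)                               # pattern recovered at TDO
```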
Abstract:
Tin oxide (SnO2) is a widely used compound in industry, and one of its uses is as a varistor. The current requirements of the technology demand strict control of the chemical purity and particle size of the raw material, hence the great interest at present in developing synthesis methods that meet these requirements. In this work, ceramic powders of the Sn-Co-Nb-Ti-Al system were synthesized using the controlled precipitation and polymeric precursor (Pechini) methods. The raw material obtained was characterized using X-ray diffraction (XRD), thermal analysis (DTA/TG) and scanning electron microscopy (SEM). The sintered samples showed good varistor behavior, with non-linear coefficient (alpha) values of about 22 and a breakdown field Er of about 2083 V/cm. (c) 2007 Elsevier Ltd. All rights reserved.
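As a short worked example of the reported non-linear coefficient: for a varistor obeying J = k·E^alpha, alpha follows from two points on the J-E curve, conventionally taken at 1 and 10 mA/cm². The second field value below is assumed so as to reproduce alpha ≈ 22.

```python
# Non-linear coefficient from two points on the J-E curve (illustrative E2).
import math

J1, J2 = 1e-3, 1e-2               # current densities [A/cm^2]
E1, E2 = 2083.0, 2313.0           # fields [V/cm]; E1 is the reported Er, E2 assumed
alpha = math.log10(J2 / J1) / math.log10(E2 / E1)
print(f"alpha = {alpha:.1f}")     # ~22, consistent with the reported value
```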
Abstract:
The evolution of the food sector has increased interest in the identification of new starches with distinct properties. Curcuma longa and Curcuma zedoaria rhizomes, which are already used in industry to obtain food coloring and pharmaceutical products, may become commercially interesting as starch raw materials. This work aimed to characterize the starch of these two Curcuma species. The results revealed that the rhizomes of the two species showed low dry matter and high starch contents. The amylose contents of the starches (22% for C. longa and 21% for C. zedoaria) were similar to that of potato starch. Microscopic analysis showed a flat triangular granule shape, with sizes of 20-30 μm for both starches. The final viscosity of C. longa starch was high (740 RVU) and its pasting temperature was 81 °C; for C. zedoaria the final viscosity was 427 RVU and the pasting temperature was 78 °C. These results differ from those of the natural starches in standard commercial use. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Includes bibliography
Abstract:
Includes bibliography
Abstract:
The automatic characterization of particles in metallographic images has become paramount, mainly because of the importance of quantifying such microstructures in order to assess the mechanical properties of materials commonly used in industry. This automated characterization can avoid problems related to fatigue and possible measurement errors. In this paper, computer techniques are used and assessed for accomplishing this crucial industrial goal in an efficient and robust manner; hence the use of the most actively pursued machine learning classification techniques. In particular, Support Vector Machine, Bayesian and Optimum-Path Forest based classifiers, as well as Otsu's method, which is commonly used in computer imaging to automatically binarize simple images and is used here to demonstrate the need for more complex methods, are evaluated in the characterization of graphite particles in metallographic images. The statistical analysis performed confirmed that these computer techniques are efficient solutions for accomplishing the aimed characterization. Additionally, the Optimum-Path Forest based classifier demonstrated overall superior performance, in terms of both accuracy and speed. © 2012 Elsevier Ltd. All rights reserved.
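As a minimal illustration of the Otsu baseline mentioned above, the sketch below binarizes a synthetic image (an invented stand-in for a metallographic micrograph, with dark circular "particles" on a bright matrix) using scikit-image.

```python
# Otsu thresholding on a synthetic metallographic-style image.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
image = np.full((128, 128), 200.0)            # bright matrix
for cy, cx in [(30, 40), (80, 90), (100, 30)]:
    yy, xx = np.ogrid[:128, :128]
    image[(yy - cy) ** 2 + (xx - cx) ** 2 < 64] = 60.0  # dark "particles"
image += rng.normal(0, 10, image.shape)       # measurement noise

t = threshold_otsu(image)                     # global Otsu threshold
particles = image < t                         # dark regions -> particle mask
print(f"threshold={t:.1f}, particle area fraction={particles.mean():.3f}")
```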