969 results for input method
Abstract:
A new method is used to estimate the sediment volumes of glacial valleys. The method is based on the concept of a sloping local base level and requires only a digital terrain model and the limits of the alluvial valleys as input data. The bedrock surface of the glacial valley is estimated by progressively excavating the digital elevation model (DEM) of the filled valley area, using an iterative routine that replaces the altitude of each DEM point by the mean value of its neighbors minus a fixed value. The result is a curved surface, quadratic in 2D. The bedrock surface of the Rhone Valley in Switzerland was estimated by this method using the free Shuttle Radar Topography Mission (SRTM) digital terrain model (~92 m resolution). The results are in good agreement with previous estimations based on seismic profiles and gravimetric modeling, with the exception of some particular locations, where the results of the present method and of the seismic interpretation differ slightly from those of the gravimetric data. This discrepancy may result from the presence of large buried landslides at the bottom of the Rhone Valley.
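The excavation rule lends itself to a few lines of array code. Below is a minimal sketch, assuming a NumPy DEM array and a boolean mask of the alluvial valley floor (both hypothetical inputs; this is not the authors' implementation):

```python
import numpy as np

def excavate_bedrock(dem, valley_mask, offset=1.0, n_iter=500):
    """Estimate a bedrock surface by repeatedly replacing each valley
    cell with the mean of its 4 neighbors minus a fixed offset."""
    z = dem.astype(float).copy()
    for _ in range(n_iter):
        # Mean of the four nearest neighbors of every cell.
        nbr_mean = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                    np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        # Lower valley cells only; cells outside the mask stay fixed
        # and pin the boundary of the excavated surface.
        z[valley_mask] = nbr_mean[valley_mask] - offset
        # Never rise above the observed (filled) surface.
        z = np.minimum(z, dem)
    return z

# Tiny demo on a synthetic 50 x 50 "filled valley" DEM.
dem = np.full((50, 50), 500.0)
mask = np.zeros_like(dem, dtype=bool)
mask[10:40, 10:40] = True
bedrock = excavate_bedrock(dem, mask, offset=0.5, n_iter=200)
print(f"max excavation depth: {(dem - bedrock).max():.1f} m")
```

Iterating this local rule converges toward the smooth, concave surface described in the abstract, with depth controlled by the fixed offset.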
Abstract:
The variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed so that the right decisions can be made already during the test run. The main obstacle to determining the boiler balance during test runs is the long process of chemically analyzing the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimizing the test rig to obtain the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä, Tampere. The calculation was implemented in the data management computer of the pilot plant's automation system. It is made in a Microsoft Excel environment, which provides a good basis and functions for handling large databases and calculations without any delicate programming. The automation system of the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new system, MetsoDNA, has the good data management properties necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found: either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation, and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system. A sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output, and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
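As a rough illustration of the second route, the sketch below back-calculates the fuel mass flow from the measured heat output, the fuel's lower heating value, and an assumed boiler efficiency and unburned carbon loss; all figures are illustrative, not plant data:

```python
def fuel_flow_from_heat_balance(q_out_kw, lhv_kj_per_kg, efficiency=0.90,
                                unburned_carbon_loss=0.02):
    """Fuel mass flow (kg/s) such that the useful heat released by the
    fuel matches the measured boiler heat output."""
    usable_fraction = efficiency * (1.0 - unburned_carbon_loss)
    return q_out_kw / (lhv_kj_per_kg * usable_fraction)

# Example: 4 MW output, a biomass fuel with LHV ~ 18 MJ/kg (assumed).
m_fuel = fuel_flow_from_heat_balance(4000.0, 18000.0)
print(f"estimated fuel flow: {m_fuel:.3f} kg/s")
```

The sensitivity result quoted in the abstract is visible here: the estimate scales directly with the measured heat output and inversely with the lower heating value.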
Abstract:
Requirements-related issues have been found to be the third most important risk factor in software projects and the biggest single reason for software project failures. This is not surprising, since requirements engineering (RE) practices have been reported deficient in more than 75% of all enterprises. A problem analysis of small, low-maturity software organizations revealed two central reasons for not starting process improvement efforts: lack of resources and uncertainty about the payback of process improvement efforts. In the constructive part of the study, a basic RE method, BaRE, was developed to provide an easy-to-adopt way to introduce basic systematic RE practices in small, low-maturity organizations. Based on the diffusion-of-innovations literature, thirteen desirable characteristics were identified for the solution, and the method was implemented in five key components: a requirements document template, requirements development practices, requirements management practices, tool support for requirements management, and training. The empirical evaluation of the BaRE method was conducted in three industrial case studies. In this evaluation, two companies established a completely new RE infrastructure following the suggested practices, while the third company continued developing its requirements document template based on the provided template and used it extensively in practice. The real benefits of adopting the method became visible in the companies within four to six months from the start of the evaluation project, and the two small companies in the project completed their improvement efforts with an input equal to about one person-month. The data collected in the case studies indicate that the companies implemented the new practices with few adaptations and little effort. It can thus be concluded that the constructed BaRE method is indeed easy to adopt and can help introduce basic systematic RE practices in small organizations.
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
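As an illustration of the underlying quantity, the sketch below builds a toy region-to-histogram-bin channel from pixel counts and computes its mutual information; the counts are invented for the example, and the split-and-merge machinery itself is not reproduced:

```python
import numpy as np

def mutual_information(joint):
    """I(R;B) in bits for a joint probability table p(region, bin)."""
    p = joint / joint.sum()
    pr = p.sum(axis=1, keepdims=True)   # marginal over regions
    pb = p.sum(axis=0, keepdims=True)   # marginal over histogram bins
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pr @ pb)[mask])).sum())

# Toy channel: 3 regions x 4 intensity bins (pixel counts per cell).
counts = np.array([[90.,  5.,  3.,  2.],
                   [ 4., 80., 10.,  6.],
                   [ 2.,  8., 70., 20.]])
print(f"I(R;B) = {mutual_information(counts):.3f} bits")
```

In the described algorithms, a candidate split is scored by the gain in this quantity and a candidate merge by the loss it would incur.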
Abstract:
The numerous methods for calculating potential or reference evapotranspiration (ETo or ETP) almost always do so for a 24-hour period, including the values of climatic parameters throughout the nocturnal period (daily averages). These nocturnal values have an almost nil effect on transpiration, which constitutes the main evaporative demand process under localized irrigation. The aim of this manuscript was to propose a rather simplified model for calculating diurnal daily ETo. It is an alternative approach based on the theoretical background of the Penman method that does not require values of the aerodynamic conductance of latent and sensible heat fluxes, nor data on wind speed and relative humidity of the air. The comparison between diurnal ETo values measured in high-precision weighing lysimeters and those estimated by either the Penman-Monteith method or the Simplified-Penman approach under study also points to a fairly consistent agreement among the potential demand calculation criteria. The Simplified-Penman approach was a feasible alternative for estimating ETo under the local meteorological conditions of the two field trials. Where the required input data are available, the method could be employed in other climatic regions for scheduling irrigation.
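For reference, the benchmark is the standard FAO-56 Penman-Monteith formulation; a sketch of its daily form is given below (this is the textbook equation the simplified approach is compared against, not the authors' simplified model):

```python
import math

def eto_penman_monteith(t_mean, rn, u2, rh_mean, g=0.0, pressure=101.3):
    """Daily FAO-56 ETo (mm/day).
    t_mean: mean air temperature (degC); rn, g: net radiation and soil
    heat flux (MJ m-2 day-1); u2: wind speed at 2 m (m/s); rh_mean: %."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # kPa
    ea = es * rh_mean / 100.0                                  # kPa
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # kPa/degC
    gamma = 0.000665 * pressure                                # kPa/degC
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative mid-summer day: 25 degC, Rn = 14 MJ/m2/day, 2 m/s, 60% RH.
print(f"ETo = {eto_penman_monteith(25.0, 14.0, 2.0, 60.0):.2f} mm/day")
```

The simplified approach in the abstract drops exactly the wind speed and humidity terms that appear in this reference formulation.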
Abstract:
High strength steel (HSS) has been in use in workshops since the 1980s. At that time the term HSS meant something different from the modern conception, as the maximum yield strength of HSSs has increased almost every year. There are three different ways to make HSS. The first and oldest method is QT (quenching and tempering), followed by the TMCP (thermomechanically controlled process) and DQ (direct quenching) methods. This thesis consists of two parts, the first of which introduces the research topic and discusses welded HSS structures by characterizing the most important variables. In the second part, the usability of welded HSS structures is examined through a set of laboratory tests. The results of this study explain the differences in the usability of welded HSSs made by the three different methods. They additionally indicate that using different HSSs in welded structures requires manufacturers to know what kind of HSS they are welding. As manufacturers use higher-strength HSSs in welded structures, the demands on welding rise as well. Therefore, during manufacturing, factors such as heat input, cooling time, and weld quality must be kept under careful observation.
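As a small illustration of one of the monitored quantities, the sketch below computes arc-welding heat input with the common Q = k·U·I/v convention (k is a process efficiency factor, e.g. about 0.8 for MAG welding in the EN 1011-1 convention); the numbers are illustrative, not from the thesis:

```python
def heat_input_kj_per_mm(voltage_v, current_a, travel_speed_mm_s, k=0.8):
    """Arc-welding heat input Q = k * U * I / (1000 * v) in kJ/mm."""
    return k * voltage_v * current_a / (1000.0 * travel_speed_mm_s)

# Example: 28 V, 260 A, 6 mm/s travel speed.
print(f"Q = {heat_input_kj_per_mm(28.0, 260.0, 6.0):.2f} kJ/mm")
```

For heat-sensitive HSSs, keeping this value (and the resulting cooling time) within the steelmaker's window is what the careful observation amounts to.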
Abstract:
The demand for more efficient manufacturing processes has been increasing in recent years. The cold forging process is presented as a possible solution, because it allows the production of parts with a good surface finish and good mechanical properties. Nevertheless, cold forming sequence design is very empirical and based on the designer's experience. Computational modeling of each forming stage by the finite element method can make sequence design faster and more efficient, reducing the use of conventional "trial and error" methods. In this study, commercial general-purpose finite element software (ANSYS) was used to model a forming operation. Models were developed to simulate the ring compression test and a basic forming operation (upsetting) that appears in most cold forging sequences. The simulated upsetting operation is one stage of an automotive starter part manufacturing process. Experiments were carried out to obtain the stress-strain curve of the material, the material flow during the simulated stage, and the required forming force. These experiments provided results used both as numerical model input data and for validation of the model results. The comparison between the experiments and the numerical results confirms the potential of the developed methodology for die filling prediction.
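One common way to condense measured flow-curve data into FE input is a power-law (Hollomon) fit; the sketch below does this by log-log least squares on invented data points, and is an assumption about the workflow, not the study's actual material model:

```python
import numpy as np

# Hypothetical true plastic strain / true stress (MPa) pairs from a
# compression test; real data would come from the experiments described.
strain = np.array([0.05, 0.10, 0.20, 0.30, 0.50])
stress = np.array([420., 480., 555., 605., 680.])

# Hollomon law sigma = K * eps**n is linear in log-log coordinates.
n, log_k = np.polyfit(np.log(strain), np.log(stress), 1)
k = np.exp(log_k)
print(f"Hollomon fit: K = {k:.0f} MPa, n = {n:.3f}")
# The fitted curve can then be tabulated as the hardening input of the
# finite element model (e.g. a multilinear hardening table in ANSYS).
```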
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, and this accuracy requirement poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for any robot with two free digital input ports. It follows the traditional approach of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined by this center axis; the last rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that the method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained, and the Barkhausen noise signal evaluated after recalibration indicated a correct calibration. However, future studies should use a more accurate manipulator, since the method employs the robot itself as a measuring device.
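The geometry can be sketched in a few lines: given the (hypothetical) points where the tool's center axis crosses the two parallel barrier planes, the axis direction and two tool rotations follow. The Ry·Rx decomposition below is one possible convention, not necessarily the one used in the thesis:

```python
import numpy as np

# Points where the tool axis crossed barrier planes 1 and 2, expressed
# in robot base coordinates (mm); values are invented for illustration.
p_lower = np.array([612.4, -80.2, 355.0])
p_upper = np.array([613.1, -79.6, 385.0])

axis = p_upper - p_lower
axis /= np.linalg.norm(axis)   # unit vector along the tool center axis

# Decompose as Ry(ry) * Rx(rx) applied to the nominal tool Z direction
# (0, 0, 1), so that the rotated Z lands on the measured axis.
rx = np.degrees(np.arctan2(-axis[1], np.hypot(axis[0], axis[2])))
ry = np.degrees(np.arctan2(axis[0], axis[2]))
print(f"axis = {axis.round(4)}, rx = {rx:.2f} deg, ry = {ry:.2f} deg")
```

The remaining rotation about the tool's own Z-axis is the one the thesis resolves separately, from the differing X- and Y-widths of the tool.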
Abstract:
The aim of this work was to calibrate the material properties, including strength and strain values, of the different material zones of ultra-high strength steel (UHSS) welded joints under monotonic static loading. UHSS is heat sensitive and is softened by the heat of welding; the affected region is the heat affected zone (HAZ). Cylindrical specimens were cut from welded joints of Strenx® 960 MC and Strenx® Tube 960 MH and examined by tensile testing, and the hardness values across the specimens' cross sections were measured. Using correlations between hardness and strength, initial material properties were obtained. Specimens of the same size, with the same material zones as the real specimens, were created and defined in the finite element method (FEM) software Abaqus 6.14-1, with loading and boundary conditions defined according to the tensile tests. Using the initial material properties derived from the hardness-strength correlations (true stress-strain values) as the main Abaqus input, FEM was used to simulate the tensile test. By comparing the Abaqus FEM results with the measured tensile test results, the initial material properties were revised and reused as software input until fully calibrated, so that the FEM and tensile test results deviated minimally. Two different types of S960 were used: 960 MC plates and a structural hollow section 960 MH X-joint, welded with Böhler™ X96 filler material. In welded joints, the following zones typically appear: weld (WEL), coarse-grained (HCG) and fine-grained (HFG) heat affected zone, annealed zone, and base material (BaM). The results showed that the HAZ is softened by the heat input during welding. In all specimens the softened zone's strength is decreased, making it the weakest zone, where fracture occurs under loading. The stress concentration of a notched specimen can represent the properties of the notched zone. With the calibrated material properties, obtained by compromising between the two hardness-strength correlations, the load-displacement diagram from the FEM model matches the experiments.
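As a sketch of the calibration starting point, the snippet below converts per-zone Vickers hardness into rough initial strength estimates using the common rule of thumb Rm ≈ 3.3 × HV (MPa), of the kind tabulated in ISO 18265; the hardness values are invented, not the measured ones:

```python
# Hypothetical Vickers hardness per material zone of the welded joint.
HV_BY_ZONE = {"BaM": 330, "HCG": 360, "HFG": 300, "annealed": 270, "WEL": 350}

def initial_strength(hv, factor=3.3):
    """Rough ultimate tensile strength (MPa) from Vickers hardness,
    using a generic hardness-strength conversion factor."""
    return factor * hv

for zone, hv in HV_BY_ZONE.items():
    print(f"{zone:>9}: HV {hv} -> Rm ~ {initial_strength(hv):.0f} MPa")
# These per-zone estimates seed the true stress-strain tables given to
# Abaqus, which are then iterated until the simulated and measured
# load-displacement curves agree.
```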
Abstract:
We apply to the Senegalese input-output matrix of 1990, disaggregated into formal and informal activities, a recently designed structural analysis method (Minimal Flow Analysis) that makes it possible to depict the direct and indirect production linkages existing between activities.
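The direct and indirect linkages such structural methods build on are conventionally read from the Leontief inverse of the technical-coefficients matrix; the toy three-sector illustration below uses made-up numbers, not the 1990 Senegalese table, and does not reproduce Minimal Flow Analysis itself:

```python
import numpy as np

# Intermediate flows z_ij between three hypothetical sectors, and the
# gross output x_j of each sector (same monetary units).
flows = np.array([[10., 20.,  5.],
                  [15.,  5., 10.],
                  [ 5., 10.,  5.]])
output = np.array([100., 80., 60.])

A = flows / output                    # technical coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse: direct + indirect
print(np.round(L, 3))                 # requirements per unit of final demand
```

Entry L[i, j] gives the total (direct plus indirect) output sector i must produce per unit of final demand for sector j, which is the kind of inter-activity linkage the abstract refers to.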
Abstract:
The World Organisation for Animal Health (OIE) is the international institution responsible for establishing the sanitary measures associated with trade in live animals. Zoning is a control method recommended by the OIE for certain infectious diseases, including avian influenza. Avian influenza outbreaks have been extremely costly for the poultry industry throughout the world. To evaluate the feasibility of this approach in Ontario, data on poultry production sites were provided by the province's poultry producer federations. Information on the industries associated with poultry production, namely feed mills, slaughterhouses, hatcheries, and egg grading stations, was obtained from several sources, including poultry industry representatives. Flow diagrams were created to understand the interactions between the production sites and their associated industries; these industries constitute the basic elements needed for zoning. This analysis made it possible to create a database of production inputs and outputs for each poultry farm site, as well as for the production sites of the industries associated with poultry farming. Using ArcGIS software, this information was merged with Statistics Canada geospatial data for Ontario and Quebec. The resulting database was used to carry out the zoning trials. Seventy-two trials were performed, of which four were retained because they minimized the industry's production losses to a similar degree. These trials show that the method used in this zoning study can reveal the production deficits and surpluses of the commercial poultry industry in Ontario. They can serve as a starting point for discussions among poultry industry stakeholders, given that cooperation and communication are essential to the success of zoning.
Abstract:
As a consequence of the empirical turn in educational science initiated by the international comparative school assessments, attention has increasingly shifted from the input of school teaching and learning to its results (output) and effects (outcomes). The core question is now: what actually comes out at the end of schooling and instruction? A fundamental prerequisite for results-oriented governance of school instruction is the formulation of educational standards. This article analyzes in more detail how educational standards can be linked with competence models and concrete tasks in the teaching of the subject "Politik & Wirtschaft" (politics and economics). Against the background of educational-theoretical ideas following Immanuel Kant, the literacy concept of the PISA study and Karl Mannheim's "documentary method" are applied.
Abstract:
Soil microorganisms have evolved two possible mechanisms for their uptake of organic N: the direct route and the mobilization-immobilization-turnover (MIT) route. In the direct route, simple organic molecules are taken up directly into the cell via various mechanisms. In the MIT route, deamination occurs outside the cell and all N is mineralized to NH4+ before assimilation. A better understanding of the mechanisms controlling the different uptake routes of soil microorganisms under different environmental conditions is crucial for understanding the mineralization of organic material in soil. For the first experiment, we incubated soil samples from the long-term trial in Bad Lauchstädt with corn residues of different C to N ratios and with inorganic N for 21 days at 20 °C. Under the assumption that all added amino acids were taken up or mineralized, the direct uptake route was more important in soil amended with corn residues with a wide C to N ratio. After 21 days of incubation, the direct uptake of the added amino acids increased in the order: corn residue with a C to N ratio of 40 plus (NH4)2SO4, and no addition (control) (69% and 68%, respectively) < C to N ratio of 20 (73%) < C to N ratio of 40 (95%). In all treatments, the proportion of the added amino acids that was mineralized increased with time, indicating that the MIT route became more important over time. To investigate the effects of soil depth on the N uptake route of soil microorganisms (experiment II), soil samples from two depths (0-5 cm; 30-40 cm) were incubated with corn residues of different C to N ratios and with inorganic N for 21 days at 20 °C and 60% water-holding capacity (WHC). The addition of corn residue resulted in a marked increase of protease activity at both depths due to induction by the added substrate. Addition of corn residue with a wide C to N ratio resulted in a significantly greater share of direct uptake (97% and 94%) than no residue addition (85% and 80%), addition of residue with a small C to N ratio (90% and 84%), or inorganic N (91% and 79%, in the surface soil and subsoil, respectively), suggesting that under conditions of sufficient mineralizable N (C to N ratio of 20) or increased NH4+ concentrations, the enzyme system involved in direct uptake is slightly repressed. Substrate additions initially increased direct uptake significantly more in the surface soil than in the subsoil. As a large proportion of the organic N input into soil is in the form of proteinaceous material, the deamination of amino acids is a key reaction of the MIT route, and the enzyme amino acid oxidase therefore contributes to extracellular N mineralization in soil. The objective of experiment III was to adapt a method for determining amino acid oxidase activity in soil. Detection via synthetic fluorescent Lucifer Yellow derivatives of the amino acid lysine is possible in soil. However, it was not possible to find a substrate concentration at which the reaction rate is independent of substrate concentration, and we were therefore not able to develop a valid soil enzyme assay.
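The missing saturation plateau can be illustrated with Michaelis-Menten kinetics: the assay rate only becomes independent of substrate once S is well above Km. The sketch below uses invented parameters, not measured ones:

```python
def mm_rate(s, vmax=1.0, km=0.5):
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

# The rate approaches Vmax (v/Vmax -> 1) only at saturating S >> Km.
for s in [0.1, 0.5, 2.0, 10.0, 50.0]:
    print(f"S = {s:5.1f} -> v/Vmax = {mm_rate(s):.3f}")
```

A valid fixed-concentration enzyme assay requires working on this plateau; the abstract reports that no tested substrate concentration reached it.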
Abstract:
The purpose of this paper is to design a control law for continuous systems with Boolean inputs that allows the output to track a desired trajectory. Such systems are controlled by switching elements and have found increasing use in the electrical industry: power supplies include such systems, and a power converter is one example. For instance, in power electronics the control variable is the switching OFF and ON of components such as thyristors or transistors. In this paper, a method is proposed for designing a state-space control law for such systems. The approach is demonstrated in simulation on the control of an electronic circuit.
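A minimal sketch of this kind of Boolean-input tracking is an on/off (hysteresis) switching law driving a first-order plant; the plant, gains, and reference below are illustrative, not the paper's state-space design:

```python
import math

dt, tau, gain = 1e-3, 0.05, 1.0   # time step, plant time constant, input gain
y, u, band = 0.0, 0, 0.02         # plant state, Boolean input, hysteresis band
log = []

for k in range(2000):
    t = k * dt
    ref = 0.5 + 0.3 * math.sin(2 * math.pi * 2.0 * t)  # desired trajectory
    # Hysteresis switching: turn ON below the band, OFF above it.
    if y < ref - band:
        u = 1
    elif y > ref + band:
        u = 0
    # First-order plant dy/dt = (gain * u - y) / tau, Euler-integrated.
    y += dt * (gain * u - y) / tau
    log.append((t, ref, y, u))

print(f"final tracking error: {abs(log[-1][1] - log[-1][2]):.4f}")
```

The Boolean input chatters ON and OFF so that the continuous output rides within the hysteresis band around the reference, which is the behavior a power converter's switching components realize in practice.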