918 results for Additive Fertigungsverfahren, Wirtschaftlichkeit, Qualität, Pre-Process
Abstract:
Generative (additive) manufacturing today makes it possible to shorten the development and production times of prototypes, products, and tools. Besides this time saving, the geometric restrictions, which are negligible compared with conventional manufacturing processes, are of particular interest to users. This unique characteristic of generative manufacturing makes it possible to produce even complex geometries economically. A prerequisite for economical, error-minimized production is process preparation (pre-processing) that is as close to optimal as possible. Of particular interest here are the steps of part orientation, support structure generation, slicing, and build-space utilization. Although these points contribute substantially to quality and economy, the relevant findings are only insufficiently documented for the inexperienced user, which initially rules out maximally efficient production. Using various examples, the options for optimizing these pre-processing steps are explained to the user. In this context, the current research results of the Lehrstuhl Rechnereinsatz in der Konstruktion, Institute for Product Engineering at the University of Duisburg-Essen, on optimizing part orientation, variable slicing, and build-space utilization are presented.
Abstract:
Additive manufacturing processes are well suited to the economical production of parts in small to medium quantities, since no molds or special tooling are needed. The achievable properties are often already sufficient to allow use even in series applications. Combined with the technology's advantages in terms of high flexibility, both during design and during production, consistent use can yield financial savings along the entire product life cycle. There is often still uncertainty about the economics of these processes, because suitable methods for assessing them are lacking. Existing methods and tools for assessing the economics of conventional manufacturing processes cannot be applied directly to additive manufacturing. This article presents a method for the model-based mapping of a complete additive process chain, which is also intended to account for the interactions between the individual links of the process chain. This should make a concrete statement about the economics of additive manufacturing possible.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The latest domestic and international literature on natural gas pretreatment and liquefaction is surveyed, and the various process technologies are compared and analyzed. Natural gas pretreatment mainly comprises the removal of acid gases, the removal of water, and the removal of mercury and heavy hydrocarbons. The main liquefaction processes are the cascade refrigeration cycle, the mixed-refrigerant refrigeration cycle, and the expander refrigeration cycle, each with its own process characteristics. The cascade cycle has the lowest energy consumption and is currently the most efficient natural gas liquefaction cycle. Compared with the cascade cycle, the mixed-refrigerant cycle offers a simpler flowsheet, fewer machine sets, lower investment cost, and less stringent refrigerant purity requirements. The expander cycle can be started up and shut down quickly and simply, but its power consumption is relatively high.
Abstract:
In this paper, a novel algorithm for removing facial makeup disturbances, used as a face detection pre-process and based on high-dimensional image geometry, is proposed. After simulation and practical application experiments, the algorithm is analyzed theoretically. Its clear effect in removing facial makeup, and the advantages of face detection with this pre-process over face detection without it, are discussed. Furthermore, in our experiments with color images the proposed algorithm even yielded some surprising results.
Abstract:
Given that the bearing information on the slave UUV measured by the master UUV's observation system is of high precision while the range information is of low precision, a composite weight formed from a forgetting factor and a position weight is incorporated into the recursive least squares (RLS) algorithm for analyzing the slave UUV's navigation parameters. This avoids the drawback of the EKF algorithm's strict requirements on observation noise and overcomes data saturation. At the same time, the slave UUV's bearing information is pre-processed to speed up the convergence of the navigation parameter estimation. Simulation experiments demonstrate the effectiveness of the method.
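The RLS update with a forgetting factor that this abstract builds on can be sketched as follows; the composite position weight and the UUV-specific measurement model are not reproduced here, and all names and the synthetic constant-velocity demo are illustrative assumptions, not the paper's setup.

```python
# Recursive least squares (RLS) with a forgetting factor -- a minimal sketch.
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One RLS update. theta: parameter estimate (column vector),
    P: inverse-information (covariance) matrix, x: regressor vector,
    y: scalar measurement, lam: forgetting factor in (0, 1]."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + float(x.T @ P @ x))   # gain vector
    err = y - float(x.T @ theta)             # prediction error
    theta = theta + k * err                  # parameter correction
    P = (P - k @ x.T @ P) / lam              # covariance update; lam < 1
    return theta, P                          # keeps old data from saturating

# Synthetic demo: track y = a + b*t with noisy measurements.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, 0.5
theta = np.zeros((2, 1))
P = np.eye(2) * 1e3
for t in range(200):
    x = np.array([1.0, float(t)])
    y = a_true + b_true * t + rng.normal(0.0, 0.1)
    theta, P = rls_step(theta, P, x, y)
print(theta.ravel())   # close to [2.0, 0.5]
```

With lam < 1, old measurements are exponentially down-weighted, which is precisely what prevents the data-saturation effect the abstract mentions.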
Abstract:
Q. Shen. Rough feature selection for intelligent classifiers. LNCS Transactions on Rough Sets, 7:244-255, 2007.
Abstract:
In the age of e-business, many companies are faced with massive data sets that must be analysed to gain a competitive edge. These data sets are in many instances incomplete and quite often not of very high quality. Although statistical analysis can be used to pre-process these data sets, this technique has its own limitations. In this paper we present a system, and its underlying model, that can be used to test the integrity of existing data and pre-process the data into cleaner data sets to be mined. LH5 is a rule-based system, capable of self-learning, and is illustrated using a medical data set.
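The integrity-testing idea the abstract describes can be illustrated with a minimal rule-based filter; the records, field names, and rules below are invented for illustration and are not LH5's actual rules.

```python
# A toy rule-based integrity check: records violating any rule are screened out
# before mining. Fields and thresholds are hypothetical examples.
records = [
    {"age": 34, "systolic_bp": 120},
    {"age": -5, "systolic_bp": 118},      # violates the age rule
    {"age": 52, "systolic_bp": 400},      # violates the blood-pressure rule
]

rules = [
    ("age in 0..120",          lambda r: 0 <= r["age"] <= 120),
    ("systolic_bp in 60..250", lambda r: 60 <= r["systolic_bp"] <= 250),
]

# Keep only records that satisfy every integrity rule.
clean = [r for r in records if all(check(r) for _, check in rules)]
print(len(clean))   # → 1
```

A self-learning system such as LH5 would additionally refine the rule set from the data itself rather than relying on a fixed hand-written list.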
Abstract:
In recognizing 11 official languages, the 1996 South African Constitution provides a context for the management of diversity with important implications for the redistribution of wealth and power. The development and implementation of the language-in-education policies which might be expected to flow from the Constitution, however, have been slow and ineffective. One of the casualties of government procrastination has been African language publishing. In the absence of well-resourced bilingual education, most learners continue to be taught through the medium of English as a second language. Teachers are reluctant to use more innovative pedagogies without the support of adequate African language materials and publishers are cautious about producing such materials. Nonetheless, activity in this sector offers many opportunities for African language speakers. This paper explores the challenges and constraints for African language publishing for children and argues that market forces and language policy need to work in mutually reinforcing ways. Further progress is necessarily dependent on the political will to implement language-in-education policies that promote additive bilingualism and, in the process, guarantee sales for risk-averse publishers.
Abstract:
The evidence provided by modelled assessments of future climate impact on flooding is fundamental to water resources and flood risk decision making. Impact models usually rely on climate projections from global and regional climate models (GCM/RCMs). However, challenges in representing precipitation events at catchment-scale resolution mean that decisions must be made on how to appropriately pre-process the meteorological variables from GCM/RCMs. Here the impacts on projected high flows of differing ensemble approaches and of applying Model Output Statistics to RCM precipitation are evaluated while assessing climate change impact on flood hazard in the Upper Severn catchment in the UK. Various ensemble projections are used together with the HBV hydrological model with direct forcing and are also compared to a response surface technique. We consider an ensemble of single-model RCM projections from the current UK Climate Projections (UKCP09); multi-model ensemble RCM projections from the European Union's FP6 'ENSEMBLES' project; and a joint probability distribution of precipitation and temperature from a GCM-based perturbed physics ensemble. The ensemble distribution of results shows that flood hazard in the Upper Severn is likely to increase compared to present conditions, but the study highlights the differences between the results from different ensemble methods and the strong assumptions made in using Model Output Statistics to produce the estimates of future river discharge. The results underline the challenges in using the current generation of RCMs for local climate impact studies on flooding. Copyright © 2012 Royal Meteorological Society
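One common Model Output Statistics step of the kind discussed above is empirical quantile mapping of model precipitation onto observations. The sketch below shows only that generic idea on synthetic gamma-distributed data; the paper's actual MOS procedure and the distributions involved may differ.

```python
# Empirical quantile mapping: map each model value to the observed value at
# the same empirical quantile of the historical model distribution.
import numpy as np

def quantile_map(model_hist, obs, model_values):
    """Bias-correct model_values using historical model vs. observed data."""
    ranks = np.searchsorted(np.sort(model_hist), model_values)
    q = np.clip(ranks / len(model_hist), 0.0, 1.0)   # empirical quantiles
    return np.quantile(obs, q)                        # matching observed values

rng = np.random.default_rng(3)
model_hist = rng.gamma(2.0, 2.0, 1000)   # biased model precipitation (low)
obs = rng.gamma(2.0, 3.0, 1000)          # observed precipitation
corrected = quantile_map(model_hist, obs, model_hist)
# After correction, the model's mean is pulled toward the observed mean.
print(corrected.mean(), obs.mean(), model_hist.mean())
```

The "strong assumption" flagged in the abstract is visible here: the correction learned from the historical period is assumed to remain valid for future climate.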
Abstract:
In this work, the quantitative analysis of glucose, triglycerides and cholesterol (total and HDL) in both rat and human blood plasma was performed without any kind of pretreatment of the samples, using near infrared spectroscopy (NIR) combined with multivariate methods. For this purpose, different techniques and algorithms used to pre-process data, to select variables and to build multivariate regression models were compared with each other, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA), and the successive projections algorithm (SPA), amongst others. For the determinations in rat blood plasma samples, the variable selection algorithms showed satisfactory results both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) for the three analytes, especially for triglycerides and cholesterol-HDL. The RMSEP values for glucose, triglycerides and cholesterol-HDL obtained with the best PLS model were 6.08, 16.07 and 2.03 mg dL-1, respectively. In the other case, for the determinations in human blood plasma, the predictions obtained by the PLS models gave unsatisfactory results, with a nonlinear tendency and the presence of bias. ANN regression was therefore applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides and total cholesterol, for the best ANN models, were 13.20, 10.31 and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) is capable of quantifying the analytes (glucose, triglycerides and cholesterol) even when they are present in highly complex biological fluids such as blood plasma.
Identification and estimation of noise in DSL networks: a computational intelligence approach
Abstract:
This work proposes the use of computational intelligence techniques to identify and estimate the power of noise in Digital Subscriber Line (DSL) networks in real time. A methodology based on Knowledge Discovery in Databases (KDD) was used for real-time noise detection and estimation. KDD is applied to select, pre-process, and transform the data before the algorithms are applied in the data-mining stage. For noise identification, the traditional backpropagation algorithm based on Artificial Neural Networks (ANN) is applied to identify the predominant noise type during the collection of information from the user's modem and from the central office. For estimation, a linear regression algorithm and a hybrid algorithm combining fuzzy logic with linear regression were applied to estimate the power, in watts, of crosstalk noise in the network. The results show that computational intelligence algorithms such as ANNs are promising for noise identification in DSL networks, and that algorithms such as linear regression and fuzzy logic with linear regression (FRL) are promising for noise estimation in DSL networks.
Abstract:
In the present work, an instrument for measuring the patient-therapist bond is validated (Client Attachment to Therapist Scale, CATS; Mallinckrodt, Coble & Gantt, 1995); in addition, hypotheses on the relationships between self-efficacy expectation, general attachment style, the therapeutic relationship (therapy satisfaction), the patient-therapist bond, and therapy outcome in drug-dependent patients in inpatient post-acute treatment are tested. The instrument validation (one-week retest) included 119 patients from 2 clinics and 13 experts. The instrument's psychometric quality criteria are very satisfactory. The naturalistic therapy evaluation study (pre-, process-, and post-measurement: T0, T1, T2) involved 365 patients and 27 therapists from 4 clinics. In total, 44.1% of the patients completed their inpatient stay as planned. On the patient side, age and primary diagnosis prove to be predictors of therapy outcome; on the therapist side, the therapeutic orientation practiced. Self-efficacy expectation, general attachment style, patient-therapist bond, and therapy satisfaction are not suitable for predicting therapy outcome. Self-efficacy expectation, which is well below average at T0, increases over the intervention period, with a moderating effect of the patient-therapist bond. There is a high prevalence of insecure general attachment styles, which do not change over the course of therapy. Patients' satisfaction with the therapy increases from T1 to T2. Interrater concordance (patient/therapist) in assessing the patient-therapist bond increases slightly from T1 to T2. In contrast, therapy satisfaction is rated very differently by patients and therapists at both measurement points.
The good psychometric properties of the CATS argue for the superiority of this instrument over the therapy-satisfaction scale. The patient-therapist bond should therefore be examined with this instrument in further research on other patient populations, in order to allow generalizable statements about its validity.
Abstract:
The design of plastic profile extrusion dies is becoming increasingly complex, so that conventional manufacturing processes are reaching their limits in die manufacture. Feasible manufacture of arbitrarily designed dies is only possible with additive manufacturing. An especially promising process here is Selective Laser Melting, with which metal parts with mechanical properties identical to series parts can be produced without the need for part-specific tooling or downstream sintering processes. A disadvantage, however, is the relatively rough surface of additively manufactured parts. Against this background, the manufacture of a profile extrusion die by Selective Laser Melting, and the plastic profile surface quality that can be achieved with such dies, are investigated. For this purpose, profiles are extruded both with an additively manufactured die and with a conventionally milled sample of the same die geometry. For the additively manufactured die, a concept for surface finishing of the flow channel is required that can be applied to arbitrarily shaped geometries. Therefore, two different reworking processes are applied only to the die land of the flow channel. The comparison of the surface roughnesses shows that the additively manufactured die with a polished die land delivers the same surface quality as the conventional die.
Abstract:
The microarray technique is rather powerful, as it allows testing up to thousands of genes at a time, but this produces an overwhelming set of data files containing huge amounts of data, which is quite difficult to pre-process, separate, classify, and correlate in order to extract interesting conclusions. Modern machine learning, data mining, and clustering techniques based on information theory are needed to read and interpret the information content buried in those large data sets. The Independent Component Analysis method can be used to correct data affected by corruption processes or to filter out the uncorrectable data, and clustering methods can then group similar genes or classify samples. In this paper a hybrid approach is used to obtain a two-way unsupervised clustering of corrected microarray data.
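The two-step idea sketched in the abstract, ICA to decompose the expression matrix and clustering to group genes, can be illustrated as follows on synthetic data; the component and cluster counts are arbitrary assumptions, and the hybrid method of the paper is not reproduced.

```python
# ICA decomposition of a synthetic "expression matrix", then k-means on the
# gene loadings. Sizes and counts are illustrative only.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two latent expression programs mixed across 40 samples, 200 genes.
sources = rng.laplace(size=(200, 2))
mixing = rng.normal(size=(2, 40))
expression = sources @ mixing + rng.normal(0, 0.1, (200, 40))

# ICA recovers statistically independent components from the mixed data;
# each gene is then described by its loadings on those components.
ica = FastICA(n_components=2, random_state=0)
gene_components = ica.fit_transform(expression)

# Unsupervised grouping of genes in the ICA-reduced space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(gene_components)
print(sorted(set(labels)))   # → [0, 1, 2]
```

Running k-means a second time on the transposed matrix would cluster samples instead of genes, giving the "two-way" clustering the paper aims for.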