996 results for process calculation


Relevance:

60.00%

Publisher:

Abstract:

The aim of the work was to develop, by applying activity-based costing, a calculation model that makes it possible to study the optimal way for a postal service company to use a virtual sorting system from the viewpoint of the total productivity of its production process. For bundling, the result was a theoretical calculation model, whereas for transport unitisation the aim was to build a model that follows the real process as closely as possible and whose results are easy to examine with different cost and consignment parameters. In addition to the production use of the virtual sorting system, the work was also intended to carry out a preliminary analysis of how the system could provide other added value to postal service production. The analysis identified preliminary opportunities in the load planning of service production, in address-quality-dependent functions, and in mail induction and process control. Without taking a position on the feasibility of these opportunities, the central result of the analysis was nevertheless that many avenues for exploiting the system remain unexplored.
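
As a generic illustration of the activity-based costing idea applied here (a minimal sketch with invented activities and figures, not the thesis' actual model), costs are first pooled per activity, converted to a cost per driver unit, and then driven to the production options being compared:

    def activity_unit_costs(activity_costs, driver_volumes):
        """Cost per driver unit for each activity, e.g. cost per sorted item."""
        return {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

    def cost_of_option(driver_usage, unit_costs):
        """Total cost of one production option, given its activity-driver usage."""
        return sum(unit_costs[a] * amount for a, amount in driver_usage.items())

    # Example with invented figures: machine sorting vs. manual bundling of a batch.
    unit = activity_unit_costs({"sorting": 12000.0, "bundling": 4000.0},
                               {"sorting": 100000, "bundling": 20000})
    print(cost_of_option({"sorting": 3500}, unit))   # 420.0
    print(cost_of_option({"bundling": 900}, unit))   # 180.0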

Relevance:

60.00%

Publisher:

Abstract:

In this thesis quark-antiquark bound states are considered using a relativistic two-body equation for Dirac particles. The mass spectrum of mesons includes bound states involving two heavy quarks or one heavy and one light quark. In order to analyse these states within a unified formalism, it is desirable to have a two-fermion equation that reduces to the one-body Dirac equation with a static interaction for the light quark when the other particle's mass tends to infinity. A suitable two-body equation has been developed by Mandelzweig and Wallace. This equation is solved in momentum space and is used to describe the complete spectrum of mesons. The potential used in this work contains a short-range one-gluon-exchange interaction and long-range linear confining and constant terms. The model is used to investigate the decay processes of heavy mesons. Semileptonic decays are more tractable because there are no final-state interactions between the leptons and hadrons that would otherwise complicate the situation. Studies of B and D meson decays help in understanding the nonperturbative strong interactions of heavy mesons, which in turn is useful for extracting the details of the weak interaction process. Calculations of the form factors for these semileptonic decays of pseudoscalar mesons are also presented.
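
For reference, a short-range one-gluon-exchange term combined with a linear confining term and a constant is commonly written in the Cornell-like radial form below; the thesis' exact parameters and Dirac (vector/scalar) structure are not reproduced here:

    V(r) = -\frac{4}{3}\,\frac{\alpha_s}{r} + b\,r + c

where the Coulomb-like first term models one-gluon exchange, b is the slope of the linear confinement, and c is the constant shift.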

Relevance:

40.00%

Publisher:

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine for specific components whether, in their rest position as well as during a motion, they maintain a prescribed safety clearance to the surrounding components. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know precisely which regions of the components violate the safety clearance.

In this thesis we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety clearance. The objects are each given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this thesis we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests perform significantly better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach always proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is above all important to take the required safety clearance properly into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognising primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. Our benchmarks show that with these solutions we are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimised data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to memory optimisation of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighbouring cells often have similar contents. Our approach is able to losslessly compress the memory required for the cell contents of a uniform grid, based on the redundant cell contents, to one fifth of its previous size and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path-planning problems.
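
As a minimal illustration of the kind of primitive-level clearance test discussed above (a plain distance-bound sketch, not the thesis' dual-space tolerance test; all names are illustrative), a conservative bounding-box check can accept pairs early, and a cheap vertex-distance check can flag violations without running an exact triangle-triangle distance routine:

    import numpy as np

    def aabb_distance(tri_a, tri_b):
        """Distance between the axis-aligned bounding boxes of two triangles (3x3 arrays)."""
        lo_a, hi_a = tri_a.min(axis=0), tri_a.max(axis=0)
        lo_b, hi_b = tri_b.min(axis=0), tri_b.max(axis=0)
        gap = np.maximum(0.0, np.maximum(lo_a - hi_b, lo_b - hi_a))  # per-axis gap, 0 where boxes overlap
        return np.linalg.norm(gap)

    def violates_clearance(tri_a, tri_b, clearance, exact_distance=None):
        """True if the two triangles may be closer than `clearance`."""
        # Early accept: boxes farther apart than the clearance -> no violation possible.
        if aabb_distance(tri_a, tri_b) > clearance:
            return False
        # Early detection: some vertex pair already closer than the clearance -> certain violation.
        pairwise = np.linalg.norm(tri_a[:, None, :] - tri_b[None, :, :], axis=2)
        if pairwise.min() < clearance:
            return True
        # Otherwise an exact triangle-triangle distance (vertex-face and edge-edge
        # cases) is needed; without one, answer conservatively.
        return exact_distance(tri_a, tri_b) < clearance if exact_distance else True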

Relevance:

30.00%

Publisher:

Abstract:

A modified method for the calculation of the normalized faradaic charge (q_fN) is proposed. The method involves simulating an oxidation process by cyclic voltammetry, employing potentials in the oxygen evolution reaction region. The method is applicable to organic species whose oxidation is not manifested as a defined oxidation peak at conductive oxide electrodes. The variation of q_fN for electrodes of nominal composition Ti/RuxSn1-xO2 (x = 0.3, 0.2 and 0.1), Ti/Ir0.3Ti0.7O2 and Ti/Ru0.3Ti0.7O2 in the presence of various concentrations of formaldehyde was analyzed. It was observed that the electrodes containing SnO2 are the most active for formaldehyde oxidation. Subsequently, in order to test the validity of the proposed model, galvanostatic electrolyses (40 mA cm^-2) of two different formaldehyde concentrations (0.10 and 0.01 mol dm^-3) were performed. The results are in agreement with the proposed model and indicate that this new method can be used to determine the relative activity of conductive oxide electrodes. In agreement with previous studies, it can be concluded that not only the nature of the electrode material, but also the organic species in solution and its concentration, are important factors to be considered in the oxidation of organic compounds.
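
For orientation only, the sketch below integrates a cyclic-voltammetry current trace into a charge and forms a normalized ratio; the exact normalisation behind q_fN is not specified in this abstract, so the ratio used here is an assumed placeholder, not the authors' definition:

    import numpy as np

    def voltammetric_charge(potential_V, current_A, scan_rate_V_per_s):
        """Charge passed over a potential sweep: q = (1/v) * integral of i dE."""
        return np.trapz(current_A, potential_V) / scan_rate_V_per_s

    def normalized_faradaic_charge(q_with_organic, q_supporting_electrolyte):
        """Assumed normalisation: excess charge with the organic present,
        referenced to the charge recorded in the supporting electrolyte alone."""
        return (q_with_organic - q_supporting_electrolyte) / q_supporting_electrolyte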

Relevance:

30.00%

Publisher:

Abstract:

Confined flows in tubes with permeable walls are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane flow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the steady laminar flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, while the concentration at the tube wall was allowed to vary with axial position. A computational methodology using global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared with others in the literature.
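
For reference, the parabolic species-conservation (convective-diffusive) equation referred to above, written in its standard form for axisymmetric tube flow with axial diffusion neglected (the paper's exact notation may differ), is

    u\,\frac{\partial C}{\partial z} + v\,\frac{\partial C}{\partial r} = \frac{D}{r}\,\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right)

where u and v are the axial and radial velocity components, C is the species concentration, and D is the diffusion coefficient.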

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate complete characterization of a two-qubit entangling process-a linear optics controlled-NOT gate operating with coincident detection-by quantum process tomography. We use a maximum-likelihood estimation to convert the experimental data into a physical process matrix. The process matrix allows an accurate prediction of the operation of the gate for arbitrary input states and a calculation of gate performance measures such as the average gate fidelity, average purity, and entangling capability of our gate, which are 0.90, 0.83, and 0.73, respectively.
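
As a sketch of how a reconstructed process matrix predicts the gate's action on an arbitrary input (standard chi-matrix formalism; the paper's operator-basis ordering and normalisation are assumptions here), the output state is rho_out = sum_{mn} chi[m,n] E_m rho E_n^dagger with {E_m} the 16 two-qubit Pauli operators:

    import numpy as np
    from itertools import product

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])
    PAULIS = [np.kron(a, b) for a, b in product([I, X, Y, Z], repeat=2)]  # 16 two-qubit operators

    def apply_process(chi, rho_in):
        """Predict the output state of the characterised process for any input state."""
        rho_out = np.zeros_like(rho_in, dtype=complex)
        for m, Em in enumerate(PAULIS):
            for n, En in enumerate(PAULIS):
                rho_out += chi[m, n] * Em @ rho_in @ En.conj().T
        return rho_out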

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a methodology supported by the knowledge discovery in databases (KDD) process, in order to find the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: analysis of the real database, data pre-processing, application of DM algorithms, and finally interpretation of the discovered knowledge. To validate the proposed methodology, a case study based on real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the failure probabilities of the electrical components needed to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
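
As a purely illustrative sketch of how fuzzy logic can express such climate-related uncertainty (this is not the paper's rule base; values and names are invented), a component's failure probability can be carried as a triangular fuzzy number and defuzzified when a crisp value is needed:

    def triangular_fuzzy_probability(low, mode, high):
        """Represent an uncertain failure probability as a triangular fuzzy number."""
        assert 0.0 <= low <= mode <= high <= 1.0
        return (low, mode, high)

    def defuzzify_centroid(tfn):
        """Centroid of a triangular membership function."""
        low, mode, high = tfn
        return (low + mode + high) / 3.0

    # Example: transformer failure probability under adverse weather (invented figures).
    p_transformer = triangular_fuzzy_probability(0.02, 0.05, 0.12)
    print(defuzzify_centroid(p_transformer))  # about 0.063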

Relevance:

30.00%

Publisher:

Abstract:

The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
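
A minimal sketch of the underlying idea (an assumed homogeneous-source model, not the authors' exact formulation): for n identical connections whose instantaneous bit rate takes one of a few discrete values with known probabilities, the vector of counts per rate is multinomial, so the probability of each aggregate bandwidth can be computed directly instead of by repeated per-connection convolutions:

    from itertools import product
    from scipy.stats import multinomial

    def aggregate_bandwidth_pmf(n_connections, rates, probs):
        """Return {aggregate bandwidth: probability} for n identical connections."""
        pmf = {}
        for counts in product(range(n_connections + 1), repeat=len(rates)):
            if sum(counts) != n_connections:
                continue
            total_rate = sum(c * r for c, r in zip(counts, rates))
            p = multinomial.pmf(counts, n=n_connections, p=probs)
            pmf[total_rate] = pmf.get(total_rate, 0.0) + p
        return pmf

    # Example: 10 on/off sources with 2 Mbit/s peak rate and 0.3 activity factor.
    print(aggregate_bandwidth_pmf(10, rates=[0.0, 2.0], probs=[0.7, 0.3]))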

Relevance:

30.00%

Publisher:

Abstract:

Throughout the history of electrical engineering education, vector and phasor diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving equation systems by means of numerical methods. As a consequence, diagrams have been pushed into the academic background: although explained theoretically, they are not used in a practical way within specific examples. This may work against students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. At the same time, the modification allows forecasting of the obsolescence of this behavior and of the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process, presenting comparative undergraduate results over three academic years.
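
For context (standard transmission-line theory stated here for reference, not taken from the article), the Perrine-Baum construction is built on the two-port relation between sending-end and receiving-end quantities,

    V_S = A\,V_R + B\,I_R, \qquad I_S = C\,V_R + D\,I_R

where V_S, I_S and V_R, I_R are the sending- and receiving-end voltage and current phasors and A, B, C, D are the generalized line constants; the diagram represents graphically how these phasors evolve as the load changes.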

Relevance:

30.00%

Publisher:

Abstract:

In studies of the natural history of HIV-1 infection, the time scale of primary interest is the time since infection. Unfortunately, this time is very often unknown for HIV infection, and using the follow-up time instead of the time since infection is likely to provide biased results because of onset confounding. Laboratory markers such as the CD4 T-cell count carry important information concerning disease progression and can be used to predict the unknown date of infection. Previous work on this topic has made use of only one CD4 measurement or has based the imputation on incident patients only. However, because of considerable intrinsic variability in CD4 levels, and because incident cases differ from prevalent cases, back-calculation based on only one CD4 determination per person, or on characteristics of the incident sub-cohort, may provide unreliable results. Therefore, we propose a methodology based on repeated individual CD4 T-cell marker measurements that uses both incident and prevalent cases to impute the unknown date of infection. Our approach uses joint modelling of the time since infection, the CD4 time path, and the drop-out process. This methodology has been applied to estimate the CD4 slope and impute the unknown date of infection in HIV patients from the Swiss HIV Cohort Study. A procedure based on the comparison of different slope estimates is proposed to assess the goodness of fit of the imputation. Results of simulation studies indicated that the imputation procedure worked well, despite the intrinsic high volatility of the CD4 marker.
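
As a much-simplified illustration of the back-calculation idea (a single-subject linear extrapolation on the square-root scale; the paper's joint model of infection time, CD4 path and drop-out is far richer, and the assumed CD4 level at seroconversion below is invented):

    import numpy as np

    def impute_infection_time(times_years, cd4_counts, cd4_at_infection=1000.0):
        """Return estimated infection time on the same time scale as times_years
        (negative values mean 'before the first measurement')."""
        y = np.sqrt(np.asarray(cd4_counts, dtype=float))   # variance-stabilising scale
        slope, intercept = np.polyfit(times_years, y, 1)   # individual linear decline
        if slope >= 0:
            raise ValueError("non-declining CD4 trajectory; cannot back-extrapolate")
        return (np.sqrt(cd4_at_infection) - intercept) / slope

    # Example: three follow-up measurements at 0, 1 and 2 years.
    print(impute_infection_time([0.0, 1.0, 2.0], [520, 470, 430]))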

Relevance:

30.00%

Publisher:

Abstract:

It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1 589 welded parts, 4 257 separate welds, and a total welded length of 3 188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods, and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
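
As a minimal sketch of the regression step described above (the predictor variables and functional form here are assumptions, not the thesis' actual welding-time models), a linear model can be fitted to observed welding times by ordinary least squares and then reused by the designer for cost estimation:

    import numpy as np

    def fit_welding_time_model(weld_length_m, throat_mm, welding_time_min):
        """Fit time = b0 + b1*length + b2*throat by ordinary least squares."""
        X = np.column_stack([np.ones(len(weld_length_m)), weld_length_m, throat_mm])
        coeffs, *_ = np.linalg.lstsq(X, np.asarray(welding_time_min, dtype=float), rcond=None)
        return coeffs  # (b0, b1, b2)

    def predict_welding_time(coeffs, weld_length_m, throat_mm):
        b0, b1, b2 = coeffs
        return b0 + b1 * weld_length_m + b2 * throat_mm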

Relevance:

30.00%

Publisher:

Abstract:

The aim of the work was to describe and introduce, for a sawmill, a method for calculating the profitability of individual sawing batches, and to build a calculation model to support the method. After the basic concepts of sawing, the thesis presents the sawmill's production process, described on the basis of the literature and expert interviews. Next, the expected benefits and effects of the calculation method were surveyed. The theory of cost accounting was reviewed from literature sources specifically with this calculation method in mind. In addition, the calculation and information systems used at the Uimaharju sawmill and related to the calculation were presented. At present, the sawmill has no method for calculating the result of an individual sawing batch. With small changes to the sawmill's information system and process machinery, a sawing batch can be carried through the process so that production data can be assigned to it at every stage of the process. Using the data obtained from the different stages, it is possible to determine precisely which products the sawing batch yielded and how much production resource was consumed in producing them. Production data and cost data are fed into the calculation model, which returns the financial result of the sawing batch. As a recommended action, further research is proposed on the automatic collection of production data in order to eliminate manual work and errors. With relatively small investments, the production data for every sawing batch can be collected fully automatically. In addition, the calculation model developed here should be replaced by an application that makes better use of the existing information systems and removes the manual work phase from the calculation.
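
A minimal illustration of the batch-level calculation described above (field names and figures are invented, not the thesis' model): the financial result of one sawing batch is the revenue from the products it yielded minus the cost of the production resources it consumed.

    def batch_result(products, resources):
        """products: {product: (volume_m3, price_per_m3)};
        resources: {resource: (quantity, unit_cost)}."""
        revenue = sum(v * p for v, p in products.values())
        cost = sum(q * c for q, c in resources.values())
        return revenue - cost

    # Example with made-up figures.
    print(batch_result(
        {"centre boards": (42.0, 210.0), "side boards": (18.0, 160.0)},
        {"logs": (120.0, 55.0), "sawing hours": (6.5, 180.0), "drying kWh": (900.0, 0.08)},
    ))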

Relevance:

30.00%

Publisher:

Abstract:

APROS (Advanced Process Simulation Environment) is a computer simulation program developed to simulate thermal-hydraulic processes in nuclear and conventional power plants. Earlier research at VTT Technological Research Centre of Finland had found that the current version of APROS produces inaccurate simulation results for a certain case of loop seal clearing. The objective of this Master's thesis is to find and implement an alternative method for calculating the rate of stratification in APROS, which was found to be the reason for the inaccuracies. A brief literature study was performed and a promising candidate for the new method was found. The new method was implemented in APROS and tested against experiments and simulations from two test facilities, as well as against the current version of APROS. The simulation results with the new version were partially conflicting: in some cases the new method was more accurate than the current version, in others the current method was better. Overall, the new method can be assessed as an improvement.

Relevance:

30.00%

Publisher:

Abstract:

Climate change has given an impetus to research and develop new technologies that significantly reduce carbon dioxide emissions from energy production in the developed countries. The major pollution source, fossil fuels, will be used as an energy source for many decades, which creates demand for carbon capture and storage technologies. Over recent years many new technologies have been developed, and one of the most promising is calcium looping, a post-combustion carbon capture process that uses a carbonation-calcination cycle to capture carbon dioxide from the flue gas of a combustion process. The first pilot plant for the calcium-looping process has been built in Oviedo, Spain. In this study, a three-dimensional model has been created for the calciner, which is one of the two fluidized bed reactors needed for the process. The calciner is a regenerator where the captured carbon dioxide is removed from the calcium material and then collected after the reactor. The thesis concentrates on creating the calciner 3D model frame with the CFB3D program and on testing the model with two different example cases. The input parameters and calciner geometry used are the Oviedo pilot plant design parameters. The calculation results give information about the process and show that the pilot plant calciner should perform as planned. This Master's thesis was done as part of the EU FP7 project CaOling.
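
For reference, the carbonation-calcination cycle underlying the process is the standard reversible reaction

    \mathrm{CaO(s)} + \mathrm{CO_2(g)} \;\rightleftharpoons\; \mathrm{CaCO_3(s)}, \qquad \Delta H^{\circ}_{298} \approx -178\ \mathrm{kJ/mol}

where the exothermic forward reaction captures CO2 in the carbonator and the endothermic reverse reaction, carried out in the calciner modelled here, regenerates CaO and releases a concentrated CO2 stream.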