909 results for process concentrated work


Relevance: 30.00%

Abstract:

With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and to develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, we question how this phenomenon occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In order to analyse the ways in which the creative process may develop, we ought to take into consideration the fact that participants are dealing with quite an atypical situation. Firstly, in these cases socialization takes place among individuals belonging to a geographically dispersed workplace, where interpersonal relationships are mediated by the computer, and where trust must be developed among persons who have never met one another. Participants not only have multiple addresses and locations, but above all different nationalities, cultures, attitudes, ways of thinking, working patterns, and languages. Therefore, the central research question of this thesis is as follows: “How does the creative process unfold in globally distributed teams?” Taking a qualitative approach, we used the case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas, for the brands Volvo and Renault Trucks, teams that were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, together with intrinsic motivation and the task itself, lay down the necessary foundations for the development of the creative process in globally distributed teams.

Relevance: 30.00%

Abstract:

Radiometals play an important role in nuclear medicine, being involved in diagnostic or therapeutic agents. In the present work the radiochemical aspects of the production and processing of very promising radiometals of the third group of the periodic table, namely radiogallium and the radiolanthanides, are investigated. The 68Ge/68Ga generator (68Ge, T½ = 270.8 d) provides a cyclotron-independent source of positron-emitting 68Ga (T½ = 68 min), which can be used for coordinative labelling. However, for the labelling of biomolecules via bifunctional chelators, particularly if legal aspects of the production of radiopharmaceuticals are considered, 68Ga(III) as initially eluted needs to be pre-concentrated and purified. The first experimental chapter describes a system for the simple and efficient handling of 68Ge/68Ga generator eluates, with a cation-exchange micro-chromatography column as its main component. Chemical purification and volume concentration of 68Ga(III) are carried out in hydrochloric acid - acetone media. Finally, generator-produced 68Ga(III) is obtained with excellent radiochemical and chemical purity, in a minimised volume, in a form directly applicable to the synthesis of 68Ga-labelled radiopharmaceuticals. For labelling with 68Ga(III), the somatostatin analogue DOTA-octreotides (DOTATOC, DOTANOC) are used. 68Ga-DOTATOC and 68Ga-DOTANOC were successfully used to diagnose human somatostatin receptor-expressing tumours with PET/CT. Additionally, the proposed method was adapted for the purification and medical utilisation of the cyclotron-produced SPECT gallium radionuclide 67Ga(III). The second experimental chapter discusses the diagnostic radiolanthanide 140Nd, produced by irradiation of macro amounts of natural CeO2 and Pr2O3 in the natCe(3He,xn)140Nd and 141Pr(p,2n)140Nd nuclear reactions, respectively. With the produced and processed 140Nd, an efficient 140Nd/140Pr radionuclide generator system has been developed and evaluated.
The principle of the radiochemical separation of the mother and daughter radiolanthanides is based on the physical-chemical transitions (hot-atom effects) of 140Pr following the electron capture decay of 140Nd. The mother radionuclide 140Nd(III) is quantitatively absorbed on a solid-phase matrix in the chemical form of 140Nd-DOTA-conjugated complexes, while the daughter nuclide 140Pr is generated as an ionic species. With a very high elution yield and satisfactory chemical and radiolytic stability, the system is able to provide the short-lived positron-emitting radiolanthanide 140Pr for PET investigations. In the third experimental chapter, analogously to the physical-chemical transitions after the radioactive decay of 140Nd in 140Pr-DOTA, the rupture of the chemical bond between a radiolanthanide and the DOTA ligand after a thermal neutron capture reaction (Szilard-Chalmers effect) was evaluated for the production of relevant radiolanthanides with high specific activity at the TRIGA II Mainz nuclear reactor. A physical-chemical model was developed and first quantitative data are presented. As an example, 166Ho could be produced with a specific activity higher than its limiting value for TRIGA II Mainz, namely about 2 GBq/mg versus 0.9 GBq/mg. The free 166Ho(III) produced in situ does not form a 166Ho-DOTA complex and can therefore be separated from the inactive 165Ho-DOTA material. The analysis of the experimental data shows that radionuclides with half-life T½ < 64 h can be produced at the TRIGA II Mainz nuclear reactor with a specific activity higher than is achievable by irradiation of simple targets, e.g. oxides.
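The parent-daughter ingrowth that makes such generator systems work follows the Bateman equation. A minimal sketch: the half-lives are those quoted above for the 68Ge/68Ga pair, while the function and variable names are ours.

```python
import math

def daughter_activity_fraction(t, t_half_parent, t_half_daughter):
    """Bateman equation: daughter activity relative to the parent's
    initial activity, assuming a freshly eluted generator (no daughter
    present at t = 0). All times must share the same unit."""
    lp = math.log(2) / t_half_parent
    ld = math.log(2) / t_half_daughter
    return ld / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

# 68Ge/68Ga generator: T1/2(68Ge) = 270.8 d, T1/2(68Ga) = 68 min
T_GE = 270.8 * 24 * 60   # parent half-life in minutes
T_GA = 68.0              # daughter half-life in minutes

# One daughter half-life after elution, roughly half of the
# equilibrium 68Ga activity has grown back in.
f_68min = daughter_activity_fraction(68.0, T_GE, T_GA)
```

Because the 68Ge half-life dwarfs that of 68Ga, the generator effectively reaches transient (near-secular) equilibrium within a few hours and can be eluted repeatedly.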

Relevance: 30.00%

Abstract:

The present thesis investigates work-family conflict and facilitation in a healthcare context, using the DISC Model (De Jonge and Dormann, 2003, 2006). The general aim is articulated in the two empirical studies reported in the chapters of this dissertation. Chapter 1 reports the psychometric properties of the Demand-Induced Strain Compensation Questionnaire. Although the empirical evidence on the DISC Model has received a fair amount of attention in the literature, both for its theoretical principles and for the instrument developed to operationalize them (DISQ; De Jonge, Dormann, Van Vegchel, Von Nordheim, Dollard, Cotton and Van den Tooren, 2007), there are no studies based solely on a psychometric investigation of the instrument. In addition, no previous studies have ever used the DISC as a model or measurement instrument in an Italian context. Thus the first chapter of the present dissertation was based on a psychometric investigation of the DISQ. Chapter 2 reports a longitudinal study. The purpose was to examine, using the DISC model, the relationship between emotional job characteristics, the work-family interface and emotional exhaustion in a healthcare population. We started by testing the Triple Match Principle of the DISC Model using solely the emotional dimension of the stress-strain process (i.e. emotional demands, emotional resources and emotional exhaustion). We then investigated the mediating role played by work-family conflict and work-family facilitation in the relation between emotional job characteristics and emotional exhaustion. Finally, we compared the mediation model across workers involved in chronic-illness home demands and workers who are not. A general conclusion integrates and discusses the main findings of the studies reported in this dissertation.
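The mediation logic used in Chapter 2 can be illustrated with a toy ordinary-least-squares sketch: path a (predictor to mediator), path b (mediator to outcome, controlling for the predictor) and the indirect effect a*b. The variable roles mirror the study design, but the data and effect sizes below are synthetic, not the study's.

```python
import random

def mediation_effects(x, m, y):
    """Estimate the classic mediation paths from raw data:
    a (X -> M), b (M -> Y controlling for X) and c' (direct X -> Y
    controlling for M), via centred OLS normal equations."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    xc = [v - mx for v in x]
    mc = [v - mm for v in m]
    yc = [v - my for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(p * q for p, q in zip(xc, mc))
    sxy = sum(p * q for p, q in zip(xc, yc))
    smy = sum(p * q for p, q in zip(mc, yc))
    a = sxm / sxx                               # path a: M regressed on X
    det = sxx * smm - sxm * sxm
    c_prime = (sxy * smm - smy * sxm) / det     # direct effect of X on Y
    b = (sxx * smy - sxm * sxy) / det           # path b, controlling for X
    return a, b, c_prime

# Synthetic illustration (hypothetical effect sizes, not study data):
# X = emotional demands, M = work-family conflict, Y = exhaustion.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(2000)]
m = [0.6 * v + random.gauss(0, 1) for v in x]
y = [0.5 * mi + 0.1 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]
a, b, c_prime = mediation_effects(x, m, y)
indirect = a * b   # mediated share of the X -> Y relationship
```

With linear OLS the total effect decomposes exactly into c' + a*b, which is the algebraic backbone of the mediation tests reported in the chapter.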

Relevance: 30.00%

Abstract:

In this work a generally applicable method for the preparation of mucoadhesive micropellets of 250 to 600 µm diameter is presented, using rotor processing without the use of electrolytes. The mucoadhesive micropellets were developed to combine the advantages of mucoadhesion and microparticles. It was possible to produce mucoadhesive micropellets based on the different mucoadhesive polymers Na-CMC, Na-alginate and chitosan. These micropellets are characterized by a lower friability (6 to 17%) compared to industrially produced cellulose pellets (Cellets®) (41.5%). They show a high tapped density and can be manufactured at high yields. The most influential variables of the process are the water content at the end of the spraying period, determined by the liquid binder amount, the spraying rate, the inlet air temperature, the airflow and the humidity of the inlet air, and the addition of the liquid binder, determined by the spraying rate, the rotor speed and the type of rotor disc. In a subsequent step a fluidized-bed coating process was developed. It was possible to establish a stable process in the Hüttlin Mycrolab®, in contrast to the Mini-Glatt® apparatus. To reach enteric resistance, a coating level of 70% for Na-CMC micropellets, 85% for chitosan micropellets and 140% for Na-alginate micropellets, based on the amount of the starting micropellets, was necessary. Comparative dissolution experiments on the mucoadhesive micropellets were performed using the paddle apparatus with and without a sieve inlay, the basket apparatus, the reciprocating cylinder and the flow-through cell. The paddle apparatus and the modified flow-through cell method turned out to be successful methods for the dissolution of mucoadhesive micropellets. All dissolution profiles showed an initial burst release followed by a slow release due to diffusion control. Depending on the method, the dissolution profiles changed from immediate release to slow release.
The dissolution rate in the paddle apparatus was mainly influenced by the agitation rate, whereas the flow-through cell pattern was mainly influenced by the particle size. In addition, the logP and HLB values of different emulsifiers were correlated in order to transfer HLB values of excipients into logP values and logP values of APIs into HLB values; these experiments did not show promising results. Finally, it was shown that the manufacture of mucoadhesive micropellets is successful, resulting in a product characterized by enteric resistance combined with high yields and convincing morphology.
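The burst-plus-diffusion release behaviour described above can be sketched with a simple burst + Higuchi (square-root-of-time) model. The burst fraction and rate constant below are illustrative assumptions, not values fitted in this work.

```python
import math

def released_fraction(t_min, burst=0.3, k=0.05):
    """Cumulative fraction released at time t (minutes) for a
    burst-plus-Higuchi model: an immediate burst followed by
    square-root-of-time (diffusion-controlled) release, clamped at
    complete release. burst and k are illustrative values only."""
    if t_min <= 0:
        return 0.0
    return min(1.0, burst + k * math.sqrt(t_min))

# A coarse release profile at typical sampling times (minutes):
profile = [released_fraction(t) for t in (0, 5, 15, 60, 120, 240)]
```

Fitting such a two-parameter curve to profiles from the paddle and flow-through methods would separate the burst contribution from the diffusion-controlled phase that the experiments distinguish.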

Relevance: 30.00%

Abstract:

(De)colonization Through Topophilia: Marjorie Kinnan Rawlings’s Life and Work in Florida attempts to reveal the author’s intimate connection to, and mental growth through, her place, namely the Cross Creek environs, and its subsequent effect on her writing. In 1928, Marjorie Kinnan Rawlings and her first husband Charles Rawlings came to Cross Creek, Florida. They bought the shabby farmhouse on Cross Creek Road, trying to be both writers and farmers. However, while Charles Rawlings was unable to write in the backwoods of the Florida Interior, Rawlings found her literary voice and entered a symbiotic, reciprocal relationship with the natural world of the Cracker frontier. Her biographical preconditions, a childhood spent in the rural area of Rock Creek outside of Washington, D.C., and a father who had instilled in her a sense of place, or topophilia, enabled her to overcome severe marriage tensions and the hostile climate women writers faced during the Depression era. Nature as a helping ally and as an “undomesticated”(1) space/place is a recurrent motif throughout most of Rawlings’s Florida literature. At a time when writing the American landscape/documentary and the extraction of the self from texts was the prevalent literary mode, Marjorie Kinnan Rawlings inscribed herself into her texts. However, she knew that the American public was not yet ready for a ‘feminist revolt’, but was receptive to the long ‘inaudible’ voices from America’s regions, especially with regard to urban poverty and a homeward yearning during the Depression years. Fusing with the dynamic eco-consciousness of her Cracker friends and neighbors, Rawlings wrote in the literary category of regionalism, which enabled her to pursue three of her major aims: an individuated self, a self that assimilated with the ‘master narratives’ of her time, and the recognition of the Florida Cracker and scrub region.
The first part of this dissertation briefly introduces the largely unknown and underestimated writer Marjorie Kinnan Rawlings, providing background information on her younger years and her relationship to her family and other influential persons in her life. Furthermore, it takes a closer look at the literary category of regionalism and Rawlings’s use of ‘place’ in her writings. The second part is concerned with the ‘region’ itself, the state of Florida. It focuses on the natural peculiarities of the state’s Interior, the scrub and hammock land around her Cracker hamlet, as well as the unique culture of the Florida Cracker. Part IV is concerned with the analysis of her four Florida books. The author is still widely associated with the ever-popular novel The Yearling (1938). South Moon Under (1933) and Golden Apples (1935), her first two novels, have not been frequently republished and have subsequently fallen into oblivion. Cross Creek (1942), Rawlings’s last Florida book, has recently regained popularity through its use in classes on nature writers and the non-fiction essay, but it deserves, and here receives, re-evaluation as the author’s (relational) autobiography. The analysis through place is brought to completion in this work and seems to intentionally close the circle of Rawlings’s Florida writings. It exemplifies once more that detachment from place is impossible for Rawlings and that the intermingling of life and place in literature is essential for the (re)creation of her identity. Cross Creek is therefore not only one of Rawlings’s greatest achievements; it is, more importantly, the key to understanding the author’s self and her fiction. Through the ‘natural’ interrelationship of place and self and by looking “mutually outward and inward,”(2) Marjorie Kinnan Rawlings finds her literary voice, a home and ‘a room of her own’ in which to write and come to consciousness.
Her Florida literature is not only product but also medium and process in her assessment of her identity and self. _____________ (1) Alaimo, Stacy. Undomesticated Ground: Recasting Nature as Feminist Space (Ithaca: Cornell UP, 2000) 23. (2) Libby, Brooke. “Nature Writing as Refuge: Autobiography in the Natural World” Reading Under the Sign of Nature. New Essays in Ecocriticism. Ed. John Tallmadge and Henry Harrington. (Salt Lake City: The U of Utah P, 2000) 200.

Relevance: 30.00%

Abstract:

The general idea underlying the analysis and research of this thesis is that identity is not a given but an open process, a process driven by the interaction between the social recognition of one's occupational role and one's subjective self-representation. The category of workers chosen is that of pharmaceutical sales representatives (informatori scientifici del farmaco), because their identification with their professional role and their complex identity construction have been severely tested in recent years by a deep crisis affecting the category. To face this crisis, a scheme was created in 2008, involving companies, workers, employment agencies and trade unions, with the aim of redeploying pharmaceutical sales representatives affected by company crises and/or restructuring.

Relevance: 30.00%

Abstract:

The recent availability of multi-wavelength data has revealed the presence of large reservoirs of warm and cold gas and dust in the innermost regions of the majority of massive elliptical galaxies. To prove an internal origin of the cold and warm gas, it is of crucial importance to investigate the spatially distributed cooling process that occurs because of non-linear density perturbations and subsequent thermal instabilities. The first goal of this thesis is to investigate the internal origin of the warm and cold phases, with numerical simulations as the tool of analysis. The way in which a spatially distributed cooling process originates has been examined, and the amount of gas that cools off-centre under different, differently characterized AGN feedback mechanisms has been quantified. This thesis demonstrates that the aforementioned non-linear density perturbations originate and develop from AGN feedback mechanisms in a natural fashion. An internal origin of the warm phase from the once hot gas is shown to be possible, and the computed velocity dispersions of the ionized and hot gas are similar. The cold gas, too, can originate from the cooling process: indeed, it has been estimated that the surrounding stellar radiation, one of the most plausible ionization sources of the warm gas, cannot keep all the gas at 10^4 K ionized. Therefore, gas cooled from the hot phase undergoes further cooling, which can lead the warm phase to lower temperatures. However, the gas which has cooled from the hot phase is expected to be dustless; nonetheless, a large fraction of early-type galaxies have detectable dust in their cores, both concentrated in filamentary and disky structures and spread over larger regions. Therefore a regularly rotating disk of cold and dusty gas has been included in the simulations.
A new quantitative investigation of the spatially distributed cooling process has therefore been essential: the amount of dust embedded in the cold gas does play a role in promoting and enhancing the cooling. The fate of the dust initially embedded in the cold gas has been investigated, as has the ability of AGN feedback mechanisms to drag cold and dusty gas from the core of massive ellipticals up to large radii.
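A common way to quantify where spatially distributed (off-centre) cooling can condense out of the hot phase is the ratio of the cooling time to the free-fall time, with condensation expected roughly where t_cool/t_ff falls below about 10 (a threshold from the multiphase-condensation literature, not a result of this thesis). A minimal sketch with illustrative hot-halo values, which are assumptions and not the thesis's simulation parameters:

```python
import math

K_B = 1.380649e-16           # Boltzmann constant, erg/K

def cooling_time(n_cm3, T_K, lam_erg_cm3_s):
    """Isochoric cooling time t_cool = (3/2) n k T / (n^2 Lambda),
    for a one-fluid gas of number density n and cooling function Lambda."""
    return 1.5 * K_B * T_K / (n_cm3 * lam_erg_cm3_s)

def free_fall_time(r_cm, g_cm_s2):
    """Local free-fall time t_ff = sqrt(2 r / g)."""
    return math.sqrt(2.0 * r_cm / g_cm_s2)

# Illustrative hot-atmosphere numbers (assumed, order-of-magnitude only):
n = 0.1                      # gas density, cm^-3
T = 1e7                      # gas temperature, K
lam = 1e-23                  # cooling function, erg cm^3 s^-1
r = 3.086e21                 # radius of 1 kpc, in cm
g = 1e-8                     # gravitational acceleration, cm s^-2

ratio = cooling_time(n, T, lam) / free_fall_time(r, g)
unstable = ratio < 10.0      # condensation threshold from the literature
```

Mapping this ratio over radius is one way to locate where non-linear perturbations seeded by AGN feedback can grow into warm and cold clouds.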

Relevance: 30.00%

Abstract:

A synthetic route was designed for the incorporation of inorganic materials within water-based miniemulsions with a complex and adjustable polymer composition. This involved the co-homogenization of two inverse miniemulsions containing precursors of the desired inorganic salt dispersed within a polymerizable continuous phase, followed by transfer to a direct miniemulsion via addition to an o/w surfactant solution, with subsequent homogenization and radical polymerization. To our knowledge, this is the first work in which a polymerizable continuous phase has been used to form an inverse (mini)emulsion that is then transferred to a direct miniemulsion and polymerized, so that the result is a water-based dispersion. The versatility of the process was demonstrated by the synthesis of different inorganic pigments, but also by the use of an unconventional mixture of vinylic monomers and epoxy resin as the polymerizable phase (unconventional as a miniemulsion continuous phase, but a typical combination for coating applications). Zinc phosphate, calcium carbonate and barium sulfate were all successfully incorporated into the polymer-epoxy matrix. The choice of the system was based on a typical functional coatings system, but is not limited to it; the approach can be extended to incorporate various further inorganic and other materials as long as the starting materials are water-soluble or hydrophilic. The hybrid zinc phosphate - polymer water-based miniemulsion prepared by the above route was then applied to steel panels using an autodeposition process. This is considered the first autodeposition coatings process to be carried out from a miniemulsion system containing zinc phosphate particles. The steel panels were then tested for corrosion protection using salt spray tests. These corrosion tests showed that the hybrid particles can protect the substrate from corrosion and even improve corrosion protection compared to a control sample in which corrosion protection was applied in a separate step. Last but not least, it is suggested that the corrosion protection mechanism is related to zinc phosphate mobility across the coating film, which was shown using electron microscopy techniques.

Relevance: 30.00%

Abstract:

The purpose of the first part of the research activity was to develop an aerobic cometabolic process in packed-bed reactors (PBRs) to treat real groundwater contaminated by trichloroethylene (TCE) and 1,1,2,2-tetrachloroethane (TeCA). In an initial screening conducted in batch bioreactors, different groundwater samples from 5 wells of the contaminated site were fed with 5 growth substrates. The work led to the selection of butane as the best growth substrate, and to the development and characterization, from the site’s indigenous biomass, of a suspended-cell consortium capable of degrading TCE with 90% mineralization of the organic chlorine. A kinetic study conducted in batch and continuous-flow PBRs led to the identification of the best carrier. A kinetic study of butane and TCE biodegradation indicated that the attached-cell consortium is characterized by lower TCE-specific degradation rates and by a lower level of mutual butane-TCE inhibition. A 31 L bioreactor was designed and set up to scale up the experiment. The second part of the research focused on the biodegradation of 4 polymers, with and without chemical pre-treatments: linear low-density polyethylene (LLDPE), polypropylene (PP), polystyrene (PS) and polyvinyl chloride (PVC). Initially, the 4 polymers were subjected to different chemical pre-treatments: ozonation and UV/ozonation, in the gaseous and aqueous phases. It was found that, for LLDPE and PP, coupling UV and ozone in the gas phase is the most effective way to oxidize the polymers and to generate carbonyl groups on the polymer surface. In further tests, the effect of chemical pre-treatment on polymer biodegradability was studied. Gas-phase-ozonated and virgin polymers were incubated aerobically with: (a) a pure strain; (b) a mixed culture of bacteria; and (c) a fungal culture, together with saccharose as a co-substrate.
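The mutual butane-TCE inhibition mentioned above is commonly described by Monod kinetics extended with a competitive-inhibition term, in which each substrate raises the apparent half-saturation constant of the other. A minimal sketch with hypothetical parameter values (the fitted constants of this work are not reproduced here):

```python
def cometabolic_rate(c, k_max, K_s, c_inhib, K_inhib):
    """Specific degradation rate (per unit biomass) of a substrate at
    concentration c, under competitive inhibition by a second substrate
    at concentration c_inhib:
        r = k_max * c / (K_s * (1 + c_inhib / K_inhib) + c)
    All parameter values used below are illustrative, not fitted ones."""
    return k_max * c / (K_s * (1.0 + c_inhib / K_inhib) + c)

# TCE degradation with and without butane present (mg/L, 1/h; assumed):
r_no_butane = cometabolic_rate(c=0.5, k_max=0.1, K_s=0.2,
                               c_inhib=0.0, K_inhib=0.05)
r_with_butane = cometabolic_rate(c=0.5, k_max=0.1, K_s=0.2,
                                 c_inhib=0.1, K_inhib=0.05)
```

The same functional form, with the roles of the two substrates swapped, captures the reverse inhibition of butane uptake by TCE, which is why a lower level of mutual inhibition is a desirable property of the attached-cell consortium.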

Relevance: 30.00%

Abstract:

This dissertation is based on a theoretical article and two empirical studies.

The theoretical article: A theoretical framework model is postulated that examines the accumulation of work interruptions and its effects. Most previous studies have treated interruptions as an isolated phenomenon, disregarding the fact that several interruptions occur simultaneously (or in succession) during a typical working day. The present dissertation fills this gap by examining the process of accumulating interruptions. It describes the extent to which the accumulation of interruptions leads to a new quality of (negative) effects. The interplay and mutual reinforcement of individual effects are laid out, and moderating and mediating factors are identified. In this way it is possible to establish a connection between the short-term effects of individual interruptions and the health impairments caused by the working condition ‘interruptions’.

Study 1: This study examined the extent to which interruptions influence a person’s performance and well-being within a working day. It was postulated that the occurrence of interruptions reduces satisfaction with one’s own performance and increases the forgetting of intentions and the experience of irritation, with mental demands and time pressure acting as mediators. To test this, 133 nurses were surveyed by smartphone over 5 days. Multilevel analyses confirmed the main effects. The assumed mediation effects were confirmed for irritation and (in part) for satisfaction with performance, but not for the forgetting of intentions. Interruptions therefore lead to negative effects (among other reasons) because they are cognitively demanding and consume time.

Study 2: In this study, the relationships between cognitive stressors (work interruptions and multitasking) and strain outcomes (mood and irritation) were measured within a working day. It was assumed that these relationships are moderated by chronological age and by indicators of functional age (working memory capacity and attention). Older employees with poorer attention and working memory performance were expected to be impaired most strongly by the stressors examined. A diary study (see Study 1) and computer-based cognitive performance tests were conducted. Multilevel analyses confirmed the main effects for the dependent variables mood (valence and wakefulness) and irritation, but not for arousal (mood). Three-way interactions were not found in the postulated direction: younger, not older, employees benefited from a high basal cognitive capacity. Older employees appear to possess coping strategies that compensate for possible cognitive losses.

In general, the (tested) assumptions of the theoretical framework model were confirmed. In principle it seems possible to transfer results from laboratory research to the field, but the particularities of the field must be taken into account. The postulated mediation effects (Study 1) were (partly) confirmed. The results indicate, however, that the full working day must be examined and that very specific dependent variables also require more specific mediators. Furthermore, Study 2 confirmed that cognitive capacity is an important resource for dealing with interruptions, although other resources also operate in the work context.

Relevance: 30.00%

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine whether particular components, both at rest and during a motion, maintain a specified safety distance from the surrounding components. If components fall below the safety distance, their shape or position must be changed. For this it is important to know precisely which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives falling below the safety distance and call it the set of all tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this work we study algorithms that check whether two triangles are tolerance-violating. We present several approaches to triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably more performant than the distance computations used so far. The focus of our work is the development of a novel tolerance test operating in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is above all important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Beyond this, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before, which we call shrubs. Previous approaches to the memory optimization of uniform grids rely mainly on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighbouring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of a uniform grid's cell contents, exploiting the redundant cell contents, to one fifth of its previous size and to decompress it at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
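The idea of quickly filtering candidate pairs before running an exact primitive-primitive tolerance test can be sketched with a conservative axis-aligned bounding-box distance check. This illustrates the filtering principle only; it is not the dual-space test developed in the thesis.

```python
def aabb_distance_sq(a_min, a_max, b_min, b_max):
    """Squared minimum distance between two axis-aligned bounding
    boxes, each given as (x, y, z) min/max corner tuples. Zero when
    the boxes overlap."""
    d2 = 0.0
    for lo_a, hi_a, lo_b, hi_b in zip(a_min, a_max, b_min, b_max):
        gap = max(lo_b - hi_a, lo_a - hi_b, 0.0)  # per-axis separation
        d2 += gap * gap
    return d2

def candidate_pair(a_min, a_max, b_min, b_max, tolerance):
    """Conservative filter: False means the enclosed primitives are
    certainly farther apart than the tolerance (the pair can be
    discarded); True means the pair must still be checked with an
    exact primitive-primitive tolerance test."""
    return aabb_distance_sq(a_min, a_max, b_min, b_max) <= tolerance ** 2
```

Because box distance never exceeds the distance of the enclosed primitives, the filter can only produce false positives, never false negatives, which is exactly the property needed when pruning pairs inside a grid or hierarchy before the expensive exact test.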

Relevance: 30.00%

Abstract:

Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This fact has led to the rapid spread of just-in-time logistic concepts aimed at minimizing stock while simultaneously maintaining high product availability. These competing goals, minimizing stock while keeping product availability high, call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator’s main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability alone, fall short. A novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to computational power limitations, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower degree of detail.
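The profitability-oriented view of maintenance argued for above can be sketched with a tiny Monte Carlo model: exponentially distributed failures, periodic preventive maintenance (PM) that renews the machine, and a profit figure combining revenue from uptime with repair and PM costs. All parameter values are illustrative assumptions, and the model is far simpler than the proposed three-module structure.

```python
import random

def simulate_profit(pm_interval, horizon=10_000.0, mtbf=120.0,
                    repair_time=8.0, pm_time=2.0, revenue_rate=100.0,
                    repair_cost=5_000.0, pm_cost=500.0, seed=42):
    """Monte Carlo sketch of a single machine with exponential
    lifetimes (mean mtbf) and PM every pm_interval hours that renews
    the machine. Returns (availability, profit). Illustrative only."""
    rng = random.Random(seed)
    t = uptime = cost = 0.0
    while t < horizon:
        life = rng.expovariate(1.0 / mtbf)
        if life < pm_interval:           # failure before the next PM
            uptime += life
            t += life + repair_time
            cost += repair_cost
        else:                            # PM renews the machine first
            uptime += pm_interval
            t += pm_interval + pm_time
            cost += pm_cost
    availability = uptime / t
    profit = revenue_rate * uptime - cost
    return availability, profit

avail, profit = simulate_profit(pm_interval=60.0)
```

Sweeping `pm_interval` and maximizing profit rather than availability reproduces, in miniature, the shift of objective the thesis argues for: the profit-optimal PM interval generally differs from the availability-optimal one whenever PM and repair costs are asymmetric.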

Relevância:

30.00%

Publicador:

Resumo:

The purpose of this work is to find a methodology that makes it possible to recycle the fine fraction (0 - 4 mm) in the Construction and Demolition Waste (CDW) process. At present this fraction is an undesired by-product: it has a high contaminant content and, because of its high water absorption, which can affect the properties of concrete, it has to be separated from the coarse fraction. In fact, in some countries the use of fine recycled aggregates is highly restricted or even banned. This work is part of the European project C2CA (from Concrete to Cement and Clean Aggregates) and was carried out in the Faculty of Civil Engineering and Geosciences of the Technical University of Delft, in particular in the Resources And Recycling laboratory. This research proposes procedures to close the loop of the entire recycling process. After classification by ADR (Advanced Dry Recovery), the two fractions "airknife" and "rotor" (which together constitute the 0 - 4 mm fraction) are fed into a new machine that operates at high temperatures. The temperatures analysed in this research are 600 °C and 750 °C, because at those temperatures the cement bonds are expected to become very weak. The final goal is to "clean" the coarse fraction (0.250 - 4 mm) of the cement still attached to the sand and to concentrate the cement paste in the 0 - 0.250 mm fraction. This new set-up dries the material in a few seconds, separates it into a coarse and a fine fraction by means of air, and increases the amount of fines (0 - 0.250 mm) by promoting attrition between the particles with a vibration device. The coarse fraction is then processed in a ball mill to improve the result and reach the final goal. Thanks to the high temperature, the milling time can be markedly reduced. The 0 - 2 mm sand, after being heated and milled, is used to replace 100% of the norm sand in mortar production.
The results are very promising: the mortar made with recycled sand develops an early strength; the increment with respect to mortar made with norm sand is 20% after three days and 7% after seven days. This research demonstrates that, once the temperature is increased, it is possible to obtain a clean coarse fraction (0.250 - 4 mm), free from the cement paste, which is concentrated in the fine 0 - 0.250 mm fraction. Both the milling time and the drying time can be greatly reduced, and the recycled sand shows better mechanical performance than the natural one.
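The reported strength increments are relative gains over the norm-sand reference at the same age. As a minimal worked example, the absolute strengths below are assumed values chosen only so that the arithmetic reproduces the reported 20% and 7% figures; they are not measurements from the study:

```python
def strength_increment(recycled_mpa, norm_mpa):
    """Relative strength gain (%) of recycled-sand mortar over norm-sand
    mortar at the same curing age."""
    return 100.0 * (recycled_mpa - norm_mpa) / norm_mpa

# Assumed absolute strengths (MPa), chosen to match the reported increments:
gain_3d = strength_increment(24.0, 20.0)   # 20% at 3 days
gain_7d = strength_increment(32.1, 30.0)   # ~7% at 7 days
```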

Relevância:

30.00%

Publicador:

Resumo:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one rather than as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. This model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from the corresponding emissions based on measured air-handling parameters.
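A dynamic constraint model of this kind can be sketched as follows: the achieved air-handling parameter is taken as the response of a linear second-order system, which is equivalent to a double integrator under PD control with gains Kp = wn² and Kd = 2ζwn. The numerical values of wn and ζ here are placeholders; in the study they were extracted from real transient data:

```python
def achieved_trajectory(commanded, dt=0.01, wn=4.0, zeta=0.8):
    """Sketch of a dynamic constraint model: commanded values of an
    air-handling parameter are filtered through a linear second-order
    system (double integrator + PD control, Kp = wn**2, Kd = 2*zeta*wn).
    wn and zeta are illustrative, not the fitted values from the study."""
    x, v = commanded[0], 0.0          # start settled at the first command
    achieved = []
    for u in commanded:
        a = wn * wn * (u - x) - 2.0 * zeta * wn * v   # PD "control" acceleration
        v += a * dt                   # semi-implicit Euler integration
        x += v * dt
        achieved.append(x)
    return achieved

# A step in the commanded parameter is achieved only gradually, which is
# exactly the constraint the transient optimizer must respect:
cmd = [0.0] * 50 + [1.0] * 450
ach = achieved_trajectory(cmd)
```

Feeding `ach` rather than `cmd` into the emission models is what makes the optimized trajectories achievable in a transient sense.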

Relevância:

30.00%

Publicador:

Resumo:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current part. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by the high engine pressure differential between the exhaust and intake manifolds (ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration-dependent and, furthermore, how to choose training data that will result in good model generalization.
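The two processing steps mentioned above, compensating transport delays and sensor lags, might be sketched as follows. The delay estimate uses a plain cross-correlation search, and the lag removal inverts an assumed first-order sensor model tau·dy/dt + y = u; both function names and the first-order assumption are illustrative, not the specific methods of the thesis:

```python
def estimate_delay(reference, lagged, max_shift=50):
    """Estimate a pure transport delay (in samples) between a reference
    signal and its delayed copy by maximizing the cross-correlation."""
    n = len(reference)
    def corr(shift):
        return sum(reference[i] * lagged[i + shift] for i in range(n - max_shift))
    return max(range(max_shift + 1), key=corr)

def remove_sensor_lag(measured, dt, tau):
    """Invert an assumed first-order sensor lag tau*dy/dt + y = u by adding
    back the scaled forward-difference derivative: u = y + tau*dy/dt."""
    out = []
    for i in range(len(measured) - 1):
        out.append(measured[i] + tau * (measured[i + 1] - measured[i]) / dt)
    out.append(measured[-1])          # no forward sample for the last point
    return out
```

Aligning each emission channel by its estimated delay and sharpening slow sensors in this way is what lets transient samples be attributed to the engine conditions that actually produced them.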