935 results for work process


Relevance:

30.00%

Publisher:

Abstract:

In this work a generally applicable method for the preparation of mucoadhesive micropellets of 250 to 600 µm diameter is presented, using rotor processing without the use of electrolytes. The mucoadhesive micropellets were developed to combine the advantages of mucoadhesion and microparticles. It was possible to produce mucoadhesive micropellets based on different mucoadhesive polymers: Na-CMC, Na-alginate and chitosan. These micropellets are characterized by a lower friability (6 to 17%) than industrially produced cellulose pellets (Cellets®) (41.5%). They show a high tapped density and can be manufactured at high yields. The most influential process variables are the water content at the end of the spraying period (determined by the liquid binder amount, the spraying rate, the inlet air temperature, the airflow and the humidity of the inlet air) and the addition of the liquid binder (determined by the spraying rate, the rotor speed and the type of rotor disc). In a subsequent step a fluidized-bed coating process was developed. In contrast to the Mini-Glatt® apparatus, a stable process could be established in the Hüttlin Mycrolab®. To reach enteric resistance, a coating of 70% for Na-CMC micropellets, 85% for chitosan micropellets and 140% for Na-alginate micropellets, based on the amount of the starting micropellets, was necessary. Comparative dissolution experiments with the mucoadhesive micropellets were performed using the paddle apparatus with and without a sieve inlay, the basket apparatus, the reciprocating cylinder and the flow-through cell. The paddle apparatus and the modified flow-through cell method turned out to be suitable methods for the dissolution of mucoadhesive micropellets. All dissolution profiles showed an initial burst release followed by a slow, diffusion-controlled release. Depending on the method, the dissolution profiles changed from immediate release to slow release.
The dissolution rate in the paddle apparatus was mainly influenced by the agitation rate, whereas the flow-through cell pattern was mainly influenced by the particle size. In addition, the logP and HLB values of different emulsifiers were correlated in order to transfer HLB values of excipients into logP values and logP values of APIs into HLB values. These experiments did not show promising results. Finally, it was shown that the manufacture of mucoadhesive micropellets is successful, resulting in a product characterized by enteric resistance combined with high yields and convincing morphology.
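Burst release followed by slow, diffusion-controlled release, as described above, is often represented by a simple empirical biphasic model combining a first-order burst term with a Higuchi-type square-root-of-time term. The sketch below is purely illustrative: the functional form and all parameter values (`f_burst`, `k_burst`, `k_diff`) are assumptions, not fitted to the thesis data.

```python
import numpy as np

def biphasic_release(t, f_burst=0.4, k_burst=2.0, k_diff=0.08):
    """Cumulative fraction released at time t (minutes):
    a first-order burst plus a Higuchi-type (sqrt-t) diffusion term.
    All parameter values are illustrative, not fitted data."""
    burst = f_burst * (1.0 - np.exp(-k_burst * t))
    diffusion = np.minimum((1.0 - f_burst) * k_diff * np.sqrt(t),
                           1.0 - f_burst)  # diffusion phase, capped at its share
    return burst + diffusion

t = np.linspace(0, 120, 5)      # minutes
profile = biphasic_release(t)   # monotonically increasing, bounded by 1
```

A model of this kind makes the qualitative statement in the abstract concrete: the burst term dominates early, the sqrt-t term governs the slow tail.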

Relevance:

30.00%

Publisher:

Abstract:

(De)colonization Through Topophilia: Marjorie Kinnan Rawlings’s Life and Work in Florida attempts to reveal the author’s intimate connection to and mental growth through her place, namely the Cross Creek environs, and its subsequent effect on her writing. In 1928, Marjorie Kinnan Rawlings and her first husband Charles Rawlings came to Cross Creek, Florida. They bought the shabby farmhouse on Cross Creek Road, trying to be both writers and farmers. However, while Charles Rawlings was unable to write in the backwoods of the Florida Interior, Rawlings found her literary voice and entered a symbiotic, reciprocal relationship with the natural world of the Cracker frontier. Her biographical preconditions, a childhood spent in the rural area of Rock Creek outside of Washington, D.C., and a father who had instilled in her a sense of place, or topophilia, enabled her to overcome severe marriage tensions and the hostile climate women writers faced during the Depression era. Nature as a helping ally and as an “undomesticated” (1) space/place is a recurrent motif throughout most of Rawlings’s Florida literature. At a time when writing the American landscape/documentary and the extraction of the self from texts was the prevalent literary mode, Marjorie Kinnan Rawlings inscribed herself into her texts. However, she knew that the American public was not yet ready for a ‘feminist revolt’, but was receptive to voices from America’s regions that had long been ‘inaudible’, especially with regard to urban poverty and a homeward yearning during the Depression years. Fusing with the dynamic eco-consciousness of her Cracker friends and neighbors, Rawlings wrote in the literary category of regionalism, which enabled her to pursue three of her major aims: an individuated self, a self that assimilated with the ‘master narratives’ of her time, and the recognition of the Florida Cracker and Scrub region.
The first part of this dissertation briefly introduces the largely unknown and underestimated writer Marjorie Kinnan Rawlings, providing background information on her younger years, her relationship with her family, and other influential persons in her life. Furthermore, it takes a closer look at the literary category of regionalism and Rawlings’s use of ‘place’ in her writings. The second part is concerned with the ‘region’ itself, the state of Florida. It focuses on the natural peculiarities of the state’s Interior, the scrub and hammock land around her Cracker hamlet, as well as the unique culture of the Florida Cracker. Part IV is concerned with the analysis of her four Florida books. The author is still widely associated with the ever-popular novel The Yearling (1938). South Moon Under (1933) and Golden Apples (1935), her first two novels, have not been frequently republished and have subsequently fallen into oblivion. Cross Creek (1942), Rawlings’s last Florida book, however, has recently gained renewed popularity through its use in classes on nature writers and the non-fiction essay, but it is here re-evaluated as the author’s (relational) autobiography. The analysis through place is brought to completion in this work, which seems intentionally to close the circle of Rawlings’s Florida writings. It exemplifies once more that detachment from place is impossible for Rawlings and that the intermingling of life and place in literature is essential for the (re)creation of her identity. Cross Creek is therefore not only one of Rawlings’s greatest achievements; more importantly, it is the key to understanding the author’s self and her fiction. Through the ‘natural’ interrelationship of place and self and by looking “mutually outward and inward,” (2) Marjorie Kinnan Rawlings finds her literary voice, a home, and ‘a room of her own’ in which to write and come to consciousness.
Her Florida literature is not only the product but also the medium and process of her assessment of her identity and self. _____________ (1) Alaimo, Stacy. Undomesticated Ground: Recasting Nature as Feminist Space (Ithaca: Cornell UP, 2000) 23. (2) Libby, Brooke. “Nature Writing as Refuge: Autobiography in the Natural World.” Reading Under the Sign of Nature: New Essays in Ecocriticism. Ed. John Tallmadge and Henry Harrington (Salt Lake City: The U of Utah P, 2000) 200.

Relevance:

30.00%

Publisher:

Abstract:

This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and the interpretation of results. Further problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The usage of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are proposed: the first is an adaptation of a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
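Adapting a frequency counting algorithm to on-line control-flow discovery typically means maintaining approximate counts of the directly-follows relation over the event stream. The sketch below illustrates the general idea with the classic Lossy Counting scheme; the class name, event format and error bound are assumptions for illustration, not the thesis's actual algorithm or data structures.

```python
from math import ceil

class LossyCountingDF:
    """Approximate directly-follows counts over an event stream
    using Lossy Counting (error bound epsilon). A generic sketch,
    not the algorithm developed in the thesis."""
    def __init__(self, epsilon=0.01):
        self.bucket_width = ceil(1.0 / epsilon)
        self.n = 0               # events seen so far
        self.counts = {}         # (a, b) -> [count, max_error]
        self.last_activity = {}  # case id -> previous activity

    def observe(self, case_id, activity):
        prev = self.last_activity.get(case_id)
        self.last_activity[case_id] = activity
        self.n += 1
        bucket = ceil(self.n / self.bucket_width)
        if prev is not None:
            pair = (prev, activity)
            if pair in self.counts:
                self.counts[pair][0] += 1
            else:
                self.counts[pair] = [1, bucket - 1]
        if self.n % self.bucket_width == 0:  # cleanup at bucket boundary
            self.counts = {p: ce for p, ce in self.counts.items()
                           if ce[0] + ce[1] > bucket}

stream = [("c1", "register"), ("c1", "check"), ("c2", "register"),
          ("c2", "check"), ("c1", "pay"), ("c2", "pay")]
miner = LossyCountingDF(epsilon=0.1)
for case, act in stream:
    miner.observe(case, act)
# miner.counts now approximates the directly-follows relation
```

Memory stays bounded because infrequent pairs are periodically evicted, which is what makes this family of algorithms suitable for unbounded streams.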

Relevance:

30.00%

Publisher:

Abstract:

The general idea underlying the analysis and research activity of this thesis is that identity is not a given but an open process, a process carried forward by the interaction between the social recognition of one's working role and the subjective representation of the self. The category of workers chosen is that of pharmaceutical sales representatives (informatori scientifici del farmaco), because their identification with their professional role and their complex identity construction have been severely tested in recent years by a deep crisis affecting the category. To cope with this crisis, a scheme was created in 2008, involving companies, workers, employment agencies and trade unions, with the aim of re-employing pharmaceutical sales representatives affected by company crises and/or restructurings.

Relevance:

30.00%

Publisher:

Abstract:

A synthetic route was designed for the incorporation of inorganic materials within water-based miniemulsions with a complex and adjustable polymer composition. This involved the co-homogenization of two inverse miniemulsions containing precursors of the desired inorganic salt dispersed within a polymerizable continuous phase, followed by transfer to a direct miniemulsion via addition to an o/w surfactant solution with subsequent homogenization and radical polymerization. To our knowledge, this is the first work in which a polymerizable continuous phase has been used in an inverse (mini)emulsion formation followed by transfer to a direct miniemulsion and subsequent polymerization, so that the result is a water-based dispersion. The versatility of the process was demonstrated by the synthesis of different inorganic pigments, but also by the use of an unconventional mixture of vinylic monomers and epoxy resin as the polymerizable phase (unconventional as a miniemulsion continuous phase, but a typical combination for coating applications). Zinc phosphate, calcium carbonate and barium sulfate were all successfully incorporated into the polymer-epoxy matrix. The choice of the system was based on a typical functional coatings system, but is not limited to it. The system can be extended to incorporate various inorganic and other materials as long as the starting materials are water-soluble or hydrophilic.
The hybrid zinc phosphate-polymer water-based miniemulsion prepared by the above route was then applied to steel panels using an autodeposition process. This is considered the first autodeposition coating process to be carried out from a miniemulsion system containing zinc phosphate particles. The steel panels were then tested for corrosion protection using salt spray tests.
The corrosion tests showed that the hybrid particles can protect the substrate from corrosion and even improve corrosion protection compared to a control sample in which corrosion protection was applied in a separate step. Last but not least, it is suggested that the corrosion protection mechanism is related to zinc phosphate mobility across the coating film, which was shown using electron microscopy techniques.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of the first part of the research activity was to develop an aerobic cometabolic process in packed-bed reactors (PBRs) to treat real groundwater contaminated by trichloroethylene (TCE) and 1,1,2,2-tetrachloroethane (TeCA). In an initial screening conducted in batch bioreactors, groundwater samples from 5 wells of the contaminated site were fed with 5 growth substrates. This work led to the selection of butane as the best growth substrate, and to the development and characterization, from the site's indigenous biomass, of a suspended-cell consortium capable of degrading TCE with 90% mineralization of the organic chlorine. A kinetic study conducted in batch and continuous-flow PBRs led to the identification of the best carrier. A kinetic study of butane and TCE biodegradation indicated that the attached-cell consortium is characterized by lower TCE-specific degradation rates and by a lower level of mutual butane-TCE inhibition. A 31 L bioreactor was designed and set up to scale up the experiment. The second part of the research focused on the biodegradation of 4 polymers, with and without chemical pre-treatments: linear low-density polyethylene (LLDPE), polypropylene (PP), polystyrene (PS) and polyvinyl chloride (PVC). Initially, the 4 polymers were subjected to different chemical pre-treatments: ozonation and UV/ozonation, in the gaseous and aqueous phase. It was found that, for LLDPE and PP, coupling UV and ozone in the gas phase is the most effective way to oxidize the polymers and to generate carbonyl groups on the polymer surface. In further tests, the effect of chemical pre-treatment on polymer biodegradability was studied. Gas-phase-ozonated and virgin polymers were incubated aerobically with: (a) a pure strain; (b) a mixed culture of bacteria; and (c) a fungal culture, together with saccharose as a co-substrate.
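Mutual inhibition between a growth substrate (butane) and a cometabolized pollutant (TCE), as studied in the kinetic work above, is commonly described with Monod kinetics extended by a competitive-inhibition term. The sketch below shows that generic form only; all parameter values are hypothetical and this is not the kinetic model fitted in the thesis.

```python
def monod_competitive(S, S_other, q_max, Ks, Ki_other):
    """Specific degradation rate of substrate S when a second substrate
    competes for the same enzyme (competitive-inhibition Monod form):
        q = q_max * S / (Ks * (1 + S_other / Ki_other) + S)
    All parameter values used below are illustrative."""
    return q_max * S / (Ks * (1.0 + S_other / Ki_other) + S)

# Hypothetical numbers: TCE degradation slows when butane is present
q_tce = monod_competitive(S=0.5, S_other=2.0, q_max=1.2, Ks=0.3, Ki_other=0.8)
q_tce_no_butane = monod_competitive(S=0.5, S_other=0.0, q_max=1.2, Ks=0.3, Ki_other=0.8)
```

With `S_other = 0` the expression reduces to plain Monod kinetics, which is the consistency check one would apply when fitting such a model.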

Relevance:

30.00%

Publisher:

Abstract:

This dissertation is based on one theoretical article and two empirical studies.

The theoretical article: A theoretical framework is proposed that examines the accumulation of work interruptions and its effects. Most previous studies have treated interruptions as an isolated phenomenon, disregarding the fact that several interruptions occur simultaneously (or in succession) during a typical working day. This dissertation fills this gap by examining the process of accumulating interruptions. It describes the extent to which the accumulation of interruptions leads to a new quality of (negative) effects. The interplay and mutual reinforcement of individual effects are described, and moderating and mediating factors are identified. In this way it is possible to establish a link between the short-term effects of individual interruptions and health impairments caused by the working condition 'interruptions'.

Study 1: This study examined the extent to which interruptions influence a person's performance and well-being within a working day. It was postulated that the occurrence of interruptions reduces satisfaction with one's own performance and increases the forgetting of intentions and the experience of irritation. Mental demands and time pressure were assumed to be mediators. To test this, 133 nurses were surveyed via smartphones over 5 days. Multilevel analyses confirmed the main effects. The assumed mediation effects were confirmed for irritation and (partially) for satisfaction with performance, but not for the forgetting of intentions. Interruptions therefore lead (among other things) to negative effects because they are cognitively demanding and take up time.

Study 2: This study measured relationships between cognitive stressors (work interruptions and multitasking) and strain outcomes (mood and irritation) within a working day. It was assumed that these relationships are moderated by chronological age and by indicators of functional age (working memory capacity and attention). Older employees with poorer attention and working memory performance were expected to be most strongly impaired by the stressors under investigation. A diary study (see Study 1) and computer-based cognitive performance tests were conducted. Multilevel analyses confirmed the main effects for the dependent variables mood (valence and wakefulness) and irritation, but not for arousal (mood). Three-way interactions were not found in the postulated direction: younger, not older, employees benefited from high basal cognitive capacity. Older employees appear to possess coping strategies that compensate for possible cognitive losses.

In general, the (tested) assumptions of the theoretical framework were confirmed. In principle, it appears possible to transfer laboratory findings to the field, but the particularities of the field must be taken into account. The postulated mediation effects (Study 1) were (partially) confirmed. However, the results indicate that the full working day must be examined and that very specific dependent variables also require more specific mediators. Furthermore, Study 2 confirmed that cognitive capacity is an important resource in dealing with interruptions, but that other resources also operate in the work context.

Relevance:

30.00%

Publisher:

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a specified safety distance to the surrounding components, both at rest and during a motion. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know precisely which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below a safety distance. Each object is given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present different approaches for triangle-triangle tolerance tests and show that specialized tolerance tests are considerably more performant than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space.
In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. Our benchmarks show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach is able to losslessly compress the cell contents of a uniform grid, based on the redundant cell contents, to one fifth of the original size and to decompress them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications for various path-planning problems.
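One generic family of strategies for recognizing tolerance-violating primitives without an exact primitive-primitive test is conservative bounding-volume checks. The sketch below uses crude bounding spheres to classify a triangle pair as definitely violating, definitely safe, or in need of the exact test; it is a standard illustration of the idea, not the dual-space tolerance test developed in this work.

```python
from math import dist

def bounding_sphere(tri):
    """Crude bounding sphere: centroid plus max vertex distance."""
    c = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
    return c, max(dist(c, v) for v in tri)

def classify(tri_a, tri_b, tol):
    """Conservative sphere test against a safety distance tol:
    'violating' / 'safe' are certain; 'unknown' needs the exact test."""
    (ca, ra), (cb, rb) = bounding_sphere(tri_a), bounding_sphere(tri_b)
    d = dist(ca, cb)
    if d + ra + rb < tol:
        return "violating"   # even the farthest points are too close
    if d - ra - rb > tol:
        return "safe"        # even the closest points keep the distance
    return "unknown"

tri_a = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tri_far = ((10.0, 0.0, 0.0), (11.0, 0.0, 0.0), (10.0, 1.0, 0.0))
verdict = classify(tri_a, tri_far, tol=1.0)   # spheres alone prove this pair safe
```

Only pairs classified as "unknown" need the expensive exact tolerance test, which is the pruning effect such strategies aim for.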

Relevance:

30.00%

Publisher:

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. The parameters of this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
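A linear second-order response of the kind the abstract describes can be sketched in discrete time as follows. The natural frequency and damping values below are placeholders, not the parameters extracted from engine data, and the integration scheme is a minimal semi-implicit Euler step rather than the thesis's implementation.

```python
def achieved_trajectory(commanded, dt=0.01, wn=6.0, zeta=0.9):
    """Translate a commanded parameter trajectory into an achievable one
    via a second-order linear response:
        x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u
    wn (rad/s) and zeta are illustrative, not identified from data."""
    x, v = commanded[0], 0.0          # start settled at the first command
    out = []
    for u in commanded:
        a = wn * wn * (u - x) - 2.0 * zeta * wn * v   # acceleration
        v += a * dt                                   # semi-implicit Euler
        x += v * dt
        out.append(x)
    return out

# A step in the commanded value is achieved only gradually:
cmd = [1.0] * 50 + [2.0] * 450
ach = achieved_trajectory(cmd)
```

Feeding `ach` rather than `cmd` into downstream emission models is exactly the role a dynamic constraint model plays: the optimizer cannot pretend a step change is realized instantly.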

Relevance:

30.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine pressure difference between the exhaust and intake manifolds (ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
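A minimal version of the transport-delay and sensor-lag processing mentioned above is to time-shift the signal by the (separately identified) transport delay and invert a first-order sensor lag, y' = (x - y)/tau, via x ≈ y + tau·dy/dt. This is a generic sketch under that simple lag assumption; the thesis's actual processing may differ, and `delay_steps` and `tau` are assumed known.

```python
def compensate(signal, dt, delay_steps, tau):
    """Undo a pure transport delay and a first-order sensor lag.
    The lag model y' = (x - y)/tau is inverted with a finite
    difference: x ≈ y + tau * dy/dt."""
    shifted = signal[delay_steps:]          # remove the transport delay
    out = []
    for i in range(1, len(shifted)):
        dydt = (shifted[i] - shifted[i - 1]) / dt
        out.append(shifted[i] + tau * dydt)
    return out
```

Differentiating a measured signal amplifies noise, so in practice a smoothing step usually precedes this inversion; the sketch omits it for clarity.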

Relevance:

30.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and results in lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
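Statistical leverage, the diagonal of the hat matrix H = X (X^T X)^-1 X^T, is the standard quantity behind the extrapolation constraint described above: a query point with leverage far above the training distribution lies outside the data. The sketch below uses a hypothetical two-parameter linear model (intercept plus one regressor), not the engine models of the study.

```python
import numpy as np

def leverage(X_train, X_query):
    """Leverage of query rows with respect to a regression design matrix:
    h_i = x_i (X'X)^-1 x_i'. High h relative to the training leverages
    signals extrapolation."""
    XtX_inv = np.linalg.inv(X_train.T @ X_train)
    # row-wise quadratic form: diag(Xq @ XtX_inv @ Xq.T) without the full matrix
    return np.einsum("ij,jk,ik->i", X_query, XtX_inv, X_query)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)
X = np.column_stack([np.ones_like(x), x])     # training inputs inside [0, 1]
h_in = leverage(X, np.array([[1.0, 0.5]]))    # interpolation: low leverage
h_out = leverage(X, np.array([[1.0, 3.0]]))   # far outside the data: high leverage
```

Constraining an optimizer to keep leverages near the distribution of the starting solution, as the abstract proposes, amounts to rejecting candidate points whose `h` exceeds what the training data supports.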

Relevance:

30.00%

Publisher:

Abstract:

As a group of experienced and novice youth workers, we believe that youth work is fundamentally about building trust-filled, mutually respectful relationships with young people. We create safe environments for young people to connect with other supportive adults and peers and to avoid violence in their neighborhoods and their homes. We guide those harmed by oppressive community conditions such as racism, sexism, ageism, homophobia, and classism through a process of healing. As we get to know more about young people’s interests, we help them develop knowledge and skills in a variety of areas including: academic, athletic, leadership/civic, the arts, health and wellbeing, and career exploration. In short, we create transformative experiences for young people. In spite of the critical roles we play, we have largely been overlooked in youth development research and policy, and as a professional workforce. We face challenges ‘moving up’ in our careers. We get frustrated by how little money we earn. We are discouraged that despite our knowledge and experience we are not invited to the tables where youth funding, programming, and policy decisions are made. It is true—many of us do not have formal training or degrees in youth work—a reality which at times we regret. Yet, as our colleague communicates in the accompanying passage (see below), we resent that formal education is required for us to get ahead, particularly because we question whether we need it to do our jobs more effectively. Through the “What is the Value of Youth Work?” symposium, we hope to address these concerns through a dialogue about youth work with the following objectives:
• Increase awareness of the knowledge, skills, contributions, and professionalism of youth workers;
• Advance a youth worker professional development model that integrates a dilemma-focused approach with principles of social justice youth development;
• Launch an ongoing Worcester-area Youth Worker network.
This booklet provides a brief overview of the challenges in ‘professionalizing’ youth work and an alternative approach that we are advancing that puts the knowledge and expertise of youth workers at the center of professional development.

Relevance:

30.00%

Publisher:

Abstract:

The Environmental Process and Simulation Center (EPSC) at Michigan Technological University has accommodated laboratories for the senior-level Environmental Engineering class CEE 4509, Environmental Process and Simulation Laboratory, since 2004. Even though the five units in EPSC give students the opportunity to gain hands-on experience with a wide range of water/wastewater treatment technologies, a key module was still missing for students to experience a full treatment cycle. This project fabricated a direct-filtration pilot system in EPSC and generated a laboratory manual for educational purposes. Engineering applications such as clean-bed head loss calculation, backwash flow rate determination, multimedia density calculation and run length prediction are included in the laboratory manual. The system was tested for one semester, and modifications have been made both to the direct-filtration unit and to the laboratory manual. Future work is also proposed to further refine the module.
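Clean-bed head loss, the first of the engineering applications listed above, is typically computed with the Carman-Kozeny equation for laminar flow through a granular bed. The sketch below uses that standard form; the numeric inputs (grain size, porosity, bed depth, filtration rate) are typical illustrative values, not figures from the EPSC manual itself.

```python
def clean_bed_headloss(v_s, L, d, eps, nu=1.0e-6, g=9.81, kappa=180.0):
    """Carman-Kozeny clean-bed head loss [m] for a granular filter:
        h = kappa * (nu / g) * ((1 - eps)^2 / eps^3) * L * v_s / d^2
    v_s: superficial (approach) velocity [m/s], L: bed depth [m],
    d: effective grain diameter [m], eps: bed porosity [-],
    nu: kinematic viscosity [m^2/s], kappa = 180 for spherical grains."""
    return kappa * (nu / g) * ((1.0 - eps) ** 2 / eps ** 3) * L * v_s / d ** 2

# Illustrative case: 0.6 m sand bed, 0.5 mm grains, porosity 0.4,
# filtration rate 5 m/h converted to m/s
h = clean_bed_headloss(v_s=5.0 / 3600.0, L=0.6, d=0.5e-3, eps=0.4)
```

The strong sensitivities (inverse-square in grain size, cubic in porosity) are the reason the manual's head loss exercise pairs naturally with the multimedia density calculation.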

Relevance:

30.00%

Publisher:

Abstract:

As the demand for miniature products and components continues to increase, so does the need for manufacturing processes that can produce them. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, directly scaling down macro processes has presented many challenges. Frictional effects were initially believed to be the largest challenge; however, recent studies have found that the greatest challenge is size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and how this thickness directly affects product dimensions and manufacturability. At the microscale, the thickness becomes critical due to the reduced number of grains. When surface contact between the forming tools and the material blank occurs at the macroscale, there is enough material (hundreds of grains) across the blank thickness to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With so few grains across the thickness, the influence of grain size, shape, and orientation is significant, and any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing setups have been constructed at the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup to potentially build micro sensor encapsulation housings. The research focus then shifted to micro metal materials testing setups.
These include the construction and testing of the following setups: a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (using optical lithography and chemical etching), and a micro sheet hydroforming bulge test. Recently, the focus has shifted to a micro tube hydroforming process, targeting fuel cell, medical, and sensor encapsulation applications. While the tube hydroforming process is well understood at the macroscale, the microscale process presents significant challenges in terms of size effects. Current work applies direct current to enhance formability in micro tube hydroforming; adding direct current to various metal forming operations has shown promising initial results, and the focus of current research is to determine the validity of this process.
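The size-effect threshold described above (roughly 10 grains across the thickness) can be illustrated with a back-of-the-envelope estimate. The thickness and grain-size values below are assumed for illustration, not measurements from the Michigan Tech setups.

```python
# Rough size-effect check: estimate the number of grains across a
# sheet or tube-wall thickness. Fewer than ~10 grains indicates the
# regime where individual grain size, shape, and orientation dominate
# the forming response. All values here are illustrative assumptions.

def grains_across_thickness(thickness_um, grain_size_um):
    """Approximate grain count across the blank thickness."""
    return thickness_um / grain_size_um

# Macroscale sheet: 2 mm thick with 25 um grains -> 80 grains,
# enough material for polycrystalline averaging of grain effects.
macro = grains_across_thickness(2000, 25)

# Microscale foil: 100 um thick with the same 25 um grains -> 4 grains,
# well under the ~10-grain threshold, so size effects dominate.
micro = grains_across_thickness(100, 25)
```

The same estimate applies to a micro tube wall: unless the grain structure is refined along with the wall thickness, scaling the geometry down pushes the part into the size-effect regime.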

Relevância:

30.00%

Publicador:

Resumo:

Writing center scholarship and practice have approached how issues of identity influence communication but have not fully considered ways of making identity a key feature of writing center research or practice. This dissertation suggests a new way to view identity -- through an experience of "multimembership," the idea that each identity is constructed from the numerous community memberships that make it up. Etienne Wenger (1998) proposes that a fully formed identity is ultimately impossible, but it is through the work of reconciling memberships that important individual and community transformations can occur. Wenger also argues that reconciliation "is the most significant challenge" for those moving into new communities of practice (groups that "engage in a process of collective learning in a shared domain of human endeavor" (4)), yet this challenge often remains tacit. This dissertation therefore examines and makes explicit how this important work is done at two research sites -- a university writing center (the Michigan Tech Multiliteracies Center) and a multinational corporation (Kimberly-Clark Corporation). Drawing extensively on qualitative ethnographic methods, including interview transcriptions, observations, and case studies, as well as work from scholars in writing center studies (Grimm, Denney, Severino), literacy studies (New London Group, Street, Gee), composition (Horner and Trimbur, Canagarajah, Lu), rhetoric (Crowley), and identity studies (Anzaldua, Pratt), I argue that, based on evidence from the two sites, writing centers need to educate tutors not only to take identity into consideration but also to make individuals' reconciliation work more visible, as it will continue once students and tutors leave the university.
Further, as my research at the Michigan Tech Multiliteracies Center and Kimberly-Clark will show, communities can (and should) change their practices in ways that account for reconciliation work, as identity, communication, and learning are inextricably bound up with one another.