956 results for Semi-Regenerative Process
Abstract:
The aim of this work is to develop new approaches for the preparation of structured composite particles in aqueous media, which can be regarded as the formation of precisely defined heterogeneous structures in colloidal systems. In general, two different approaches were developed, which differ in the origin of the heterogeneous structures formed: heterogeneity or homogeneity. The first approach is based on the aggregation of heterogeneous phases to form structured colloidal particles with heterogeneity in the underlying chemistry, whereas the second approach relies on the formation of heterogeneous phases within colloidal particles from homogeneous mixtures through controlled phase separation.

In detail, the first part of the dissertation deals with a new preparation method for semi-crystalline composite colloidal particles of high stability, based on the aggregation of liquid monomer droplets onto semi-crystalline polyacrylonitrile particles. After aggregation, highly stable dispersions consisting of structured, semi-crystalline composite particles were obtained by free-radical polymerization, whereas direct mixing of the PAN dispersions with methacrylate polymer dispersions led to immediate coagulation. Depending on the glass-transition temperature of the methacrylate polymer, the subsequent free-radical polymerization leads to the formation of raspberry or core-shell particles. The particles prepared in this way are able to form continuous films with embedded semi-crystalline phases, which can be used as oxygen barriers.

The second part of the dissertation describes a new method for the preparation of structured thermoset-thermoplastic composite colloidal particles. The formation of a thermoset network with a thermoplastic shell was achieved in two steps by different, separate polymerization mechanisms: polyaddition and free-radical polymerization. Stable miniemulsions consisting of bisphenol-F-based epoxy resin, a phenalkamine-based hardener and vinyl monomers were obtained. They were prepared by ultrasonication with subsequent curing at different temperatures as so-called seed emulsions. Further vinyl monomers were added and subsequently polymerized, leading to the formation of core-shell or thermoset-thermoplastic colloidal particles, respectively. In both cases a chemically induced phase separation takes place between the thermoset and the thermoplastic phase, which is essential for the formation of heterogeneous structures. The composite particles prepared in this way are able to form transparent films which, under suitable conditions, provide significantly improved mechanical properties compared with pure thermoset films.
Abstract:
Nitric oxide (NO) and nitrogen dioxide (NO2) play an important role in the self-cleansing capacity of the atmosphere. These trace gases govern the photochemical production of ozone (O3) and influence the abundance of hydroxyl (OH) and nitrate (NO3) radicals. During daytime, when sufficient solar radiation and ozone are present, NO and NO2 are in a rapid photochemical equilibrium, the "photostationary state". The sum of NO and NO2 is therefore referred to as NOx. Previous studies of the NOx photostationary state comprise measurements at a wide range of sites, from cities (characterized by heavy air pollution) to remote regions (characterized by lower pollution levels). While the photochemical cycling of NO and NO2 under conditions of elevated NOx concentrations is fundamentally understood, significant gaps remain in the understanding of the underlying cycling processes in rural and remote regions characterized by lower NOx concentrations. These gaps could be caused by instrumental NO2 interferences, in particular for indirect detection methods, which can be affected by artefacts. At very low NOx concentrations, and when instrumental NO2 interferences can be excluded, it is often concluded that these gaps in understanding are linked to the existence of an "unknown oxidant". In this work, the photostationary state of NOx is analysed with the aim of investigating the potential existence of hitherto unknown processes. A gas analyser for the direct measurement of atmospheric NO2 by laser-induced fluorescence (LIF), GANDALF, was newly developed and deployed for the first time in field measurements during the PARADE 2011 campaign. The PARADE measurements were carried out in summer 2011 in a rural area in Germany. Extensive NO2 measurements using different techniques (DOAS, CLD and CRD) enabled a detailed and successful comparison of GANDALF with the other NO2 measurement techniques. Further relevant trace gases and meteorological parameters were measured in order to study the photostationary state of NOx in this environment, based on the NO2 measurements with GANDALF. During PARADE, moderate NOx mixing ratios (10^2 - 10^4 pptv) were observed at the site. Mixing ratios of biogenic volatile organic compounds (BVOC) from the surrounding, mainly coniferous forest were of the order of 10^2 pptv. The characteristics of the NOx photostationary state at low NOx mixing ratios (10 - 10^3 pptv) were investigated at a further measurement site in a boreal forest during the HUMPPA-COPEC 2010 campaign. HUMPPA-COPEC 2010 was carried out in summer 2010 at the SMEAR II station in Hyytiälä, southern Finland. The characteristics of the NOx photostationary state at the two forest sites are compared in this work.
Furthermore, the extensive data set, which includes measurements of trace gases relevant for radical chemistry (OH, HO2) as well as total OH reactivity, allows the current understanding of NOx photochemistry to be tested and improved using a box model constrained by the measured data. Although NOx concentrations during HUMPPA-COPEC 2010 were lower than during PARADE 2011 and BVOC concentrations were higher, the cycling processes of NO and NO2 are fundamentally understood in both cases. The analysis of the NOx photostationary state at the two very different measurement sites shows that potentially unknown processes are not present in either case. The current representation of NOx chemistry was simulated for HUMPPA-COPEC 2010 using the chemical mechanism MIM3*. The simulation results are consistent with the calculations based on the NOx photostationary state.
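For orientation, the photostationary state referred to above is commonly quantified by the Leighton ratio; the following is the standard textbook relation, not a formula quoted from the thesis:

\[
\varphi = \frac{j_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]\,[\mathrm{O_3}]},
\]

where j_{NO2} is the NO2 photolysis frequency and k_{NO+O3} is the rate coefficient of the NO + O3 reaction. Deviations of \varphi from unity at low NOx are what motivate the search for an additional, "unknown" oxidant converting NO to NO2.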
Abstract:
In Malani and Neilsen (1992) we proposed alternative estimators of the survival function (for time to disease) using a simple marker that describes the time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such estimator using two different methods and compare the terms of order 1/n when there is no censoring. In the absence of censoring, the asymptotic variance obtained using the Greenwood-type approach agrees with the exact variance up to terms involving 1/n. However, the asymptotic variance obtained using counting-process theory and the results of Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae obtained with the latter approach give different results.
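For reference, the Greenwood-type variance mentioned above has, for the Kaplan-Meier estimator, the classical form (this is the textbook expression, not the marker-based estimator of the paper):

\[
\widehat{\operatorname{Var}}\bigl[\hat S(t)\bigr] \;=\; \hat S(t)^2 \sum_{t_i \le t} \frac{d_i}{n_i\,(n_i - d_i)},
\]

where d_i deaths occur among the n_i subjects at risk at time t_i; in the absence of censoring this reduces to the binomial variance \hat S(t)\,(1 - \hat S(t))/n.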
Abstract:
While sound and video may capture viewers' attention, interaction can captivate them. This was not available prior to the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows the semi-automatic translation of simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UI) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services and are therefore discussed in the first section of this paper. The following sections present an overview of the operation of the content, service creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of candidate metadata languages for describing the user interface of iTV services, and the schema language adopted in this project. A detailed description of the operation of the two tools is provided to offer insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. Finally, representative broadcast-oriented and telecommunication-oriented converged service components are introduced, demonstrating how these tools have been used to generate different types of services.
Abstract:
Machine learning techniques are used to extract valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that might arise, in both academic and real applications. There are several machine learning techniques, depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when the data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data) and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled while the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because the labeling process may be costly or simply neglected, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or otherwise of the data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, to the learning process. A recent trend in clustering, related to data relevance and called subspace clustering, holds that different clusters may be described by different feature subsets. This differs from traditional solutions to the data relevance problem, in which a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As mentioned above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. In the first algorithm, the available data labels are used to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters using traditional clustering techniques.
The second algorithm uses the available data labels to search for subspaces and clusters at the same time, in an iterative process. This algorithm assigns each instance to each cluster according to a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach. The different proposals are tested using various real and synthetic databases, and comparisons with other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems nowadays: modeling the human brain. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work, since there is no common way of naming neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
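As a rough illustration of the first of the two algorithms described above (known labels are first mapped to a feature subspace, which is then handed to a traditional clustering method), the following is a minimal sketch using scikit-learn. The choice of a random forest for the supervised step, the importance-based feature selection and the use of k-means are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def label_guided_subspace_clustering(X, y, n_clusters, top_k=5):
    """Semi-supervised subspace clustering sketch (hard assignments).

    X      : (n_samples, n_features) data matrix
    y      : label vector, with -1 marking unlabeled instances
    top_k  : number of features kept for the subspace (illustrative choice)
    """
    labeled = y != -1
    # Step 1: map the known labels to a feature subspace via supervised classification.
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X[labeled], y[labeled])
    subspace = np.argsort(forest.feature_importances_)[::-1][:top_k]
    # Step 2: traditional (hard) clustering of all instances, restricted to that subspace.
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X[:, subspace])
    return clusters, subspace
```

The second algorithm would instead iterate between updating a model-based (soft) clustering and re-estimating the relevant subspaces, using the known labels as constraints.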
Abstract:
This paper analyses how the internal resources of small- and medium-sized enterprises determine access (learning processes) to technology centres (TCs) or industrial research institutes (innovation infrastructure) in traditional low-tech clusters. These interactions basically represent traded (market-based) transactions, which constitute important sources of knowledge in clusters. The paper addresses the role of TCs in low-tech clusters, drawing on semi-structured interviews with 80 firms in a manufacturing cluster. The results show that producer-user interactions are the most frequent; thus, the more knowledge-intensive a sector's base, the more likely firms are to use the available research infrastructure. Conversely, sectors with less knowledge-intensive structures, i.e. less absorptive capacity (AC), present weak linkages to TCs, as they frequently prefer to interact with suppliers, who act as transceivers of knowledge. Therefore, not all the firms in a cluster can fully exploit the available research infrastructure, and their AC moderates this engagement. In addition, the existence of TCs is not sufficient: firms also need active search strategies to undertake interactions and openness to the available sources of knowledge. The study has implications for policymakers and academia.
Abstract:
Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment and, ultimately, to ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process into which they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process), and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented especially considers the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).
Abstract:
Retirement from a sporting career represents a turning point in the life of a sportsman. The aim of this study was to determine how the process of withdrawal of professional basketball players unfolds and which factors influence it. Using a qualitative methodology, semi-structured interviews were conducted with 6 professional players, focusing on their experiences during the process. Analysis of the interviews revealed the need to treat this process from a multidimensional perspective, as several factors interact. The results obtained show that the players attach great importance to economic, academic and adjustment difficulties. Consequently, we discuss the need for specific assistance programs for these athletes, regardless of their previous professional level.
Abstract:
Electric probes are objects with sharp boundaries immersed in the plasma which collect or emit charged particles. Consequently, the nearby plasma evolves under abrupt imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of the plasma species, charge separation, and absorbing-emitting walls. Traditional numerical schemes based on finite differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used in both fluid and kinetic descriptions to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method based on an integral advancing scheme that uses approximate Green's functions, also called short-time propagators. All the integrals are calculated numerically, as a path-integration process, which yields a robust grid-free computational integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as current emission from a wall, can be treated during the short-time regime, providing solutions that behave as if they were known analytically at each time step. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. This work proposes a procedure to incorporate, for the first time, the boundary conditions into the numerical integral scheme. The scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, solving fluid diffusion equations self-consistently coupled with the Poisson equation. The stability of this computational method has been verified for any number of iterations, even when advancing electrons and ions with different time scales. This work establishes the basis for dealing, in future work, with problems related to plasma thrusters or emissive probes in electromagnetic fields.
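As a minimal illustration of the integral advancing idea (pure one-dimensional diffusion with a free-space Gaussian propagator, no boundaries, sources or fields, so deliberately far simpler than the method described above), a density can be advanced in time by numerically convolving it with the short-time propagator on a grid:

```python
import numpy as np

# Sketch: advance n(x,t) for pure diffusion, dn/dt = D * d2n/dx2, by repeated
# application of the Gaussian short-time propagator (free-space Green's function).
D, dt = 1.0, 0.01                 # illustrative diffusion coefficient and time step
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
n = np.exp(-x**2)                 # illustrative initial density

def advance(n, x, D, dt):
    """n(x, t+dt) = integral of G(x - x', dt) * n(x', t) dx', G a Gaussian of variance 2*D*dt."""
    var = 2.0 * D * dt
    G = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return (G * n[None, :]).sum(axis=1) * dx   # simple quadrature of the convolution integral

for _ in range(50):
    n = advance(n, x, D, dt)      # total mass (integral of n dx) is approximately conserved
```

In the actual problem, the propagator would be adjusted near the walls and combined with the source-sink terms, the Poisson equation and, where present, the electric and magnetic fields.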
Abstract:
Cellular senescence is defined by the limited proliferative capacity of normal cultured cells. Immortal cells overcome this regulation and proliferate indefinitely. One step in the immortalization process may be the reactivation of telomerase, a ribonucleoprotein complex which, through de novo synthesis of telomeric TTAGGG repeats, can prevent shortening of the telomeres. Here we show that immortal human skin keratinocytes, irrespective of whether they were immortalized by simian virus 40, human papillomavirus 16, or spontaneously, as well as cell lines established from human skin squamous cell carcinomas, exhibit telomerase activity. Unexpectedly, four of nine samples of intact human skin were also telomerase positive. By dissecting the skin we could show that the dermis and cultured dermal fibroblasts were telomerase negative. The epidermis and cultured skin keratinocytes, however, reproducibly exhibited enzyme activity. By separating different cell layers of the epidermis, this telomerase activity could be assigned to the proliferative basal cells. Thus, in addition to hematopoietic cells, the epidermis, another example of a permanently regenerating human tissue, provides a further exception to the hypothesis that all normal human somatic tissues are telomerase deficient. Instead, these data suggest that, in addition to contributing to the permanent proliferation capacity of immortal and tumor-derived keratinocytes, telomerase activity may also play a similar role in the lifetime regenerative capacity of normal epidermis in vivo.
Abstract:
Wrought aluminium alloy sheets are currently produced by two processes: the continuous casting method known as TRC (Twin Roll Continuous Casting) or the traditional DC (Direct Chill) casting of slabs. The two processes impart different microstructural characteristics to the aluminium alloys, which is reflected in their properties. In addition, microstructural variations occur across the thickness, especially in sheets produced by the TRC process. It is therefore important to study the microstructural evolution that takes place during processing and its influence on corrosion resistance. In this work, a comparative study was carried out of the corrosion behaviour and the microstructures of high-purity aluminium AA1199 (99.995% Al) and of the aluminium alloys AA1050 (Fe+Si 0.5%) and AA4006 (Fe+Si 1.8%) produced by the industrial continuous and semi-continuous casting processes. The results showed that the microstructures of the AA4006 DC and AA4006 TRC alloys are distinct, with a larger volume fraction of precipitates observed in the alloy produced by the TRC process compared with DC. To characterize the corrosion behaviour, electrochemical impedance spectroscopy and potentiodynamic polarization tests were performed, which showed greater resistance to localized corrosion for the alloy produced by the TRC process than for the DC process. In addition, in decreasing order, higher corrosion resistance was found for aluminium AA1050, followed by the surface of the AA4006 alloy and, finally, by the centre of the AA4006 sheet. The electrochemical impedance spectroscopy results for the AA4006 alloys produced by the TRC process showed better performance than those produced by the DC process, particularly between 2 and 12 hours of immersion in the sodium sulfate solution contaminated with chloride ions. For immersion times above 4 hours, inductive behaviour at low frequencies was observed for both types of processing, which was associated with the adsorption of chemical species, mainly sulfate ions and oxygen, at the metal/oxide interface. The anodic polarization curves showed greater resistance to localized corrosion for the alloy produced by the TRC process than for the DC process. This behaviour was associated with the different microstructural characteristics observed for the AA4006 alloy obtained by the two processes.
Abstract:
This paper investigates the nonlinear vibration of imperfect shear deformable laminated rectangular plates comprising a homogeneous substrate and two layers of functionally graded materials (FGMs). A theoretical formulation based on Reddy's higher-order shear deformation plate theory is presented in terms of deflection, mid-plane rotations, and the stress function. A semi-analytical method, which makes use of the one-dimensional differential quadrature method, the Galerkin technique, and an iteration process, is used to obtain the vibration frequencies for plates with various boundary conditions. Material properties are assumed to be temperature-dependent. Special attention is given to the effects of sine type imperfection, localized imperfection, and global imperfection on linear and nonlinear vibration behavior. Numerical results are presented in both dimensionless tabular and graphical forms for laminated plates with graded silicon nitride/stainless steel layers. It is shown that the vibration frequencies are very much dependent on the vibration amplitude and the imperfection mode and its magnitude. While most of the imperfect laminated plates show the well-known hard-spring vibration, those with free edges can display soft-spring vibration behavior at certain imperfection levels. The influences of material composition, temperature-dependence of material properties and side-to-thickness ratio are also discussed. (C) 2004 Elsevier Ltd. All rights reserved.
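As background for the one-dimensional differential quadrature method used in the semi-analytical procedure above, here is a minimal sketch of how its first-derivative weighting matrix is commonly built (the standard Lagrange-interpolation formula, not code from the paper):

```python
import numpy as np

def dq_first_derivative_matrix(x):
    """Differential-quadrature weighting matrix A such that A @ f approximates f'(x) on the grid x."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)                  # placeholder to avoid division by zero
    M = diff.prod(axis=1)                        # M(x_i) = prod_{k != i} (x_i - x_k)
    A = M[:, None] / (diff * M[None, :])         # off-diagonal weights a_ij
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))          # a_ii = -sum_{j != i} a_ij
    return A

# Quick check on a polynomial: the weights differentiate x**3 exactly on 9 grid points.
x = np.cos(np.pi * np.arange(9) / 8)             # Chebyshev-Gauss-Lobatto points on [-1, 1]
A = dq_first_derivative_matrix(x)
print(np.allclose(A @ x**3, 3 * x**2))           # True
```

In the paper, the one-dimensional differential quadrature method is combined with the Galerkin technique and an iteration process to obtain the amplitude-dependent frequencies; the matrix above only illustrates the differential quadrature ingredient.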
Abstract:
Magnetic resonance imaging has been used to monitor the diffusion of water at 310 K into a series of semi-IPNs of poly(ethyl methacrylate), PEM, and copolymers of 2-hydroxyethyl methacrylate, HEMA, and tetrahydrofurfuryl methacrylate, THFMA. The diffusion was found to be well described by a Fickian kinetic model in the early stages of the water sorption process, and the diffusion coefficients were found to be slightly smaller than those for the copolymers of HEMA and THFMA, P(HEMA-co-THFMA), containing the same mole fraction of HEMA in the matrix. A second-stage sorption process was identified in the later stage of water sorption by the PEM/PTHFMA semi-IPN and by the systems containing a P(HEMA-co-THFMA) component with a HEMA mole fraction of 0.6 or less. This was characterized by the presence of water near the surface of the cylinders with a longer NMR T-2 relaxation time, which would be characteristic of mobile water, such as water present in large pores or surface fissures. The presence of the drug chlorhexidine in the polymer matrixes at a concentration of 5.625 wt % was found not to modify the properties significantly, but the diffusion coefficients for water sorption were systematically smaller when the drug was present.
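For reference, the early-stage Fickian behaviour mentioned above is usually identified through the textbook short-time sorption relation (written here for a plane sheet of thickness 2l; the prefactor differs for the cylindrical samples used in this study, but the square-root-of-time dependence is the same):

\[
\frac{M_t}{M_\infty} \approx \frac{2}{l}\sqrt{\frac{Dt}{\pi}},
\]

so that the fractional mass uptake M_t/M_\infty is initially linear in \sqrt{t}, with a slope proportional to \sqrt{D}.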
Abstract:
The study aimed to examine the factors influencing referral to rehabilitation following traumatic brain injury (TBI) by using social problems theory as a conceptual model to focus on practitioners and the process of decision-making in two Australian hospitals. The research design involved semi-structured interviews with 18 practitioners and observations of 10 team meetings, and was part of a larger study on factors influencing referral to rehabilitation in the same settings. Analysis revealed that referral decisions were influenced primarily by practitioners' selection and their interpretation of clinical and non-clinical patient factors. Further, practitioners generally considered patient factors concurrently during an ongoing process of decision-making, with the combinations and interactions of these factors forming the basis for interpretations of problems and referral justifications. Key patient factors considered in referral decisions included functional and tracheostomy status, time since injury, age, family, place of residence and Indigenous status. However, rate and extent of progress, recovery potential, safety and burden of care, potential for independence and capacity to cope were five interpretative themes, which emerged as the justifications for referral decisions. The subsequent negotiation of referral based on patient factors was in turn shaped by the involvement of practitioners. While multi-disciplinary processes of decision-making were the norm, allied health professionals occupied a central role in referral to rehabilitation, and involvement of medical, nursing and allied health practitioners varied. Finally, the organizational pressures and resource constraints, combined with practitioners' assimilation of the broader efficiency agenda were central factors shaping referral. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
As process management projects have increased in size due to globalised and company-wide initiatives, a corresponding growth in the size of process modeling projects can be observed. Despite advances in languages, tools and methodologies, several aspects of these projects have been largely ignored by the academic community. This paper makes a first contribution to a potential research agenda in this field by defining the characteristics of large-scale process modeling projects and proposing a framework of related issues. These issues are derived from a semi-structured interview and six focus groups conducted in Australia, Germany and the USA with enterprise and modeling software vendors and customers. The focus groups confirm the existence of unresolved problems in business process modeling projects. The outcomes provide a research agenda which directs researchers towards further studies in global process management, process model decomposition and the overall governance of process modeling projects. It is expected that this research agenda will provide guidance to researchers and practitioners by focusing on areas of high theoretical and practical relevance.