988 results for Semi-Regenerative Process
Abstract:
Semi-supervised learning techniques have gained increasing attention in the machine learning community as a result of two main factors: (1) the amount of available data is growing exponentially; (2) the task of data labeling is cumbersome and expensive, involving human experts in the process. In this paper, we propose a network-based semi-supervised learning method inspired by the modularity greedy algorithm, which was originally applied to unsupervised learning. Changes have been made in the modularity maximization process so as to adapt the model to propagate labels throughout the network. Furthermore, a network reduction technique is introduced, together with an extensive analysis of its impact on the network. Computer simulations are performed on artificial and real-world databases, providing a quantitative basis for the performance of the proposed method.
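A minimal sketch of this kind of modularity-guided label propagation, assuming an unweighted undirected graph stored as an adjacency dict and a few seed labels; the function name and the specific gain expression are illustrative and not taken from the paper:

```python
# Illustrative sketch: greedy, modularity-guided label propagation on a graph.
# Assumptions (not from the paper): unweighted undirected graph as an adjacency
# dict, a handful of seed labels, and a gain term proportional to the standard
# modularity gain used by greedy community-merging algorithms.

def propagate_labels(adj, seeds):
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0   # number of edges
    degree = {v: len(adj[v]) for v in adj}
    labels = dict(seeds)                                 # node -> class label

    # Total degree already attached to each class (used in the gain term).
    class_degree = {}
    for v, c in labels.items():
        class_degree[c] = class_degree.get(c, 0) + degree[v]

    unlabeled = set(adj) - set(labels)
    while unlabeled:
        best = None   # (gain, node, class)
        for v in unlabeled:
            # Count edges from v into each already-labeled class.
            links = {}
            for u in adj[v]:
                if u in labels:
                    links[labels[u]] = links.get(labels[u], 0) + 1
            for c, k_in in links.items():
                # Modularity-style gain of attaching v to class c (up to a constant factor).
                gain = k_in / (2.0 * m) - degree[v] * class_degree[c] / (2.0 * m) ** 2
                if best is None or gain > best[0]:
                    best = (gain, v, c)
        if best is None:          # remaining nodes have no labeled neighbor yet
            break
        _, v, c = best
        labels[v] = c
        class_degree[c] = class_degree.get(c, 0) + degree[v]
        unlabeled.remove(v)
    return labels

# Example: two loosely connected triangles with one seed label each.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(propagate_labels(adj, {0: "A", 5: "B"}))
```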
Abstract:
This article discusses the difficulties dairy farmers face when they decide to set up a new type of production on their units. We discuss the nature of the new competencies the farmers must construct in order to set up new production ateliers, and show the complexity of the means they use, the difficulties they face in this process, and the strategies they develop in consonance with the practical knowledge of their profession. The method used was Ergonomic Work Analysis, together with semi-structured interviews conducted after sessions of observation and work analysis. The results show that it is possible to apprehend part of the complexity of the process by which dairy farmers construct competencies, and the diversity of resources they mobilize, integrate and transfer in this construction process, which materializes through their activities in the work context.
Abstract:
Semi-supervised learning is a classification paradigm in which only a few labeled instances are available for the training process. To overcome this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, and data items with different labels compete with one another to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified when the system reaches its stable state. A stability analysis of the proposed dynamical system is performed, and some heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
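A toy numpy sketch of such force-driven classification, assuming Euclidean data, a softened inverse-square attraction, and assignment of each unlabeled point to the class that pulls on it most strongly after the system relaxes; the force law and constants are illustrative, not those of the proposed technique:

```python
import numpy as np

# Toy sketch of force-based semi-supervised classification: labeled points
# attract unlabeled points, which drift under the net force; each unlabeled
# point is finally assigned to the class pulling on it most strongly.
# Force law, softening and step size are illustrative choices.

def attraction_classify(X_lab, y_lab, X_unl, steps=200, dt=0.05, soft=0.5):
    X_unl = X_unl.astype(float).copy()
    classes = np.unique(y_lab)

    def class_pull(c, positions):
        pts = X_lab[y_lab == c]
        diff = pts[None, :, :] - positions[:, None, :]    # (n_unl, n_c, dim)
        dist = np.linalg.norm(diff, axis=2) + 1e-9
        w = 1.0 / (dist ** 2 + soft)                      # softened inverse-square weight
        force = (diff / dist[:, :, None] * w[:, :, None]).sum(axis=1)
        return force, w.sum(axis=1)

    for _ in range(steps):
        force = np.zeros_like(X_unl)
        for c in classes:
            f, _ = class_pull(c, X_unl)
            force += f
        X_unl += dt * force                               # relax toward the attractors

    strengths = np.vstack([class_pull(c, X_unl)[1] for c in classes])
    return classes[np.argmax(strengths, axis=0)]

# Two blobs, one labeled prototype per class, the rest unlabeled.
rng = np.random.default_rng(0)
X_unl = np.vstack([rng.normal([0, 0], 0.3, (20, 2)), rng.normal([3, 3], 0.3, (20, 2))])
X_lab = np.array([[0.0, 0.0], [3.0, 3.0]])
print(attraction_classify(X_lab, np.array([0, 1]), X_unl))
```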
Abstract:
A broad variety of solid-state NMR techniques were used to investigate chain dynamics in several polyethylene (PE) samples, including ultrahigh molecular weight PEs (UHMW-PEs) and low molecular weight PEs (LMW-PEs). By changing the processing history, i.e. melt/solution crystallization and drawing processes, these samples acquire different morphologies, leading to different molecular dynamics. Owing to their long-chain nature, the molecular dynamics of polyethylene can be divided into local fluctuations and long-range motion. With the help of NMR, these different kinds of molecular dynamics can be monitored separately. In this work, the local chain dynamics in the non-crystalline regions of polyethylene samples were investigated by measuring the 1H-13C heteronuclear dipolar coupling and the 13C chemical shift anisotropy (CSA). By analyzing the motionally averaged 1H-13C heteronuclear dipolar coupling and 13C CSA, information about the local anisotropy and geometry of motion was obtained. Taking advantage of the large difference in 13C T1 relaxation times between the crystalline and non-crystalline regions of PEs, the 1D 13C MAS exchange experiment was used to investigate the cooperative chain motion between these regions. The different chain organizations in the non-crystalline regions were used to explain the relationship between the local fluctuations and the long-range motion of the samples. Put simply, the cooperative chain motion between the crystalline and non-crystalline regions of PE results in the experimentally observed diffusive behavior of the PE chain. The morphological influences on the diffusive motion are discussed; the morphological factors include lamellar thickness, chain organization in the non-crystalline regions and chain entanglements. The thermodynamics of the diffusive motion in melt- and solution-crystallized UHMW-PEs is discussed, revealing entropy-controlled features of chain diffusion in PE. This thermodynamic consideration explains the counterintuitive relationship between the local fluctuations and the long-range motion of the samples. Using the chain diffusion coefficient, the rates of jump motion in the crystals of melt-crystallized PE were calculated. A concept of "effective" jump motion is proposed to explain the difference between the values derived from the chain diffusion coefficients and those reported in the literature. The observations of this thesis clearly demonstrate the strong relationship between sample morphology and chain dynamics. The sample morphologies governed by the processing history impose different spatial constraints on the molecular chains, leading to different features of the local and long-range chain dynamics. Knowledge of the morphological influence on microscopic chain motion has many implications for our understanding of the alpha-relaxation process in PE and related phenomena such as crystal thickening, the drawability of PE, the easy creep of PE fibers, etc.
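As a point of reference for the jump-rate calculation mentioned above, jump rates in the crystalline stems are commonly estimated from the measured chain diffusion coefficient through the one-dimensional random-walk relation (quoted here in its textbook form; the thesis may use a refined expression):

\[
D \;\approx\; \tfrac{1}{2}\,a^{2}\,k
\quad\Longrightarrow\quad
k \;\approx\; \frac{2D}{a^{2}},
\]

where \(D\) is the chain diffusion coefficient between crystalline and non-crystalline regions, \(a\) the elementary jump length (on the order of a single CH2 translation along the chain axis) and \(k\) the jump rate; the "effective" jump concept addresses the gap between such estimates and values reported elsewhere.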
Abstract:
Supercritical Emulsion Extraction technology (SEE-C) was proposed for the production of poly(lactic-co-glycolic acid) (PLGA) microcarriers. SEE-C operating parameters such as pressure, temperature and flow rate ratios were analyzed, and the process performance was optimized in terms of size distribution and encapsulation efficiency. Microdevices loaded with bovine serum insulin were produced with different sizes (2 and 3 µm) or insulin loadings (3 and 6 mg/g) and with an encapsulation efficiency of 60%. The microcarriers were characterized in terms of their insulin release profile in two different media (PBS and DMEM), and the diffusion and degradation constants were also estimated using a mathematical model. PLGA microdevices were also used in a culture of embryonic ventricular myoblasts (rat cell line H9c2) in an FBS-free medium to monitor cell viability and growth as a function of the insulin released. Good cell viability and growth were observed on 3 µm microdevices loaded with 3 mg/g of insulin. PLGA microspheres loaded with growth factors (GFs) were charged into alginate scaffolds with human Mesenchymal Stem Cells (hMSCs) for bone tissue engineering, with the aim of monitoring the effect of the local release of these signals on cell differentiation. These "living" 3D scaffolds were incubated in a direct-perfusion tubular bioreactor to enhance nutrient transport and expose the cells to a given shear stress. Different GFs, namely h-VEGF, h-BMP2 and a 1:1 mix of the two, were loaded, and alginate beads were recovered from dynamic (tubular perfusion system bioreactor) and static cultures at different time points (days 1, 7 and 21) for analytical assays such as live/dead, alkaline phosphatase, osteocalcin, osteopontin and von Kossa staining. The immunoassays consistently confirmed better cell differentiation in the bioreactor than in the static culture and revealed a strong influence of the BMP-2 released in the scaffold on cell differentiation.
Abstract:
The objective of this work is to develop new approaches for the preparation of structured composite particles in aqueous media, which can be regarded as the formation of precisely defined heterogeneous structures in colloidal systems. In general, two different approaches were developed, which differ in the origin of the heterogeneous structures formed: heterogeneity or homogeneity. The first approach is based on the aggregation of heterogeneous phases to form structured colloidal particles with heterogeneity in the underlying chemistry, whereas the second approach relies on the formation of heterogeneous phases within colloidal particles from homogeneous mixtures by controlled phase separation. In detail, the first part of the dissertation deals with a new preparation method for semicrystalline composite colloidal particles of high stability, based on the aggregation of liquid monomer droplets onto semicrystalline polyacrylonitrile particles. After aggregation, highly stable dispersions consisting of structured, semicrystalline composite particles were obtained by free-radical polymerization, whereas directly mixing the PAN dispersions with methacrylate polymer dispersions led to immediate coagulation. Depending on the glass transition temperature of the methacrylate polymer, the subsequent free-radical polymerization leads to the formation of raspberry or core-shell particles. The particles prepared in this way are able to form continuous films with embedded semicrystalline phases, which can be applied as oxygen barriers. The second part of the dissertation describes a new method for the preparation of structured thermoset-thermoplastic composite colloidal particles. The formation of a thermoset network with a thermoplastic shell was achieved in two steps via distinct, separate polymerization mechanisms: polyaddition and free-radical polymerization. Stable miniemulsions consisting of bisphenol-F-based epoxy resin, phenalkamine-based hardener and vinyl monomers were obtained; they were prepared by ultrasonication with subsequent curing at different temperatures as so-called seed emulsions. Further vinyl monomers were added and subsequently polymerized, leading to the formation of core-shell, i.e. thermoset-thermoplastic, colloidal particles. In both cases, a chemically induced phase separation takes place between the thermoset and the thermoplastic phase, which is essential for the formation of heterogeneous structures. The composite particles prepared in this way are able to form transparent films which, under suitable conditions, provide significantly improved mechanical properties compared with pure thermoset films.
Abstract:
Nitric oxide (NO) and nitrogen dioxide (NO2) play an important role in the self-cleansing capacity of the atmosphere. These trace gases determine the photochemical production of ozone (O3) and influence the abundance of hydroxyl (OH) and nitrate (NO3) radicals. During daytime, when sufficient solar radiation and ozone are present, NO and NO2 are in a rapid photochemical equilibrium, the photostationary state. The sum of NO and NO2 is therefore referred to as NOx. Previous studies of the NOx photostationary state comprise measurements at a wide variety of locations, ranging from cities (characterized by heavy air pollution) to remote regions (characterized by low air pollution). While the photochemical cycling of NO and NO2 is fundamentally understood under conditions of elevated NOx concentrations, there are significant gaps in the understanding of the underlying cycling processes in rural and remote regions characterized by lower NOx concentrations. These gaps could be caused by instrumental NO2 interferences, particularly for indirect detection methods, which can be affected by artifacts. At very low NOx concentrations, and when instrumental NO2 interferences can be excluded, it is often concluded that these gaps in understanding are linked to the existence of an "unknown oxidant". In this work, the NOx photostationary state is analyzed with the aim of investigating the potential existence of hitherto unknown processes. A gas analyzer for the direct measurement of atmospheric NO2 by laser-induced fluorescence (LIF), GANDALF, was newly developed and deployed for field measurements for the first time during the PARADE 2011 campaign. The PARADE measurements were carried out in summer 2011 in a rural area in Germany. Extensive NO2 measurements using different techniques (DOAS, CLD and CRD) enabled a detailed and successful comparison of GANDALF with the other NO2 measurement techniques. Further relevant trace gases and meteorological parameters were measured in order to study the NOx photostationary state in this environment based on the NO2 measurements with GANDALF. During PARADE, moderate NOx mixing ratios were observed at the site (10^2-10^4 pptv). Mixing ratios of biogenic volatile organic compounds (BVOCs) from the surrounding, mainly coniferous, forest were of the order of 10^2 pptv. The characteristics of the NOx photostationary state at low NOx mixing ratios (10-10^3 pptv) were examined for a further site in a boreal forest during the HUMPPA-COPEC 2010 campaign, which took place in summer 2010 at the SMEAR II station in Hyytiälä, southern Finland. The characteristic features of the NOx photostationary state at the two forest sites are compared in this work.
Furthermore, the extensive data set, which includes measurements of trace gases relevant to radical chemistry (OH, HO2) as well as the total OH reactivity, makes it possible to test and improve the current understanding of NOx photochemistry using a box model constrained by the measured data. Although NOx concentrations were lower and BVOC concentrations higher during HUMPPA-COPEC 2010 than during PARADE 2011, the cycling of NO and NO2 is fundamentally understood in both cases. The analysis of the NOx photostationary state at the two very different sites shows that potentially unknown processes are not present in either case. The current representation of NOx chemistry was simulated for HUMPPA-COPEC 2010 using the chemical mechanism MIM3*. The simulation results are consistent with the calculations based on the NOx photostationary state.
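For reference, the photostationary state discussed above is conventionally quantified by the Leighton ratio (quoted here in its standard textbook form, not as a result of this thesis):

\[
\phi \;=\; \frac{j(\mathrm{NO_2})\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]\,[\mathrm{O_3}]}.
\]

If O3 were the only oxidant converting NO back to NO2, then \(\phi \approx 1\); values significantly above unity indicate additional NO-to-NO2 conversion by peroxy radicals (HO2, RO2) or, once these are accounted for, by the "unknown oxidant" invoked in earlier studies.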
Abstract:
In Malani and Neilsen (1992) we proposed alternative estimates of the survival function (for time to disease) using a simple marker that describes the time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such proposed estimator using two different methods and compare terms of order 1/n when there is no censoring. In the absence of censoring, the asymptotic variance obtained using the Greenwood-type approach converges to the exact variance up to terms involving 1/n. But the asymptotic variance obtained using counting process theory and the results of Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae obtained using the latter approach give different results.
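For orientation, the Greenwood-type approach referred to above parallels the classical Greenwood variance for the Kaplan-Meier estimator, quoted here in its standard form (the marker-based estimator of Malani and Neilsen generalizes this setting rather than using it verbatim):

\[
\widehat{\operatorname{Var}}\bigl[\hat S(t)\bigr] \;=\; \hat S(t)^{2} \sum_{t_i \le t} \frac{d_i}{n_i\,(n_i - d_i)},
\]

where \(d_i\) is the number of events at time \(t_i\) and \(n_i\) the number at risk just before \(t_i\).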
Abstract:
While sound and video may capture viewers' attention, interaction can captivate them. This was not available prior to the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created a demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows for the semi-automatic translation of simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UI) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services and are therefore discussed in the first section of this paper. The following sections give an overview of the operation of the content, service, creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of candidate metadata languages for describing the iTV service user interface, together with the schema language adopted in this project. A detailed description of the operation of the two tools is then provided to offer insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. Finally, representative broadcast-oriented and telecommunication-oriented converged service components are introduced, demonstrating how these tools have been used to generate different types of services.
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques in order to solve the new challenges that might arise, in both academic and real applications. There are several machine learning techniques depending on both the data characteristics and the purpose. Unsupervised classification or clustering is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data) and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost or neglect of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or not of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process. A recent clustering tendency, related to data relevance and called subspace clustering, claims that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As commented above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and data characteristics. Hence, in the first goal three known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated by using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. In the first algorithm, the available data labels are used to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters using traditional clustering techniques.
The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. This algorithm assigns each instance to each cluster based on a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach. The different proposals are tested using different real and synthetic databases, and comparisons to other methods are included when appropriate. Finally, as an example of a real, current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems today: modeling the human brain. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work, as there is no common way to name neurons. Therefore, machine learning techniques may help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
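A compact sketch of the first strategy described above (labels first select a subspace, then traditional clustering runs in that subspace), using off-the-shelf scikit-learn components; the specific selector, number of features and number of clusters are illustrative choices, not those of the thesis:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cluster import KMeans

# Sketch of "labels -> subspace -> hard clustering":
# 1) use the few labeled instances to rank features (supervised step),
# 2) keep the top-k features as the subspace,
# 3) cluster ALL instances (labeled + unlabeled) in that subspace.
# Selector, k and the number of clusters are illustrative assumptions.

def semisupervised_subspace_clustering(X, y, k_features=2, n_clusters=2, seed=0):
    labeled = y >= 0                                   # convention: -1 means unlabeled
    selector = SelectKBest(f_classif, k=k_features).fit(X[labeled], y[labeled])
    X_sub = selector.transform(X)                      # project everything to the subspace
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_sub)
    return km.labels_, selector.get_support(indices=True)

# Two clusters separated only in the first two features; features 2-4 are noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
X[:, 2:] = rng.normal(0, 1, (60, 3))                   # irrelevant dimensions
y = np.full(60, -1)
y[:3], y[30:33] = 0, 1                                 # a few labeled seeds per class
labels, subspace = semisupervised_subspace_clustering(X, y)
print("selected features:", subspace)
print("cluster sizes:", np.bincount(labels))
```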
Abstract:
This paper analyses how the internal resources of small- and medium-sized enterprises determine access (learning processes) to technology centres (TCs) or industrial research institutes (innovation infrastructure) in traditional low-tech clusters. These interactions basically represent traded (market-based) transactions, which constitute important sources of knowledge in clusters. The paper addresses the role of TCs in low-tech clusters, drawing on semi-structured interviews with 80 firms in a manufacturing cluster. The results point out that producer-user interactions are the most frequent; thus, the stronger a sector's knowledge-intensive base, the more likely the utilization of the available research infrastructure becomes. Conversely, sectors with less knowledge-intensive structures, i.e. less absorptive capacity (AC), present weak linkages to TCs, as they frequently prefer to interact with suppliers, who act as transceivers of knowledge. Therefore, not all the firms in a cluster can fully exploit the available research infrastructure, and their AC moderates this engagement. In addition, the existence of TCs is not sufficient: a firm's search strategies must also play an active role in undertaking interactions and maintaining openness to the available sources of knowledge. The study has implications for policymakers and academia.
Abstract:
Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment and, ultimately, to ensuring that software products, contents and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process into which they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process) and providing templates of these artifacts tailored to accessibility evaluation tools. The work presented especially considers the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).
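To make the notion of an "accessibility evaluation functionality" concrete, the sketch below implements one narrow automated check (images should carry a text alternative, in the spirit of WCAG success criterion 1.1.1) using only the Python standard library; it is an illustrative fragment, not the ERT WG tooling or the process model discussed in the paper:

```python
from html.parser import HTMLParser

# Minimal illustrative check: flag <img> elements without a non-empty alt
# attribute (in the spirit of WCAG success criterion 1.1.1). Real evaluation
# tools cover many more criteria and typically report in formats such as EARL.

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.findings.append(f"<img src={attrs.get('src', '?')!r}> lacks alt text")

def check_html(html):
    checker = ImgAltChecker()
    checker.feed(html)
    return checker.findings

sample = '<p><img src="logo.png"><img src="chart.png" alt="Monthly sales chart"></p>'
for finding in check_html(sample):
    print(finding)
```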
Abstract:
Retirement from a sports career represents a turning point in the life of an athlete. The aim of this study was to determine how the process of withdrawal unfolds for professional basketball players and which factors influence it. Using a qualitative methodology, semi-structured interviews were conducted with 6 professional players, focusing on their experiences during the process. Analysis of the interviews revealed the need to treat this process from a multidimensional perspective, as several factors interact. The results show that the players assign great importance to economic, academic and adjustment difficulties. Consequently, we discuss the need for specific assistance programs for these athletes, regardless of their previous professional level.