984 results for software studies
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
At the beginning of its 10th year of existence, Facebook had engaged and connected 1.2 billion monthly active users. This article-based dissertation, Disconnect.Me – User Engagement and Facebook, approaches this engagement from the opposite direction: disconnection. The research articles focus on social media specific phenomena, including leaving Facebook, tactical media works such as Web 2.0 SuicideMachine, memorializing dead Facebook users, and Facebook trolling. The media theoretical framework for this study is built around affect theory, software studies, biopolitics, and different critical studies of new media. The argument is that disconnection is a necessary condition of social media connectivity, and that exploring social media through disconnection – as an empirical phenomenon, a future potential, and a theoretical notion – helps us to understand how users are engaged with social media, its uses, and the business models built on them. The results of the study indicate that engagement is a relation that precedes user participation, the notion more often used to conceptualize social media. Furthermore, this engagement turns the focus from users' actions towards the platform and how the platform actively controls users and their behavior. Facebook aims to engage new users and retain old ones by renewing its platform and user interface. User engagement with the platform is thus social but also technical and affective. When engaged, the user is positioned within algorithmic connectivity, where machinic processes mine user data. This data is not only sold but also used to affect and engage other users. At the heart of this study is the notion that our networked engagements matter, and that disconnection can take us to the current limits of network culture.
Abstract:
In Marxist frameworks, "distributive justice" depends on extracting value through a centralized state. Many new social movements – the peer-to-peer economy, maker activism, community agriculture, queer ecology, and so on – take the opposite approach, keeping value in its unalienated form and allowing it to circulate freely from the bottom up. Unlike Marxism, however, these movements have no general theory of bottom-up, unalienated value circulation. This paper examines the concept of "generative justice" through an historical contrast between Marx's writings and the indigenous cultures that he drew upon. Marx erroneously concluded that while indigenous cultures had unalienated forms of production, only centralized value extraction could deliver the productivity needed for a high quality of life. To the contrary, indigenous cultures now provide a robust model for the "gift economy" that underpins open source technological production, agroecology, and restorative approaches to civil rights. Expanding Marx's concept of unalienated labor value to include unalienated ecological (nonhuman) value, as well as the domain of freedom in speech, sexual orientation, spirituality and other forms of "expressive" value, we arrive at an historically informed perspective on generative justice.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned mainly with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited for implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction to automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the first and a very important step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data, for which the GRNN is proposed as an efficient model. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; the classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and the assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly and easy-to-use interface.
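For illustration, the GRNN proposed for automatic mapping is, at its core, Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel weighted average of all observed values, governed by a single smoothing parameter. A minimal Python sketch with toy coordinates (this is not the Machine Learning Office implementation; the function name and data are invented for the example):

    import numpy as np

    def grnn_predict(train_xy, train_z, query_xy, sigma=0.5):
        # GRNN prediction = Nadaraya-Watson kernel regression: a
        # Gaussian-kernel weighted average of all training values.
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))    # weights, queries x train points
        return (w @ train_z) / w.sum(axis=1)    # weighted mean per query point

    # Toy usage: estimate a value at an unsampled location
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    vals = np.array([1.0, 2.0, 2.0, 3.0])
    print(grnn_predict(pts, vals, np.array([[0.5, 0.5]]), sigma=0.3))  # ~2.0

In practice the smoothing parameter sigma would be tuned, for example by cross-validation, which is what makes the approach usable for fully automatic mapping.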
Abstract:
The study examines international cooperation in product development in software development organisations. The software industry is known for its global nature and knowledge intensity, which makes it an interesting setting in which to examine international cooperation. Software development processes are increasingly distributed worldwide, but for the small and even medium-sized enterprises typical of the software industry, such distribution of operations is often possible only by crossing the company's boundaries. The strategic decision-making of companies is likely to be affected by the characteristics of the industry, and this includes decisions about cooperation and sourcing. The objective of this thesis is to provide a holistic view of the factors affecting decisions about offshore sourcing in software development. Offshore sourcing refers to a cooperative mode of offshoring in which a firm does not establish its own presence in a foreign country but utilises a local supplier. The study examines product development activities that are distributed across organisational and geographical boundaries. The objective can be divided into two subtopics: general reasons for international cooperation in product development, and particular reasons for cooperation between Finnish and Russian companies. The focus is on the strategic rationale at the company level, in particular in small and medium-sized enterprises. The theoretical discourse of the study builds upon the literature on international cooperation and networking, with particular focus on cooperation with foreign suppliers and within product development activities. The resource-based view is also discussed, as the heterogeneity and interdependency of the resources possessed by different firms are seen as factors motivating international cooperation. Strategically, sourcing can be used to access resources possessed by an industrial network, to enhance a firm's product development, or to optimise its cost structure. In order to investigate the issues raised by the theoretical review, two empirical studies on international cooperation in software product development were conducted. The emphasis of the empirical part of the study is on cooperation between Finnish and Russian companies. The data were gathered through four case studies of Finnish software development organisations and four case studies of Russian offshore suppliers. Based on the case study material, a framework clarifying and grouping the factors that influence offshore sourcing decisions was built. The findings indicate that decisions regarding offshore sourcing in software development are far more complex than generally assumed. The framework provides a holistic view of the factors affecting offshore sourcing decisions in software development, capturing the multidimensionality of the motives for entering offshore cooperation. Four groups of factors emerged from the data: A) strategy-related aspects, B) aspects related to resources and capabilities, C) organisation-related aspects, and D) aspects related to the entrepreneur or management. By developing a holistic framework of decision factors, the research offers an in-depth theoretical understanding of the offshore sourcing rationale in product development. From the managerial point of view, the proposed framework sums up the issues a firm should pay attention to when contemplating product development cooperation with foreign suppliers. Understanding the different components of sourcing decisions can improve the preconditions for strategising about and engaging in offshore cooperation. A thorough decision-making process should carefully consider all the possible benefits and risks of product development cooperation.
Abstract:
Software quality has become an important research subject, not only in the information and communication technology sphere but also in other industries at large where software is applied. Software quality is not happenstance; it is defined, planned, and built into the software product throughout the software development life cycle. The research objective of this study is to investigate the roles of the human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data were collected from 13 software companies and include 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but that the human factors involved in the software development process determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process in which organizational structures, mode of operation, and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction.
Abstract:
Based on the empirical evidence that the ratio of email messages on public mailing lists to versioning-system commits has remained relatively constant throughout the history of the Apache Software Foundation (ASF), this paper studies what can be inferred from such a metric for projects of the ASF. We have found that it appears to be an intensive metric: it is independent of the size of the project, its activity, and the number of developers, and remains relatively independent of the technology or functional area of the project. Our analysis provides evidence that the metric is related to the technical effervescence and popularity of a project, and as such is a good candidate for measuring its healthy evolution. Other similar metrics, such as the ratio of developer messages to commits and the ratio of issue-tracker messages to commits, are studied for several projects as well, in order to see whether they have similar characteristics.
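For illustration, the metric itself is trivial to compute once the two event counts are extracted from the mailing-list archives and the version-control history. A minimal Python sketch (the project names and figures below are hypothetical, not measured values from the paper):

    from dataclasses import dataclass

    @dataclass
    class ProjectActivity:
        name: str
        list_messages: int   # messages on the project's public mailing lists
        commits: int         # commits recorded in the versioning system

    def messages_per_commit(p: ProjectActivity) -> float:
        # The intensive metric under study: mailing-list messages per
        # commit over the same observation period.
        return p.list_messages / p.commits if p.commits else float("nan")

    # Hypothetical figures, for illustration only
    for p in (ProjectActivity("httpd", 5200, 1300),
              ProjectActivity("commons-lang", 880, 240)):
        print(f"{p.name}: {messages_per_commit(p):.2f} messages/commit")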
Abstract:
The objective of this research is to identify the factors that influence the migration from free software to proprietary software, or vice versa. The theoretical framework was developed in light of the Diffusion of Innovations Theory (DIT) proposed by Rogers (1976, 1995) and the Unified Theory of Acceptance and Use of Technology (UTAUT) proposed by Venkatesh, Morris, Davis and Davis (2003). The research was structured in two phases: the first, exploratory phase was characterized by adjusting the revised theory to the Brazilian reality and identifying companies that could be the subject of investigation; in the second, qualitative phase, case studies were conducted at ArcelorMittal Tubarão (AMT), a private company that migrated from proprietary software (Unix) to free software (Linux), and at the city government of Serra, in Espírito Santo state, a public organization that migrated from free software (OpenOffice) to proprietary software (MS Office). The results show that the software migration decision takes into account factors that go beyond technical or cost aspects, such as cultural barriers, user rejection and resistance to change. These results underscore the importance of social aspects, which can play a decisive role in the decision regarding software migration and its successful implementation.
Abstract:
For obtaining accurate and reliable gene expression results, it is essential that quantitative real-time RT-PCR (qRT-PCR) data are normalized with appropriate reference genes. The current exponential increase in postgenomic studies on the honey bee, Apis mellifera, makes the standardization of qRT-PCR results an important task for ongoing community efforts. To this end, we selected four candidate reference genes (actin, ribosomal protein 49, elongation factor 1-alpha, tbp-association factor) and used three software-based approaches (geNorm, BestKeeper and NormFinder) to evaluate the suitability of these genes as endogenous controls. Their expression was examined during honey bee development, in different tissues, and after juvenile hormone exposure. Furthermore, the importance of choosing an appropriate reference gene was investigated for two developmentally regulated target genes. The results led us to consider all four candidate genes suitable for normalization in A. mellifera; however, each condition evaluated in this study revealed a specific set of genes as the most appropriate.
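For illustration, the stability measure at the core of one of these approaches, geNorm, is the gene-stability value M: the average standard deviation of a candidate's pairwise log2 expression ratios with every other candidate. A minimal Python sketch under that definition (the toy data are random, not from the study):

    import numpy as np

    def genorm_m(expr):
        # geNorm stability measure M (Vandesompele et al., 2002): for each
        # candidate gene, the average standard deviation of its log2
        # expression ratio against every other candidate, across samples.
        # Lower M = more stable. `expr` holds relative quantities on a
        # linear scale, shape (samples, genes).
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m = np.empty(n_genes)
        for j in range(n_genes):
            m[j] = np.mean([np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                            for k in range(n_genes) if k != j])
        return m

    # Toy data: 5 samples x 4 candidates (actin, rp49, ef1-alpha, tbp-af)
    expr = np.random.default_rng(0).lognormal(sigma=0.2, size=(5, 4))
    print(genorm_m(expr))   # one M value per candidate gene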
Abstract:
Functional brain imaging techniques such as functional MRI (fMRI), which allow in vivo investigation of the human brain, have been increasingly employed to address the neurophysiological substrates of emotional processing. Despite the growing number of fMRI studies in the field, taken separately these individual imaging studies show contrasting findings and variable pictures, and are unable to definitively characterize the neural networks underlying each specific emotional condition. The different imaging packages, as well as the statistical approaches used for image processing and analysis, probably play a detrimental role by increasing the heterogeneity of findings. In particular, it is unclear to what extent the observed neurofunctional response of the brain cortex during emotional processing depends on the fMRI package used in the analysis. In this pilot study, we performed a double analysis of an fMRI dataset using emotional faces. The Statistical Parametric Mapping (SPM) version 2.6 (Wellcome Department of Cognitive Neurology, London, UK) and XBAM 3.4 (Brain Imaging Analysis Unit, Institute of Psychiatry, King's College London, UK) programs, which use parametric and non-parametric analysis respectively, were used to assess our results. Both packages revealed that the processing of emotional faces was associated with increased activation in the brain's visual areas (occipital, fusiform and lingual gyri), the cerebellum, the parietal cortex, the cingulate cortex (anterior and posterior cingulate), and the dorsolateral and ventrolateral prefrontal cortex. However, a blood oxygenation level-dependent (BOLD) response in the temporal regions, insula and putamen was evident in the XBAM analysis but not in the SPM analysis. Overall, the SPM and XBAM analyses revealed comparable whole-group brain responses. Further studies are needed to explore the compatibility of the different imaging packages in other cognitive and emotional processing domains.
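For illustration, the parametric/non-parametric distinction between the two packages can be reduced to a single-voxel example: a parametric test draws its p-value from a theoretical t distribution, while a permutation (sign-flip) test rebuilds the null distribution from the data itself. The Python sketch below is a generic illustration of that distinction, not the actual SPM or XBAM pipeline, and the subject data are simulated:

    import numpy as np
    from scipy import stats

    def parametric_p(contrast):
        # Parametric route: one-sample t-test against zero, with the
        # p-value taken from the theoretical t distribution.
        return stats.ttest_1samp(contrast, 0.0).pvalue

    def permutation_p(contrast, n_perm=10_000, seed=0):
        # Non-parametric route: sign-flipping rebuilds the null
        # distribution of the t statistic instead of assuming its shape.
        rng = np.random.default_rng(seed)
        t_obs = abs(stats.ttest_1samp(contrast, 0.0).statistic)
        flips = rng.choice([-1.0, 1.0], size=(n_perm, contrast.size))
        t_null = np.abs(stats.ttest_1samp(flips * contrast, 0.0,
                                          axis=1).statistic)
        return float((t_null >= t_obs).mean())

    # Contrast estimates at one voxel for 12 simulated subjects
    voxel = np.random.default_rng(1).normal(0.4, 1.0, size=12)
    print(parametric_p(voxel), permutation_p(voxel))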
Abstract:
Oral squamous cell carcinoma (OSCC) accounts for more than 90% of the malignant neoplasms that arise in the mucosa of the upper aerodigestive tract. Recent studies of cleft lip/palate have shown associations with genes involved in cancer. WNT pathway genes have been associated with several types of cancer and, recently, with cleft lip/palate. To investigate whether genes associated with cleft lip/palate were also associated with oral cancer, we genotyped 188 individuals with OSCC and 225 control individuals for markers in AXIN2, AXIN1, GSK3B, WNT3A, WNT5A, WNT8A, WNT11, WNT3, and WNT9B. Statistical analysis was performed with the PLINK 1.06 software to test for differences in the allele frequencies of each polymorphism between cases and controls. We found associations of SNPs in GSK3B (p = 0.0008) and WNT11 (p = 0.03) with OSCC. We also found overtransmission of GSK3B haplotypes in OSCC cases. Expression analyses showed up-regulation of WNT3A, GSK3B, and AXIN1 and down-regulation of WNT11 in OSCC in comparison with control tissues (p < 0.001). Additional studies should focus on the identification of potentially functional variants in these genes as contributors to human clefting and oral cancer.
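For illustration, the per-marker case-control comparison described here is, in its basic form, a 1-df chi-square test on a 2x2 table of allele counts, as in PLINK's --assoc analysis. A minimal Python sketch with hypothetical genotypes (not the study's data):

    from scipy.stats import chi2_contingency

    def allelic_test(case_genotypes, control_genotypes):
        # Allelic association test on a 2x2 table of allele counts
        # (cases vs controls x minor vs major allele), 1-df chi-square.
        # Genotypes are coded as minor-allele counts: 0, 1 or 2.
        def allele_counts(genos):
            minor = sum(genos)
            major = 2 * len(genos) - minor
            return [minor, major]
        table = [allele_counts(case_genotypes),
                 allele_counts(control_genotypes)]
        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        return p

    # Hypothetical single-SNP genotypes
    cases = [2, 1, 1, 0, 1, 2, 1, 1]
    controls = [0, 1, 0, 0, 1, 0, 1, 0]
    print(f"p = {allelic_test(cases, controls):.4f}")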
Abstract:
Software and information services (SIS) have become a field of increasing opportunities for international trade due to the worldwide diffusion of a combination of technological and organizational innovations. In several regions, the software industry is organized in clusters, usually referred to as "knowledge cities" because of the growing importance of knowledge-intensive services in their economy. This paper has two primary objectives. First, it raises three major questions related to the attractiveness of different cities in Argentina and Brazil for hosting software companies and to their impact on local development. Second, a new taxonomy is proposed for grouping clusters according to their dominant business segment, ownership pattern and scope of operations. The purpose of this taxonomy is to encourage further studies and provide an exploratory analytical tool for analyzing software clusters.
Finite element studies of the mechanical behaviour of the diaphragm in normal and pathological cases
Abstract:
The diaphragm is a muscular membrane separating the abdominal and thoracic cavities, and its motion is directly linked to respiration. In this study, using data from a 59-year-old female cadaver obtained from the Visible Human Project, the diaphragm is reconstructed and, from the corresponding solid object, a shell finite element mesh is generated and used in several analyses performed with the ABAQUS 6.7 software. These analyses consider the direction of the muscle fibres and the incompressibility of the tissue. The constitutive model for the isotropic strain energy as well as the passive and active strain energy stored in the fibres is adapted from Humphrey's model for cardiac muscles. Furthermore, numerical results for the diaphragmatic floor under pressure and active contraction in normal and pathological cases are presented.
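For reference, a commonly cited form of Humphrey's passive strain energy for cardiac tissue (the adaptation used in this study may differ in its constants and in the active-contraction terms) combines the first invariant I_1 of the right Cauchy-Green deformation tensor with the fibre stretch alpha:

    W(I_1, \alpha) = c_1(\alpha - 1)^2 + c_2(\alpha - 1)^3 + c_3(I_1 - 3)
                   + c_4(I_1 - 3)(\alpha - 1) + c_5(I_1 - 3)^2,
    \qquad \alpha = \sqrt{I_4},

where I_4 is the invariant associated with the muscle fibre direction and the c_i are material constants fitted to experimental data. Tissue incompressibility is imposed as a separate kinematic constraint.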