Abstract:
French Feminism has little to do with feminism in France. While in the U.S. this now canonical body of work designates almost exclusively the work of three theorists—Hélène Cixous, Luce Irigaray, and Julia Kristeva—in France, these same thinkers are actually associated with the rejection of feminism. If some scholars have on this basis passionately denounced French Feminism as an American invention, there exists to date no comprehensive analysis of that invention or of its effects. Why did theorists who were at best marginal to feminist thought and political practice in France galvanize feminist scholars working in the United States? Why does French Feminism provoke such an intense affective response in France to this day? Drawing on the fields of feminist and queer studies, literary studies, and history, “Inventing ‘French Feminism:’ A Critical History” offers a transnational account of the emergence and impact of one of U.S. academic feminism’s most influential bodies of work. The first half of the dissertation argues that, although French Feminism has now been dismissed for being biologically essentialist and falsely universal, feminists working in the U.S. academy of the 1980s, particularly feminist literary critics and postcolonial feminist critics, deployed the work of Cixous, Irigaray, and Kristeva to displace what they perceived as U.S. feminist literary criticism’s essentialist reliance on the biological sex of the author and to challenge U.S. academic feminism’s inattention to racial differences between women. French Feminism thus found traction among feminist scholars to the extent that it was perceived as addressing some of U.S. feminism’s most pressing political issues.
The second half of the dissertation traces French feminist scholars’ vehement rejection of French Feminism to an affectively charged split in the French women’s liberation movement of the 1970s and shows that this split has resulted in an entrenched opposition between sexual difference and materialist feminism, an opposition that continues to structure French feminist debates to this day. “Inventing ‘French Feminism:’ A Critical History” ends by arguing that insofar as the U.S. invention of French Feminism has contributed to the emergence of U.S. queer theory, it has also impeded its uptake in France. Taken as a whole, this dissertation thus implicitly argues that the transnational circulation of ideas is simultaneously generative and disabling.
Abstract:
With the popularization of GPS-enabled devices such as mobile phones, location data are becoming available at an unprecedented scale. The locations may be collected from many different sources such as vehicles moving around a city, user check-ins in social networks, and geo-tagged micro-blogging photos or messages. Besides the longitude and latitude, each location record may also have a timestamp and additional information such as the name of the location. Time-ordered sequences of these locations form trajectories, which together contain useful high-level information about people's movement patterns.
The first part of this thesis focuses on a few geometric problems motivated by the matching and clustering of trajectories. We first give a new algorithm for computing a matching between a pair of curves under existing models such as dynamic time warping (DTW). The algorithm is more efficient than standard dynamic programming algorithms both theoretically and practically. We then propose a new matching model for trajectories that avoids the drawbacks of existing models. For trajectory clustering, we present an algorithm that computes clusters of subtrajectories, which correspond to common movement patterns. We also consider trajectories of check-ins, and propose a statistical generative model, which identifies check-in clusters as well as the transition patterns between the clusters.
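The baseline that the thesis improves upon is the standard quadratic-time dynamic program for dynamic time warping. A minimal sketch for 1-D sequences with absolute-difference cost (this is the textbook DP only; the thesis's faster algorithm and its new matching model are not reproduced here):

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    computed with the textbook O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    inf = float("inf")
    # dist[i][j] = cheapest cost of matching a[:i] with b[:j]
    dist = [[inf] * (m + 1) for _ in range(n + 1)]
    dist[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dist[i][j] = cost + min(dist[i - 1][j],      # advance in a only
                                    dist[i][j - 1],      # advance in b only
                                    dist[i - 1][j - 1])  # advance in both
    return dist[n][m]

print(dtw([1, 2, 3], [2, 3, 4]))  # 2.0
```

Unlike the Fréchet distance, DTW sums the matched costs rather than taking their maximum, which is one source of the drawbacks that motivate the thesis's alternative matching model.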
The second part of the thesis considers the problem of covering shortest paths in a road network, motivated by an EV charging-station placement problem. More specifically, a subset of vertices in the road network is selected for charging stations so that every shortest path contains enough charging stations and can be traveled by an EV without draining the battery. We first introduce a general technique for the geometric set cover problem. This technique leads to near-linear-time approximation algorithms, which are the state of the art for this problem in either running time or approximation ratio. We then use this technique to develop a near-linear-time algorithm for this shortest-path cover problem.
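For intuition about the underlying set cover formulation, here is the classic greedy heuristic (not the thesis's near-linear-time technique): each candidate station site covers some set of demand vertices, and we repeatedly pick the site covering the most still-uncovered vertices. The station/vertex instance below is invented for illustration.

```python
def greedy_set_cover(universe, candidate_sets):
    """Classic greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements. Achieves an O(log n) approximation ratio."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(candidate_sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe cannot be covered by the given sets")
        cover.append(best)
        uncovered -= best
    return cover

# Toy instance: 5 demand vertices, 4 candidate station sites.
stations = greedy_set_cover(
    universe={1, 2, 3, 4, 5},
    candidate_sets=[{1, 2, 3}, {2, 4}, {4, 5}, {3, 5}],
)
print(len(stations))  # 2 sites suffice: {1, 2, 3} then {4, 5}
```

The thesis's contribution is precisely to beat this generic approach for geometric instances, in running time or approximation ratio.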
Abstract:
This thesis engages black critical thought on the human and its contemporary iterations in posthumanism and transhumanism. It articulates five categories of analysis: displace, interrupt, disrupt, expand, and wither. Each is meant to allude to the generative potential in different iterations of black thought that engages the human. Working through Sylvia Wynter’s theories of the rise of Man-as-human in particular, the project highlights how black thought on the human displaces the uncritical whiteness of posthumanist thought. It argues that Afrofuturism has the potential to interrupt the linear progression from human to posthuman and that Octavia Butler’s Fledgling proffers a narrative of race as a technology that disrupts the presumed post-raciality of posthumanism and transhumanism. It then contends that Katherine McKittrick’s rearticulation of the Promise of Science can be extended to incorporate the promise of science fiction. In so doing, it avers that a more curated conversation between McKittrick and Wynter, one already ongoing, and Octavia Butler, through Mind of My Mind from her Patternist series, expands our notions of the human as a category even at the risk of seeing it wither as a politic or praxis. It ends on a speculative note meant to imagine the possibilities within the promise of science fiction.
Abstract:
This proposal is a non-quantitative study based on a corpus of real data which offers a principled account of the translation strategies employed in the translation of English film titles into Spanish in terms of cognitive modeling. More specifically, we draw on Ruiz de Mendoza and Galera’s (2014) work on what they term content (or low-level) cognitive operations, based on either ‘stands for’ or ‘identity’ relations, in order to investigate possible motivating factors for translations which abide by oblique procedures, i.e. for non-literal renderings of source titles. The present proposal is made in consonance with recent findings within the framework of Cognitive Linguistics (Samaniego 2007), which evidence that this linguistic approach can fruitfully address some relevant issues in Translation Studies, the most outstanding for our purposes being the exploration of the cognitive operations which account for the use of translation strategies (Rojo and Ibarretxe-Antuñano 2013: 10), mainly expansion and reduction operations, parameterization, echoing, mitigation and comparison by contrast. This fits in nicely with a descriptive approach to translation and particularly with skopos theory, whose main aim consists in achieving functionally adequate renderings of source texts.
Abstract:
This paper studies how se structures are represented in 20 verb entries across nine dictionaries of the Spanish language. These structures are numerous and problematic for native and non-native speakers alike. The verbs analysed are of mid-to-high frequency and, in most cases, highly polysemous, which makes it possible to observe interconnections between the different se structures and the different meanings of each verb. Data from the lexicographic analysis are cross-checked against a corpus analysis of the same units. The analysis reveals wide variation, both across and within dictionaries, in which data each dictionary offers and in how those data are presented. The reasons range from the theoretical approach of each project to its practical execution. This leads to the conclusion that the dictionary model currently in use must be developed further in order to present lexico-grammatical phenomena such as se verbs in an accurate, clear and exhaustive way.
Abstract:
This editorial sets out the thematic coordinates of the present special issue, suggesting two reading routes and possible ways of moving between them. One of these routes shows the asymmetries, the forms of exclusion and, at the same time, the forms of appropriation of technologies from Latin America and Spain. The second opts for provocative works that suggest new metaphors and conceptual approaches for rethinking the contemporary moment.
Abstract:
After years of deliberation, the European Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU Digital Single Market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation (GDPR) and the Network and Information Security (NIS) Directive. A further initiative was launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, based primarily on the proposal for a new EU-wide General Data Protection Regulation. We analyse the rationale of the legislation from an information systems perspective to understand the role user data plays in creating platforms that we identify as “processing silos”. Generative digital infrastructure theories are used to explain the innovation mechanisms thought to govern digitalization and the successful business models that digitalization affects. We foresee continued judicial data protection challenges under the proposed Regulation as adoption of the Internet of Things continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock the user base into their service or platform. The second posits limiting the current unwanted tracking ability of syndicates by separating authentication and data-store services from any processing entity. The third proposes changing how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
This chapter develops a more comprehensive theory of positive identity construction by explicating proposed mechanisms for constructing and sustaining positive individual identities. The chapter offers a broad, illustrative sampling of mechanisms for positive identity construction that are grounded in various theoretical traditions within identity scholarship. Four classical theories of identity—social identity theory, identity theory, narrative-as-identity, and identity work—offer perspectives on the impetus and mechanisms for positive identity construction. The Dutton et al. (2010) typology of positive identity is revisited to highlight the sources of positivity that each classical theory shows how to enhance. As a next step in research, positive organizational scholarship (POS) scholars and identity scholars are encouraged to examine the conditions under which increasing the positivity of an identity is associated with generative social outcomes (e.g., engaging in prosocial practices, being invested in others’ positive identity development, and deepening mutual understanding of the complex, multifaceted nature of identity).
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Humanities Computing gave rise to the Digital Humanities, which brought considerations of the wider scope of the digital turn to humanities research. Increasingly, the area is understood to include the field of design, as exemplified by definitions that describe the Digital Humanities as a “generative enterprise”. We suggest that design contributes more than the making of digital artefacts: design practiced with the aim of generating new knowledge constitutes a research method. Design research contributes to the Digital Humanities expertise in addressing complex problems, along with methods for making explicit the knowledge generated during a design process.
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, that is, finding structures that do not really exist in the data, while too little complexity leads to underfitting, meaning the model's expressiveness is insufficient to capture all the structure present in the data. For some probabilistic models, model complexity translates into the introduction of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining both the number of latent variables to use and their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity scales with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent-variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the model's latent components, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections among observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. Evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
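The Pitman-Yor process underlying the mixture model above can be illustrated by its sequential "seating" scheme, in which the number of clusters grows with the data. This is a generic sketch of the Pitman-Yor predictive rule only, not the thesis's model or inference algorithm; the parameter values below are arbitrary.

```python
import random

def pitman_yor_partition(n, discount, concentration, seed=0):
    """Sample a partition of n items via the Pitman-Yor 'Chinese
    restaurant' seating scheme; returns the per-cluster counts."""
    rng = random.Random(seed)
    counts = []  # counts[k] = number of items in cluster k
    for i in range(n):
        # A new cluster opens with probability (alpha + d*K) / (alpha + i),
        # which grows with the number K of existing clusters -- the key
        # difference from the Dirichlet process special case d = 0.
        p_new = (concentration + discount * len(counts)) / (concentration + i)
        if rng.random() < p_new:
            counts.append(1)
        else:
            # Otherwise join an existing cluster with probability
            # proportional to (its size - discount).
            r = rng.random() * (i - discount * len(counts))
            for k, c in enumerate(counts):
                r -= c - discount
                if r < 0:
                    counts[k] += 1
                    break
    return counts

counts = pitman_yor_partition(200, discount=0.5, concentration=1.0)
print(sum(counts))  # 200: every item is seated exactly once
```

Because the expected number of clusters keeps growing with n, the mixture is "infinite": its effective complexity adjusts to the amount of data, which is exactly the appeal of Bayesian nonparametrics described above.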
Abstract:
Computer games are significant since they embody our youngsters’ engagement with contemporary culture, including both play and education. These games rely heavily on visuals, systems of sign and expression based on concepts and principles of Art and Architecture. We are researching a new genre of computer games, ‘Educational Immersive Environments’ (EIEs) to provide educational materials suitable for the school classroom. Close collaboration with subject teachers is necessary, but we feel a specific need to engage with the practicing artist, the art theoretician and historian. Our EIEs are loaded with multimedia (but especially visual) signs which act to direct the learner and provide the ‘game-play’ experience forming semiotic systems. We suggest the hypothesis that computer games are a space of deconstruction and reconstruction (DeRe): When players enter the game their physical world and their culture is torn apart; they move in a semiotic system which serves to reconstruct an alternate reality where disbelief is suspended. The semiotic system draws heavily on visuals which direct the players’ interactions and produce motivating gameplay. These can establish a reconstructed culture and emerging game narrative. We have recently tested our hypothesis and have used this in developing design principles for computer game designers. Yet there are outstanding issues concerning the nature of the visuals used in computer games, and so questions for contemporary artists. Currently, the computer game industry employs artists in a ‘classical’ role in production of concept sketches, storyboards and 3D content. But this is based on a specification from the client which restricts the artist in intellectual freedom. Our DeRe hypothesis places the artist at the generative centre, to inform the game designer how art may inform our DeRe semiotic spaces. 
This must of course begin with the artists’ understanding of DeRe at a time when our ‘identities are becoming increasingly fractured, networked, virtualized and distributed’. We hope to persuade artists to engage with the medium of computer game technology to explore these issues. In particular, we pose several questions to the artist: (i) How can particular ‘periods’ in art history be used to inform the design of computer games? (ii) How can specific artistic elements or devices be used to design ‘signs’ that guide the player through the game? (iii) How can visual material be integrated with other semiotic strata such as text and audio?
Abstract:
Teacher education researchers generally appear ill-equipped to maximise the range of available dissemination strategies, and remain largely disconnected from the policy implications of their research. How teacher education researchers address this issue and communicate their research to a wider public audience is more important than ever within a global political discourse in which teacher education researchers appear frustrated that their findings should, but do not, make a difference, and in which the research they produce is often marginalised. This paper seeks to disrupt the widening gap between teacher education researchers and policy-makers by looking at the issue from ‘both sides’. The paper examines policy-research tensions and the critique of teacher education researchers, and then outlines some of the key findings from an Australian policy-maker study. Recommendations are offered as a way for teacher education researchers to begin to mobilise a new set of generative strategies.
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly, a conclusion based on a study of the mutual information. Nevertheless, some sources may be well separated, mainly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
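The dependence induced by the sum-to-one constraint can be seen directly by sampling abundance fractions. The sketch below uses a symmetric Dirichlet as an illustrative abundance model (our assumption for the demonstration, not the paper's simulation setup): because each pixel's fractions sum to one, any two fractions are negatively correlated, violating ICA's independence assumption.

```python
import numpy as np

# Sample per-pixel abundance fractions for 3 endmembers.
rng = np.random.default_rng(0)
abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=50_000)

# Every pixel satisfies the sum-to-one physical constraint.
assert np.allclose(abundances.sum(axis=1), 1.0)

# The constraint forces negative correlation between any pair of
# fractions: for a symmetric Dirichlet(1, 1, 1) the pairwise
# correlation is -0.5 in expectation, so the sources are dependent.
corr = np.corrcoef(abundances.T)
print(corr[0, 1])  # negative, close to -0.5
```

This is the mechanism behind the paper's conclusion: the mixing weights that ICA is asked to recover are constrained to a simplex, so the "independent sources" assumption can hold at best approximately.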