887 results for Redundant Manipulator
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers tune ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
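To make the anti-pattern idea concrete, below is a minimal sketch of the best-known ORM performance anti-pattern, the N+1 query, and its resolution through eager loading. The framework and schema here (Python's SQLAlchemy, a User/Order model) are illustrative assumptions, not taken from the thesis, which detects and ranks such patterns in the applications' own ORM code.

```python
# Minimal sketch of the classic "N+1 query" ORM anti-pattern and its fix,
# using SQLAlchemy. The User/Order models are hypothetical illustrations.
from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.orm import (declarative_base, relationship,
                            selectinload, sessionmaker)

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    orders = relationship("Order", back_populates="user")

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    total = Column(Integer)
    user = relationship("User", back_populates="orders")

engine = create_engine("sqlite://")  # in-memory database for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    User(name="ada", orders=[Order(total=3), Order(total=4)]),
    User(name="bob", orders=[Order(total=5)]),
])
session.commit()

# Anti-pattern: one query for the users, then one extra query per user
# when the lazy-loaded .orders relationship is touched (N+1 round trips).
for user in session.query(User).all():
    print(user.name, sum(o.total for o in user.orders))

# Resolution: eagerly fetch the relationship, so the same data needs only
# two queries regardless of the number of users.
for user in session.query(User).options(selectinload(User.orders)).all():
    print(user.name, sum(o.total for o in user.orders))
```

The anti-pattern is invisible in the source code at first glance, which is why static detection plus impact-based ranking, as proposed in the thesis, is valuable.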
Abstract:
Production processes and work organization in the cultural industries have received little attention. For this reason, this study focuses on the production phases and the division of labour between the technical and artistic branches in Argentine soap operas. There are six branches: production, direction, photography, art, sound and editing. We describe the branches, the workers involved, and their functions and activities. The research adopts a communicational perspective, that of the Political Economy of Communication, and draws on contributions from the Sociology of Labour. From this combination, we attempt to provide elements of analysis for understanding the functioning and organisation of daily television series. Likewise, we examine creative work, redundant and random types of work, the division of labour and the economies of time. The methodological approach is qualitative: the examination is based on interviews with key actors in the sector and on a documentary and bibliographic survey, so as to systematize the data for the research.
Abstract:
The article examines developments in the marketisation and privatisation of the English National Health Service, primarily since 1997. It explores the use of competition and contracting out in ancillary services and the levering of private finance into public services for capital developments through the Private Finance Initiative. A substantial part of the article examines the repeated restructuring of the health service as a market in clinical services, initially as an internal market but subsequently as a market increasingly opened up to private sector involvement. Some of the implications of market processes for NHS staff and for increased privatisation are discussed. The article examines one episode of popular resistance to these developments, namely the movement of opposition to the 2011 health and social care legislative proposals. It concludes with a discussion of the implications of these system reforms for the founding principles of the NHS and the sustainability of the service.
Abstract:
This paper presents the application of custom classification techniques and posterior probability modeling (PPM) to archaeological field survey using Worldview-2 multispectral imagery. The research focuses on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce the dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. The results suggest that this approach is best suited to relatively homogeneous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, demonstrating the utility of such approaches to archaeological field survey.
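As a hedged illustration of the pipeline described above (features stacked from imagery bands and terrain derivatives, PCA to remove redundancy, LDA for posterior probabilities), here is a minimal sketch assuming scikit-learn; the synthetic arrays stand in for the real Worldview-2 data and site samples.

```python
# Sketch of the posterior-probability-modeling pipeline: stack Worldview-2
# bands, band difference ratios and topographic derivatives as features,
# reduce redundancy with PCA, then fit an LDA classifier whose
# predict_proba gives posterior probabilities of "workshop".
# Synthetic data stands in for the real imagery and survey samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_sites = rng.normal(1.0, 1.0, size=(40, 12))     # features at known workshops
X_nonsites = rng.normal(0.0, 1.0, size=(60, 12))  # features at known non-sites
X = np.vstack([X_sites, X_nonsites])
y = np.array([1] * 40 + [0] * 60)

model = make_pipeline(
    PCA(n_components=5),           # drop redundant, correlated components
    LinearDiscriminantAnalysis(),  # linear discriminant classifier
)
model.fit(X, y)

# Posterior probability surface: apply to every pixel's feature vector.
X_new = rng.normal(0.5, 1.0, size=(5, 12))
posterior = model.predict_proba(X_new)[:, 1]  # P(workshop | features)
print(posterior)
```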
Abstract:
Over recent decades, work on infrared sensor applications has advanced considerably worldwide. A difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and so on. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the nonsubsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of the single-infrared-image enhancement model: it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, since no single image can contain all the relevant or available information, owing to the restrictions inherent in any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single-infrared-image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. Infrared-visible image fusion techniques are built on an accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, leading to very accurately registered images and increased benefits for the fusion processing. For infrared-visible image fusion, a series of advanced and efficient approaches are proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, leading to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, with accurate reconstruction of the fused coefficients, is proposed; it obtains much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results obtained more quickly and efficiently.
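As an illustration of the registration step described above, here is a minimal SURF-plus-RANSAC sketch assuming OpenCV with the contrib (xfeatures2d) modules, which are required for SURF; the file names and thresholds are placeholders, not the thesis's actual pipeline.

```python
# Sketch of SURF + RANSAC image registration, assuming an OpenCV build with
# the contrib (xfeatures2d) modules; 'ir.png' and 'visible.png' are placeholders.
import cv2
import numpy as np

ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(ir, None)
kp2, des2 = surf.detectAndCompute(vis, None)

# Match descriptors and keep unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# RANSAC rejects mismatched pairs while estimating the homography.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)

# Warp the infrared image into the visible image's frame before fusion.
registered = cv2.warpPerspective(ir, H, (vis.shape[1], vis.shape[0]))
```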
Abstract:
This thesis presents two types of methods for reorienting a free-falling serial robot using its internal motions. These motions are prescribed by trajectory-planning algorithms based on the dynamic model of the robot. The first method attempts to reorient the robot by applying a local optimization technique operating on a potential function that describes the orientation of the system, while the second applies sinusoidal functions to the joints to reorient the robot. To test the performance of the methods in simulation, the robot is asked to reorient itself between identical initial and final configurations in which all links are aligned, but with the robot having completed a 180-degree rotation about itself. To compare the simulation results with reality, a prototype of a planar floating serial robot with three links and two revolute joints was built. The experiments show that the prototype can achieve the prescribed reorientations when few external disturbances are present, even though the orientation control is performed in open loop.
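The second method can be illustrated with a toy model: for a planar free-floating chain with zero angular momentum, the base orientation rate is a configuration-dependent combination of the joint rates, and phase-shifted sinusoidal joint motions trace a closed loop in joint space that yields a net base rotation per cycle. The coupling coefficients in the sketch below are invented for illustration and are not the thesis's dynamic model.

```python
# Toy illustration of reorienting a free-floating planar robot with
# sinusoidal joint motions. With zero angular momentum, the base rate is
# theta_dot = -(a1(q)*q1_dot + a2(q)*q2_dot) for configuration-dependent
# couplings a_i(q); phase-shifted sinusoids then produce a net rotation
# per cycle. The couplings below are invented, NOT the thesis's model.
import numpy as np

def coupling(q1, q2):
    # Hypothetical momentum couplings; each depends on the *other* joint,
    # which is what makes a closed joint-space loop produce a net
    # (geometric-phase) base rotation.
    a1 = 0.30 + 0.10 * np.sin(q2)
    a2 = 0.20 + 0.15 * np.sin(q1)
    return a1, a2

A, omega, phase = 0.8, 2.0 * np.pi, np.pi / 2  # amplitude, rate, offset
dt, T = 1e-4, 5.0                              # step size, 5 cycles
theta = 0.0                                    # base orientation

for t in np.arange(0.0, T, dt):
    q1 = A * np.sin(omega * t)
    q2 = A * np.sin(omega * t + phase)
    q1d = A * omega * np.cos(omega * t)
    q2d = A * omega * np.cos(omega * t + phase)
    a1, a2 = coupling(q1, q2)
    theta += -(a1 * q1d + a2 * q2d) * dt  # zero-angular-momentum constraint

print(f"net base rotation after {T:.0f} cycles: {np.degrees(theta):.2f} deg")
```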
Show de Mamulengos de Heraldo Lins: constructions and transformations of a performance in popular culture
Abstract:
The puppet theater is one of the many expressions of popular culture, marked by ongoing constructions and transformations in its symbolic representations as well as in its characters and performances. In the city of Natal/RN there is a puppet manipulator named Heraldo Lins, an artist who has been performing with his puppets since 1992. Lins has his own way of producing his performances and seeks to adjust his puppets to social and profitable contexts. Lins's performances are tailor-made in accordance with his customers' requests, as he composes the passages and lines of his puppets according to his audience. This research aimed to study how the Heraldo Lins Mamulengos Show is built, and especially how it is transformed. We note that Lins chooses to set aside the symbolic values of tradition in the puppet theater as he adapts to modern patterns, placing himself between the traditional puppet theater and the cultural industry. The fieldwork was carried out through a methodology centered on participant observation and audiovisual recording.
Abstract:
Bin picking is a process of great interest to industry, since it enables greater automation, increased production capacity and reduced costs. It has evolved considerably over the years, and this evolution has led to the adoption of 3D perception systems. The main objective of this work is to develop a bin-picking system using 3D perception only. The system must be able to determine the position and orientation of objects of different shapes and sizes, randomly placed on a work surface. The objects used in the experimental tests are spheres, cylinders and prisms, since they cover the geometric shapes found in many products subjected to bin picking. After identifying and selecting the object to pick, the manipulator must autonomously position itself to approach and grasp it. Data acquisition is performed with a Kinect camera. Of the received data, only the depth information is used, so this work focuses on the analysis and processing of point clouds. The developed system meets the stated objectives: it can locate and pick objects in various positions and orientations, and its processing speed is compatible with the application at hand.
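A minimal sketch of the point-cloud stage of such a pipeline is shown below, assuming the Open3D library: RANSAC plane removal for the work surface, DBSCAN clustering to isolate object candidates, and an oriented bounding box as a rough pick pose. The file name and thresholds are placeholders; fitting the specific sphere/cylinder/prism models would follow and is omitted here.

```python
# Sketch of the 3D-perception stage of a bin-picking pipeline, assuming
# Open3D: remove the work surface with RANSAC plane fitting, cluster the
# remaining points into object candidates, and derive a rough pick pose.
# 'scene.pcd' and the thresholds are placeholders.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")  # depth-only Kinect capture

# 1) Fit and remove the dominant plane (the work surface).
plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# 2) Cluster what remains into individual object candidates.
labels = np.asarray(objects.cluster_dbscan(eps=0.02, min_points=50))

# 3) For each cluster, an oriented bounding box gives a rough
#    position/orientation estimate for the gripper approach.
for k in range(labels.max() + 1):
    cluster = objects.select_by_index(np.where(labels == k)[0])
    obb = cluster.get_oriented_bounding_box()
    print("object", k, "center:", obb.center, "extent:", obb.extent)
```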
Abstract:
This thesis proposes the development of deployable mechanisms for space applications, together with actuation schemes enabling their deployment and the control of the in-orbit orientation of the spacecraft supporting them. Since the objective is to deploy large surfaces for solar panels, telecommunication dishes or space-station sections, a simple planar triangular geometry is adopted so that it can be assembled into different types of surfaces. The rigid-link configurations proposed in the literature for deploying symmetric solids are optimized and adapted to the expansion of an open geometry such as a dish. The optimization achieves a planar expansion ratio above 5 for a single unit, but exhibits instabilities when a prototype is actuated. The principle of motion transmission from one stage of the mechanism to the next is therefore revised to reduce the sensitivity of the mechanism's performance to the geometry of its internal links. The new design, based on timing belts, achieves planar expansion ratios above 20 in some configurations. The effect of the main geometric design factors is studied to obtain a simple optimization relation for the planar mechanism, so that it can be adapted to different application contexts. The identical geometry of the triangular faces of each deployed surface also allows these faces to be stacked to increase the compactness of the mechanism, and a specialized joint is designed to allow the faces to be unfolded and then deployed in succession. Deploying large surfaces inevitably has a strong influence on the orientation, and potentially the trajectory, of the spacecraft; several novel orientation-control strategies are therefore proposed. To take advantage of a large surface, actuation by point masses on the periphery of the mechanism is presented; its dynamic equations are derived and simulated to assess its performance. The simulations demonstrate the potential of this reorientation strategy, which leaves the central volume of the base satellite unobstructed, but its performance remains below that of a reaction wheel of equivalent mass. A redundant reaction-wheel actuation strategy is then presented for mechanisms of different levels of complexity in which all joints are passive, that is, unactuated. A planar four-bar mechanism is simulated in closed loop with a simple controller to validate the control of a common scissor mechanism. These results are extended to the derivation of the dynamic equations of a spherical four-bar mechanism, which demonstrates the potential of reaction-wheel actuation for controlling both the configuration and the spatial orientation of such a mechanism. A two-body prototype, each body carrying a reaction wheel and connected by a single passive joint, is built and controlled using camera-based tracking of the modules. The test bench is described in detail, along with the design challenges posed by eliminating external forces. The results show that the system is controllable in both orientation and configuration. The thesis ends with a case study applying the main systems developed in this research.
The collection of small and medium-sized orbital debris is presented as a problem that has not yet found an adequate solution and that poses a real danger to future space missions. The belt-driven triangular deployable unit is replicated to form a dish several hundred metres in diameter, proposed as a solution for capturing and slowing these categories of debris. The parameters of a mission to this end are detailed, as well as the reorientation capability that the reaction wheels provide in addition to controlling the deployment. Nearly 2000 debris objects could be removed in less than a year in low Earth orbit at an altitude of 819 km.
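The reaction-wheel actuation principle can be sketched with a toy closed-loop simulation: a single free-floating body is reoriented by torquing an internal wheel, whose reaction torque drives the body, under a PD attitude controller. The inertias and gains below are invented; the thesis's multi-body dynamics and camera-based feedback are far richer.

```python
# Toy closed-loop sketch of reaction-wheel actuation: a free-floating rigid
# body is reoriented by torquing an internal wheel (the reaction torque acts
# on the body) under a PD attitude controller. All numbers are invented.
import numpy as np

I_body, I_wheel = 2.0, 0.05      # kg*m^2 (placeholder values)
kp, kd = 4.0, 3.0                # PD gains
theta, theta_dot = 0.0, 0.0      # body attitude and rate
omega_wheel = 0.0                # wheel speed
theta_ref = np.radians(180.0)    # commanded reorientation
dt = 1e-3

for step in range(20000):        # 20 s of simulated time
    # Motor torque applied to the wheel; the body feels the opposite
    # reaction, hence the leading minus sign drives theta toward theta_ref.
    tau = -(kp * (theta_ref - theta) - kd * theta_dot)
    theta_ddot = -tau / I_body                 # reaction torque on the body
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt
    omega_wheel += (tau / I_wheel) * dt        # momentum stored in the wheel

print(f"final attitude: {np.degrees(theta):.1f} deg, "
      f"wheel speed: {omega_wheel:.1f} rad/s")
```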
Abstract:
Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies among the test efforts of different component-based systems. However, in practice, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve the test effectiveness for each component as well as for each component-based software system by using information obtained when testing across multiple components. To achieve this goal, I have developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying the techniques. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exists. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes to implement different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation has shown the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
Abstract:
Policymakers make many demands of our schools to produce academic success. At the same time, community organizations, government agencies, faith-based institutions, and other groups are often providing supports to students and their families, especially those from high-poverty backgrounds, that are meant to improve education but are often insufficient, uncoordinated, or redundant. In many cases, these institutions lack access to schools and school leaders. What's missing from the dominant education reform discourse is a coordinated, education-focused approach that mobilizes community assets to effectively improve academic and developmental outcomes for students. This study explores how education-focused comprehensive community change initiatives (CCIs) that utilize a partnership approach are organized and sustained. In this study, I examine three research questions: 1. Why and how do school system-level community change initiative (CCI) partnerships form? 2. What are the organizational, financial, and political structures that support sustainable CCIs? What, in particular, are their connections to the school systems they seek to impact? 3. What are the leadership functions and structures found within CCIs? How are leadership functions distributed across schools and agencies within communities? To answer these questions, I used a cross-case study approach that employed a secondary analysis of data collected as part of a larger research study sponsored by a national organization. The original study design included site visits and extended interviews with educators, community leaders and practitioners about community school initiatives, one type of CCI. This study demonstrates that the characteristics of sustained education-focused CCIs include leaders who are critical to starting the CCIs and are willing to collaborate across institutions, a focus on community problems, building on previous efforts, strategies to improve service delivery, a focus on education and schools in particular, organizational arrangements that create shared leadership and ownership of the CCI, an intermediary to support the initial vision and collaborative leadership groups, diversified funding approaches, and political support. These findings add to the literature on the growing number of education-focused CCIs. The study's primary recommendation, that institutions need to work across boundaries in order to sustain CCIs organizationally, financially, and politically, can help policymakers as they develop new collaborative approaches to achieving educational goals.
Abstract:
Over the past few years, the number of wireless network users has been increasing. Until now, Radio-Frequency (RF) has been the dominant technology; however, the electromagnetic spectrum in this region is becoming saturated, calling for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communication (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it is easy to modulate and offers high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability over noisy channels, where the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed; its function is to code the data to be transmitted so as to increase system performance. It commonly uses error-correcting codes (ECC), which append redundant information to the original data; at the receiver side, the redundant information is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. The models were then implemented on an FPGA and simulations were performed; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting both random and burst errors, owing to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
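Two of the building blocks named above can be sketched compactly: a block interleaver, which spreads a burst of channel errors across several RS codewords, and the CRC-32 check used to flag uncorrectable frames. The sketch below assumes plain Python (zlib for CRC-32); the RS(255,213) encoding itself, 213 data bytes plus 42 parity bytes per codeword, is treated as supplied by an external codec and is not shown.

```python
# Sketch of two channel-encoder building blocks: a block interleaver, which
# spreads a burst of channel errors across several RS codewords, and a
# CRC-32 integrity check for flagging uncorrectable frames. The RS(255,213)
# encode step (42 parity bytes per codeword) is assumed external.
import zlib

def interleave(data: bytes, rows: int, cols: int) -> bytes:
    """Write row by row, read column by column."""
    assert len(data) == rows * cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    """Inverse permutation: interleave with swapped dimensions."""
    return interleave(data, cols, rows)

# Frame = payload + CRC-32, so the receiver can detect residual errors
# that survive RS decoding.
payload = bytes(range(213))                      # one RS(255,213) data block
frame = payload + zlib.crc32(payload).to_bytes(4, "big")

# Interleave four frames together (one frame per matrix row); a burst
# hitting consecutive channel bytes now lands in different rows, i.e. in
# different codewords, each of which can correct its share of the burst.
block = frame * 4
tx = interleave(block, rows=4, cols=217)

# Receiver side: undo the permutation, then verify each frame's CRC.
rx = deinterleave(tx, rows=4, cols=217)
ok = zlib.crc32(rx[:213]) == int.from_bytes(rx[213:217], "big")
print("first frame intact:", ok)
```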
Abstract:
bbd18 is a differentially expressed Borrelia burgdorferi gene that is transcribed at almost undetectable levels in spirochetes grown in vitro but dramatically upregulated during tick infection. The gene also displays low yet detectable expression at various times in tissues of murine hosts. As the gene product bears no homology to known proteins, its biological significance remains enigmatic. To understand the gene's function, we created isogenic bbd18-deletion mutants as well as genetically complemented isolates from an infectious wild-type B. burgdorferi strain. Compared to parental isolates, bbd18 mutants, but not complemented spirochetes, displayed slower in vitro growth. The bbd18 mutants also showed a significantly reduced ability to persist, or remained undetectable, in both immunocompetent and SCID mice, yet were able to survive in ticks. This suggests that BBD18 function is essential in mammalian hosts but redundant in the arthropod vector. Notably, although bbd18 expression and the in vitro growth defects are restored in the complemented isolates, their phenotype is similar to that of the mutants: they are unable to persist in mice but able to survive in ticks. Despite the low expression of bbd18 in cultured wild-type B. burgdorferi, its deletion downregulated several genes. Interestingly, the expression of some, including ospD and bbi39, could be complemented, while that of others could not be restored via bbd18 re-expression. Correspondingly, bbd18 mutants displayed altered production of several proteins; as with the RNA levels, some were restored in the bbd18 complement and others were not. To understand how bbd18 deletion results in apparently permanent and non-complementable phenotypic defects, we sought to genetically disturb the DNA topology surrounding the bbd18 locus without deleting the gene. Spirochetes with an antibiotic cassette inserted downstream of the gene, between bbd17 and bbd18, were significantly attenuated in mice, while a similar upstream insertion, between bbd18 and bbd19, did not affect infectivity, suggesting that an unidentified cis element downstream of bbd18 may encode a virulence-associated factor critical for infection.
Abstract:
The history of comitology, the system of implementation committees that control the Commission in the execution of delegated powers, has been characterised by institutional tensions. The crux of these tensions has often been the role of the European Parliament and its quest to be granted powers equal to those of the Council. Over time this tension has been resolved through a series of inter-institutional agreements and Comitology Decisions, essentially giving the Parliament incremental increases in power. This process came to a head with the 2006 Comitology reform and the introduction of the regulatory procedure with scrutiny (RPS). After just over three years of experience with the RPS procedure, and a revision of the entire acquis communautaire, the Treaty of Lisbon has made it redundant through the creation of Delegated Acts (Article 290 TFEU), which give the Parliament equal rights of oversight. This article aims to evaluate the practical implications that Delegated Acts will entail for the Parliament, principally by using the years of experience with the RPS to better understand the challenges ahead. This analysis will be of interest to those following the study of comitology and of formal and informal inter-institutional relations, and also to practitioners who will have to work with Delegated Acts in the future.
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience under the capacity constraints of the wireless networks by deciding what content to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended content ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit on dedicated spectrum and thus do not interfere; and 'in-band', in which they share the spectrum and must mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if social network applications can provide predictions of how social content disseminates, the wireless network can schedule transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method mitigates the performance degradation of content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
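The core what-to-deliver decision can be sketched as follows: under multicast, one transmission of a content serves every interested user, so each content carries a reward summed over its interested users and an airtime cost, and choosing what to schedule under the capacity constraint reduces to a 0/1 knapsack. The sketch below, with invented numbers, is a simplification of the dissertation's joint optimization, which also handles timing, channel state and precaching.

```python
# Toy sketch of the core scheduling decision: with multicast, transmitting
# a content once serves every interested user, so each content has a reward
# equal to the summed user-level rewards and a cost equal to its airtime.
# Choosing what to deliver under the capacity constraint is then a 0/1
# knapsack, solved here by dynamic programming. All numbers are invented.
def schedule(contents, capacity):
    """contents: list of (name, airtime, total_reward); capacity: slots."""
    best = [(0.0, [])] * (capacity + 1)  # best[c] = (reward, chosen names)
    for name, airtime, reward in contents:
        for c in range(capacity, airtime - 1, -1):
            cand = best[c - airtime][0] + reward
            if cand > best[c][0]:
                best[c] = (cand, best[c - airtime][1] + [name])
    return best[capacity]

catalog = [
    # (content, airtime slots, reward = sum over interested users)
    ("news_clip",  3, 9.0),   # many interested users -> high multicast reward
    ("niche_blog", 1, 2.5),
    ("hd_movie",   7, 8.0),
    ("meme_pack",  2, 4.0),
]
reward, chosen = schedule(catalog, capacity=8)
print(f"deliver {chosen} for total reward {reward:.1f}")
```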