953 results for General-purpose computing
Abstract:
Pathfinder is a performance-game for a solo drummer, exploring the synergies between multiple contemporary creative practices. The work navigates between music composition, improvisation, projection/light art and game art. At its heart lies a bespoke electro-acoustic instrument, the augmented drum-kit, used not only to provide the sonic content of the work in real time, but also as a highly expressive game controller that interacts with an instrument-specific game. The musical instrument offers a much wider range of expressive possibilities, control and tactile feedback than a traditional general-purpose game controller, and as a result it affords a more diverse and nuanced gameplay performance. Live electronics, lights, projections and the drum-kit all make up the performance-game’s universe, within which the performer has to explore, adapt, navigate and complete a journey.
Abstract:
Studies on the reporting of intrafamily violence against children and adolescents have prompted diverse approaches and interpretive perspectives among professionals, showing the complexity and breadth of this phenomenon, so present in society. Drawing on Foucault, this study defends the following thesis: the act of reporting violence constitutes an exercise of power by the reporter and an act of resistance against the perpetuation of violence. The general objective of the study was to understand the process of reporting intrafamily violence against children and adolescents in the municipality of Rio Grande/RS; the specific objectives were to analyse the reports filed between January 2009 and May 2014 at an institution for the protection of children and adolescents in Rio Grande/RS, and to understand how health professionals have strengthened and emboldened themselves to report violence against children and adolescents in Rio Grande/RS. The study was carried out in two stages. The quantitative stage consisted of documentary research on 800 case files opened between January 2009 and May 2014 at a Specialised Reference Centre for Social Assistance (CREAS) in Rio Grande, focusing on sociodemographic variables of the victims and aggressors and on the type of violence and of the report. The prevalent profile was of white, female children and adolescents, aged between seven and 14, living in peripheral neighbourhoods. Most aggressors were male, aged between 20 and 40, with a low level of schooling. The mother was also identified as the main perpetrator of the aggressions, followed by the father and the stepfather. Sexual, physical and psychological violence predominated. Most reports forwarded to protection agencies were made by family members and were triggered mainly by the evidence of physical signs. The qualitative stage consisted of semi-structured interviews with health professionals who had reported acts of violence. A discursive textual analysis of the data was carried out, from which two categories emerged: Courage of truth strengthened by knowledge, and Courage of truth: knowledge of the self and care of the self. The health professionals adopted reporting as an exercise of power vis-à-vis the aggressor and as a form of resistance to and confrontation of violence. In the exercise of their freedom, they filed the reports, which constitutes an ethical action, especially because they consider themselves professionals committed to the well-being and protection of their patients. All ethical procedures were observed, in accordance with Resolution No. 466/2012.
Abstract:
Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area efficient, low power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and attempts to present a candidate design of an FPU that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
Abstract:
Reasoning systems have reached a high degree of maturity in the last decade. However, even the most successful systems are usually not general-purpose problem solvers but are typically specialised in problems in a certain domain. The MathWeb Software Bus (MathWeb-SB) is a system for combining reasoning specialists via a common software bus. We describe the integration of the lambda-clam system, a reasoning specialist for proofs by induction, into the MathWeb-SB. Due to this integration, lambda-clam now offers its theorem-proving expertise to other systems in the MathWeb-SB. On the other hand, lambda-clam can use the services of any reasoning specialist already integrated. We focus on the latter and describe first experiments on proving theorems by induction using the computational power of the MAPLE system within lambda-clam.
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main competitor to GPRM for solving three well-known problems on both platforms: LU factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
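As a hedged illustration of the granularity point above (combining smaller tasks into larger ones to amortise task-creation overhead), the sketch below chunks a row-wise image convolution into a configurable number of coarse tasks. It uses Python's standard thread pool rather than GPRM, OpenMP or the thesis's actual benchmark code, and the chunk_count parameter is purely illustrative.

    # Minimal sketch: control task granularity by grouping rows into coarse chunks.
    # This illustrates the chunking idea only; it is not GPRM's runtime or API.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def convolve_rows(image, kernel, row_start, row_end):
        # Convolve a horizontal band of the image (1D kernel along each row).
        out = np.empty((row_end - row_start, image.shape[1]))
        for i, r in enumerate(range(row_start, row_end)):
            out[i] = np.convolve(image[r], kernel, mode="same")
        return out

    def parallel_convolve(image, kernel, chunk_count=8):
        # Fewer, larger chunks -> lower scheduling overhead;
        # more chunks -> better load balance.
        bounds = np.linspace(0, image.shape[0], chunk_count + 1, dtype=int)
        with ThreadPoolExecutor() as pool:
            parts = pool.map(lambda b: convolve_rows(image, kernel, b[0], b[1]),
                             zip(bounds[:-1], bounds[1:]))
        return np.vstack(list(parts))

    if __name__ == "__main__":
        img = np.random.rand(1024, 1024)
        k = np.array([0.25, 0.5, 0.25])
        result = parallel_convolve(img, k, chunk_count=16)

Sweeping chunk_count is a simple way to observe the trade-off the abstract describes: very small tasks are dominated by creation and distribution overhead, while overly large tasks limit load balancing.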
Abstract:
The general purpose of this work was to develop a proposal for a theoretical model of decision making focused on cost management in small family farms, capable of supporting the decision-making process. The specific objectives were: i) to develop a structured methodology that forms a literature base providing scientific support for the research; ii) to derive from the literature the dimensions and variables of the models needed to propose an application and a toolset; and iii) to apply the proposed model within those dimensions and variables, validate every stage, and draw the conclusions needed to verify the effectiveness of the applied model. In terms of methodology, a structured approach was used to build a bibliographic portfolio of 29 articles and, through the research constructs and based on an existing model, to develop an activity-segmentation model to aid farmers of small family farms in decision making with an emphasis on cost management. The model was applied to six family farms in the southwest region of Paraná and the west of Santa Catarina. Regarding the results, it was found that the model can be applied to the specific context for which it was created. It was also possible to establish that the proposed model is valid and relevant for supporting the management of family farms by identifying, through the segmentation of productive activities, investment priorities guided by the balance between cost management and the return of each activity. Moreover, it was possible to segment the activities of the six surveyed properties, showing that property 02 has the most complex segmentation and should be divided into three groups of activities, which can be carried out in parallel without restrictions between them. The other properties have simpler activity segmentations, which makes it possible to see which groups of activities require prioritised investment. Specifically, properties 01 and 04 have the highest-priority investment groups, with the most prominent groups of activities representing 49.32% and 47.40% respectively: grain production on property 01, and grain production, beef cattle and eggs on property 04.
Abstract:
The general purpose of this study is to investigate the degree of heavy metal accumulation in the hard and soft tissue of sea urchins, and to determine which of these tissues is the most suitable bioindicator for lead and cadmium in the environment of the sampling stations. The assessment followed the MOOPAM methodology. Samples were prepared and classified according to sea urchin organ (soft tissue, hard tissue, tube feet, test, lantern structure and spines), and lead and cadmium were then measured in them. The results of this study show that hard tissue is a better index of lead and cadmium than soft tissue. The bioaccumulation of lead in the related tissues was as follows: soft tissue = 21, hard tissue = 28.1, test = 20.8, lantern structure = 20.5 and spines = 23.9. The bioaccumulation of cadmium in the related tissues was as follows: soft tissue = 9.7, hard tissue = 5.01, test = 4.2, lantern structure = 4.06 and spines = 5.53.
Abstract:
As time has passed, the general-purpose programming paradigm has evolved, producing different hardware architectures whose characteristics differ widely. In this work we are going to demonstrate, through different applications belonging to the field of Image Processing, the existing differences between three Nvidia hardware platforms: two of them belong to the GeForce graphics card series, the GTX 480 and the GTX 980, and one is a low-power platform whose purpose is to run embedded applications while providing extreme efficiency: the Jetson TK1. As test applications we will use five examples from the Nvidia CUDA Samples. These applications are directly related to Image Processing, as the algorithms they use are similar to those from the field of medical image registration. After the tests, it will be shown that the GTX 980 is both the device with the highest computational power and the one with the highest power consumption, that the Jetson TK1 is the most efficient platform, and that the GTX 480 produces more heat than the others; we will also examine other effects produced by the architectural differences between the devices.
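For readers who want to perform this kind of platform comparison themselves, the hedged sketch below shows one conventional way of reducing raw measurements to comparable metrics (throughput in GFLOP/s and energy efficiency in GFLOP/s per watt). The helper class, device names and numbers are placeholders, not measurements or code from this work, which relies on the Nvidia CUDA Samples for its benchmarks.

    # Minimal sketch: derive throughput (GFLOP/s) and efficiency (GFLOP/s per watt)
    # from elapsed time and board power. All inputs are illustrative placeholders,
    # not measurements from the thesis.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        device: str
        flop_count: float      # floating-point operations executed by the kernel
        elapsed_s: float       # measured kernel time in seconds
        avg_power_w: float     # measured average board power in watts

        @property
        def gflops(self) -> float:
            return self.flop_count / self.elapsed_s / 1e9

        @property
        def gflops_per_watt(self) -> float:
            return self.gflops / self.avg_power_w

    # Example usage with made-up numbers for two hypothetical devices:
    runs = [
        Measurement("device_a", 2.0e12, 1.5, 180.0),
        Measurement("device_b", 2.0e12, 0.6, 200.0),
    ]
    for m in runs:
        print(f"{m.device}: {m.gflops:.1f} GFLOP/s, {m.gflops_per_watt:.2f} GFLOP/s/W")

Reporting both metrics side by side is what allows one device to be the fastest while another is the most efficient, as the abstract concludes for the GTX 980 and the Jetson TK1.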
Abstract:
The research work is devoted to topical problems of the development management of industrial enterprises. The general purpose of this work is the choice and justification of a rational enterprise-development evaluation model and its subsequent application to assess the level of enterprise development, as well as the formation of recommendations for enterprise management. Theoretical aspects of the development management of enterprises were generalised. The approaches to understanding the essence of the enterprise-development category and its types were considered. The evaluation models of enterprise development, their advantages and disadvantages, and the difficulties of their implementation were investigated. The requirements for the formation of an evaluation system for enterprise development were summarised. The features of the formation and application of an Index of Enterprise Development were determined. In the empirical part, data about the investigated enterprises was collected from their official websites and complemented with further data from other statistical websites. The analysis was based on the annual financial statements of the companies. To assess the level of enterprise development, the model proposed by Feshchur and Samulyak (2010) was chosen. This model involves the calculation of the Index of Enterprise Development using partial indicators, their reference values and weights. An analysis of the development of Ukrainian enterprises that produce sauces was conducted. OJSC “LZHK” had the highest value of the Index of Enterprise Development in 2013 and 2015, at 0.78 and 0.76 respectively. In 2014 the highest value of the Index belonged to PJSC “Volynholdinh” and amounted to 0.74. OJSC “LZHK” had the highest average value of the Index of Enterprise Development over 2013–2015, at 0.70. PJSC “Chumak” had the lowest average value of the Index of Enterprise Development, at 0.59. In order to raise the level of enterprise development, it was suggested to reduce production costs and staff turnover and to increase the involvement of employees.
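The abstract does not reproduce the Feshchur and Samulyak (2010) formula itself; as a hedged sketch, an index built from partial indicators, reference values and weights as described above typically takes a weighted normalised form such as

    I_{ED} = \sum_{i=1}^{n} w_i \, \frac{x_i}{x_i^{\mathrm{ref}}}, \qquad \sum_{i=1}^{n} w_i = 1,

where x_i is the i-th partial indicator, x_i^{ref} its reference value and w_i its weight; values of I_{ED} approaching 1 then indicate that the indicators meet their reference levels. This generic form is an assumption consistent with the description above, not the model's exact formula.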
Abstract:
The general purpose of this work is to describe and analyse the financing phenomenon of crowdfunding and to investigate the relations among crowdfunders, project creators and crowdfunding websites. More specifically, it also intends to describe the profile differences between major crowdfunding platforms, such as Kickstarter and Indiegogo. The findings are supported by literature gathered from different scientific research papers. In the empirical part, data about Kickstarter and Indiegogo was collected from their websites and complemented with further data from other statistical websites. To obtain specific information, such as the satisfaction of entrepreneurs with both platforms, a satisfaction survey was applied to 200 entrepreneurs from different countries. To identify the profile of users of the Kickstarter and Indiegogo platforms, a multivariate analysis was performed, using a Hierarchical Cluster Analysis for each platform under study. Descriptive analysis was used to explore information about the popularity of the platforms, the average cost and most popular area of projects, the profile of users and the future opportunities of the platforms. To assess differences between groups and associations between variables, and to answer the research hypotheses, an inferential analysis was applied. The results show that Kickstarter and Indiegogo are among the most popular crowdfunding platforms. Both have thousands of users, who are generally satisfied, and each uses an individual approach to crowdfunders. Despite this, both could benefit from further improving their services. Furthermore, according to the results it was possible to observe that there is a direct and positive relationship, per platform, between the money needed for the projects and the money collected from investors for the projects.
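As a hedged sketch of the Hierarchical Cluster Analysis step mentioned above, the code below standardises a few hypothetical per-creator features and cuts a Ward-linkage dendrogram into clusters; the feature names and values are illustrative placeholders, not the study's dataset or results.

    # Minimal sketch of a hierarchical cluster analysis on platform-user features.
    # Feature names and values are hypothetical placeholders.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    # Hypothetical features per campaign creator: goal (USD), amount raised (USD),
    # number of backers, and a satisfaction score from a survey.
    features = np.array([
        [5000, 6200, 130, 4.2],
        [20000, 3500, 60, 2.8],
        [1500, 1800, 45, 4.5],
        [80000, 95000, 2100, 4.8],
    ])

    # Standardise, build a Ward-linkage dendrogram, and cut it into k clusters.
    Z = linkage(zscore(features, axis=0), method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(labels)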
Abstract:
Nowadays, entrepreneurship is one of the main objects of the internal policy of each country. More and more women start their own businesses and thus become integral participants in entrepreneurial activities. However, despite the abundance of scientific publications, female entrepreneurship remains a poorly understood phenomenon that needs to be carefully scrutinised. The general purpose of this work is to describe and analyse the phenomenon of female entrepreneurship in the world in general and, in particular, in Belarus. Indeed, it intends to determine the factors that drive women's entrepreneurship in Belarus. The findings are supported by literature gathered from different scientific studies and current statistical data. The data used in the empirical part was collected from the World Bank Enterprise Surveys and comprises the responses of representatives of 360 companies selected randomly from the population of Belarusian companies. With the help of descriptive statistics and of simple logistic regression models, used to determine which economic, social, fiscal and legal environmental factors affect female entrepreneurial activity, it was possible to understand female involvement in the business activities and society of the country.
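As a hedged illustration of the kind of logistic regression described above, the sketch below fits a simple model of a binary female-leadership outcome on a few candidate factors; the column names and values are hypothetical placeholders, not variables or figures from the World Bank Enterprise Surveys sample used in this work.

    # Minimal sketch of a logistic regression on firm-level factors.
    # Column names and values are hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.DataFrame({
        "female_owner":   [1, 0, 1, 0, 1, 0, 0, 1],   # outcome: female-led firm
        "firm_size":      [12, 250, 90, 40, 5, 7, 60, 15],
        "access_finance": [1, 0, 1, 1, 0, 0, 1, 0],   # access-to-finance obstacle flag
        "tax_burden":     [3, 4, 2, 5, 3, 2, 2, 1],   # perceived tax burden (ordinal)
    })

    X = df[["firm_size", "access_finance", "tax_burden"]]
    y = df["female_owner"]

    model = LogisticRegression().fit(X, y)
    # Coefficient signs indicate how each factor shifts the odds of female leadership.
    for name, coef in zip(X.columns, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")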
Abstract:
The past several years have seen the surprising and rapid rise of Bitcoin and other “cryptocurrencies.” These are decentralized peer-to-peer networks that allow users to transmit money, to compose financial instruments, and to enforce contracts between mutually distrusting peers, and that show great promise as a foundation for financial infrastructure that is more robust, efficient and equitable than ours today. However, it is difficult to reason about the security of cryptocurrencies. Bitcoin is a complex system, comprising many intricate and subtly-interacting protocol layers. At each layer it features design innovations that (prior to our work) have not undergone any rigorous analysis. Compounding the challenge, Bitcoin is but one of hundreds of competing cryptocurrencies in an ecosystem that is constantly evolving. The goal of this thesis is to formally reason about the security of cryptocurrencies, reining in their complexity, and providing well-defined and justified statements of their guarantees. We provide a formal specification and construction for each layer of an abstract cryptocurrency protocol, and prove that our constructions satisfy their specifications. The contributions of this thesis are centered around two new abstractions: “scratch-off puzzles,” and the “blockchain functionality” model. Scratch-off puzzles are a generalization of the Bitcoin “mining” algorithm, its most iconic and novel design feature. We show how to provide secure upgrades to a cryptocurrency by instantiating the protocol with alternative puzzle schemes. We construct secure puzzles that address important and well-known challenges facing Bitcoin today, including wasted energy and dangerous coalitions. The blockchain functionality is a general-purpose model of a cryptocurrency rooted in the “Universal Composability” cryptography theory. We use this model to express a wide range of applications, including transparent “smart contracts” (like those featured in Bitcoin and Ethereum), and also privacy-preserving applications like sealed-bid auctions. We also construct a new protocol compiler, called Hawk, which translates user-provided specifications into privacy-preserving protocols based on zero-knowledge proofs.
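As a hedged illustration of the hash puzzle underlying Bitcoin-style mining, which the thesis generalises as scratch-off puzzles, the toy sketch below searches for a nonce that drives a SHA-256 digest below a difficulty target. It is not the thesis's construction, nor Bitcoin's exact block-header format.

    # Toy proof-of-work puzzle: find a nonce so that SHA-256(payload || nonce)
    # has a given number of leading zero bits. Illustration only.
    import hashlib
    import os

    def solve_puzzle(payload: bytes, difficulty_bits: int):
        target = 1 << (256 - difficulty_bits)
        while True:
            nonce = os.urandom(8)
            digest = hashlib.sha256(payload + nonce).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce, digest

    def verify(payload: bytes, nonce: bytes, difficulty_bits: int) -> bool:
        digest = hashlib.sha256(payload + nonce).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    if __name__ == "__main__":
        nonce, digest = solve_puzzle(b"example block contents", difficulty_bits=16)
        assert verify(b"example block contents", nonce, difficulty_bits=16)
        print(digest.hex())

Solving is expensive (expected work grows exponentially with difficulty_bits) while verification is a single hash, which is the asymmetry that alternative puzzle schemes, such as the thesis's scratch-off puzzles, aim to preserve while changing other properties.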
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts issued from classic, matrix-forming, linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results issued from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations, and showing the potential of applying this type of snapshot-based analysis to general-purpose CFD codes without the need for modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base-flow modification for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
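As a hedged illustration of the snapshot-based DMD idea discussed above, the sketch below implements the standard exact-DMD formulation on a generic snapshot matrix; it does not reproduce the discontinuous Galerkin Navier–Stokes solver or the BiGlobal comparison used in the work.

    # Minimal sketch of snapshot-based dynamic mode decomposition (standard
    # exact-DMD formulation). Snapshot generation by a flow solver is assumed.
    import numpy as np

    def dmd(snapshots: np.ndarray, rank: int):
        """snapshots: columns are flow-field snapshots equispaced in time.
        Returns eigenvalues and modes of the best-fit linear operator A
        with x_{k+1} ~= A x_k."""
        X, Y = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]
        # Project A onto the leading POD subspace: A_tilde = U* Y V S^{-1}
        A_tilde = U.conj().T @ Y @ V / s
        eigvals, W = np.linalg.eig(A_tilde)
        modes = (Y @ V / s) @ W          # exact DMD modes
        return eigvals, modes

    # Usage note: growth rates follow from log(eigenvalue)/dt; modes with
    # |eigenvalue| > 1 correspond to exponentially growing (unstable) structures,
    # which is how snapshot DMD recovers linear global instability information.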
Abstract:
Master's dissertation in Observation and Analysis of the Educational Relationship, Faculdade de Ciências Humanas e Sociais, Univ. do Algarve, 2004
Abstract:
The present research work, under the theme “A integração de Atiradores Especiais num Batalhão de Infantaria” (the integration of designated marksmen into an Infantry Battalion), investigates the integration of designated marksmen into an Infantry Battalion-level unit. To that end, the general objective defined was to analyse the implications of employing designated marksmen, in the current operational environment, in support of Infantry Battalion-level units. In order to accomplish this, the work was limited to the study of the Wheeled Mechanised Infantry Battalions of the Brigada de Intervenção, drawing whenever necessary on the Stryker Battalions of the United States Army, given their combat experience in several theatres of operations. To carry out this work, an analysis model was structured, following a qualitative approach of a descriptive nature. The hypothetico-deductive method was used, through a top-down progression beginning with the description of the operational environment, the employment of Infantry Battalions and the use of designated marksmen. The analysis of the results shows that the current operational environment is highly complex, with a tendency for future armed conflicts to occur in failed states and in urban areas, against a predominantly irregular threat. Infantry Battalions therefore need designated marksmen in order to deliver precise direct fire. It is concluded that, given the growing complexity of the operational environment and the type of threat, the Portuguese Infantry Battalions that may in the future be employed in this type of theatre of operations should have designated marksmen able to deliver precise direct fire at medium ranges, integrated at the level of the rifle Sections.