968 results for Enterprise Java Open Source Architecture (EJOSA)


Relevance: 100.00%

Abstract:

Null dereferencing is one of the most frequent bugs in Java systems, causing programs to crash due to an uncaught NullPointerException. Developers often fix this bug by introducing a guard (i.e., a null check) on the potentially-null objects before using them. In this paper we investigate the null checks in 717 open-source Java systems to understand when and why developers introduce null checks. We find that 35% of the if-statements are null checks. A deeper investigation shows that 71% of the checked-for-null objects are returned from method calls. This indicates that null checks have a serious impact on performance and that developers introduce null checks when they use methods that return null.
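As a concrete illustration of the guard pattern the study counts, here is a minimal Java sketch (the findDisplayName helper and its data are hypothetical) that checks a potentially-null return value before dereferencing it:

```java
import java.util.Map;

public class GuardExample {
    // Hypothetical lookup that, like many library methods, returns null when no entry exists.
    static String findDisplayName(Map<String, String> users, String id) {
        return users.get(id); // Map.get returns null for missing keys
    }

    public static void main(String[] args) {
        Map<String, String> users = Map.of("u1", "Ada");

        String name = findDisplayName(users, "u2");
        // Guard (null check) on the potentially-null return value before use,
        // preventing an uncaught NullPointerException.
        if (name != null) {
            System.out.println(name.toUpperCase());
        } else {
            System.out.println("unknown user");
        }
    }
}
```

The guard turns a would-be crash (calling toUpperCase() on null) into an explicit fallback branch, which is exactly the kind of if-statement the study measures.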

Relevance: 100.00%

Abstract:

Dannie Jost gave an introductory presentation on the emergence of the open hardware phenomenon, including synthetic biology and other technological environments, at the HEPTech Workshop on Open Hardware held on June 13 at GSI, Darmstadt (Germany). The workshop was organized by CERN and GSI. The event addressed the OSHW phenomenon and its implications for academia and industry, with special attention to knowledge and technology transfer issues. Consideration was given to the various aspects of open source hardware development and how these are dealt with in academia and industry. Presentations from legal experts, academics, practitioners and business provided input for the discussions and exchange of ideas.

Relevance: 100.00%

Abstract:

Dannie Jost gave a presentation outlining some of the challenges that open source hardware poses to the patent system at the "Open Knowledge Festival" (September 19, 2012; Helsinki, Finland), under the topic stream on open design, hardware, manufacturing and making. The topic stream generated considerable discussion; it served to educate an audience that is usually very averse to patents and copyright, and helped the researcher understand the conflicts surrounding emerging technologies, in particular digital technologies and the (digitally enabled) maker movement.

Relevance: 100.00%

Abstract:

The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into the ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which teaching skills can be used for public engagement, and vice versa.

A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and because of the ability to reach non-standard audiences, those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation.

Several chapters focus not on the literary and philological side of classics but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and without academia, that study, record and conserve ancient objects.

Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions of which all scholars need to be aware and self-critical.

Relevance: 100.00%

Abstract:

Purpose - The purpose of this research paper is to demonstrate how existing performance measurement may be adopted to measure and manage performance in extended enterprises. Design/methodology/approach - The paper reviews the literature on performance measurement and extended enterprises. It explains the collaborative architecture of an extended enterprise and demonstrates this architecture through a case study. A model for measuring and managing performance in extended enterprises is developed using the case study. Findings - The research found that, due to structural differences between traditional and extended enterprises, the systems required to measure and manage the performance of extended enterprises, whilst being based upon existing performance measurement frameworks, would be structurally and operationally different. Based on this, a model for measuring and managing performance in extended enterprises is proposed which includes intrinsic and extrinsic inter-enterprise coordinating measures. Research limitations/implications - There are two limitations to this research. First, the evidence is based on a single case, so further cases should be studied to establish the generalisability of the presented results. Second, the practical limitations of the EE performance measurement model should be established through longitudinal action research. Practical implications - In practice, the proposed model requires collaborating organisations to be more open and to share critical performance information with one another. This will require changes in practices and attitudes. Originality/value - The main contribution this paper makes is that it highlights the structural differences between traditional and collaborative enterprises and specifies the performance measurement and management requirements of these collaborative organisations. © Emerald Group Publishing Limited.

Relevance: 100.00%

Abstract:

Interpolated data are an important part of environmental information exchange, as many variables can only be measured at discrete sampling locations. Spatial interpolation is a complex operation that has traditionally required expert treatment, making automation a serious challenge. This paper presents a few lessons learnt from INTAMAP, a project that is developing an interoperable Web Processing Service (WPS) for the automatic interpolation of environmental data using advanced geostatistics, adopting a Service Oriented Architecture (SOA). The "rainbow box" approach we followed provides access to the functionality at a whole range of different levels. We show here how the integration of open standards, open source and powerful statistical processing capabilities allows us to automate a complex process while offering users a level of access and control that best suits their requirements. This facilitates benchmarking exercises as well as the regular reporting of environmental information without requiring remote users to have specialized skills in geostatistics.
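For orientation, here is a minimal Java sketch of how a client might invoke an interpolation process on such a WPS over HTTP. The endpoint URL, process identifier and request body are schematic placeholders, not the actual INTAMAP interface:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WpsInterpolationClient {
    public static void main(String[] args) throws Exception {
        // Schematic OGC WPS 1.0.0 Execute request; the process identifier is a placeholder.
        String executeRequest = """
            <wps:Execute service="WPS" version="1.0.0"
                         xmlns:wps="http://www.opengis.net/wps/1.0.0"
                         xmlns:ows="http://www.opengis.net/ows/1.1">
              <ows:Identifier>interpolate</ows:Identifier>
              <!-- DataInputs would carry the observations to be interpolated -->
            </wps:Execute>
            """;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/wps"))   // placeholder endpoint
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(executeRequest))
                .build();

        // The reply is an ExecuteResponse document describing (or embedding) the interpolated surface.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

Wrapping the geostatistical machinery behind a standard Execute call like this is what lets non-specialist clients trigger automatic interpolation without geostatistics expertise.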

Relevance: 100.00%

Abstract:

The paper presents in brief the Bulgarian Digital Mathematical Library BulDML and the Czech Digital Mathematical Library DML-CZ. Both libraries use the open source software DSpace, and both are partners in the European Digital Mathematics Library EuDML. We describe their content and metadata schemas, outline the system architecture, and give an overview of usage statistics.
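DSpace-based repositories such as these typically expose their metadata for harvesting via OAI-PMH. As an illustration (the base URL below is a placeholder, not the address of either library), a minimal Java harvesting request could look like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OaiHarvester {
    public static void main(String[] args) throws Exception {
        // Placeholder base URL; DSpace installations usually expose OAI-PMH under /oai/request.
        String baseUrl = "https://example.org/oai/request";
        String url = baseUrl + "?verb=ListRecords&metadataPrefix=oai_dc";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The response is an XML document with Dublin Core records describing the items.
        System.out.println(response.body().substring(0, Math.min(500, response.body().length())));
    }
}
```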

Relevance: 100.00%

Abstract:

Starting from the Schumpeterian producer-driven understanding of innovation, followed by user-generated solutions and collaborative forms of co-creation, scholars have investigated the drivers and the nature of the interactions underpinning success in various ways. The innovation literature has come a long way: open innovation has attracted researchers to investigate problems such as the compatibility of external resources, networks of innovation, and open source collaboration. Openness itself has gained various shades in the different strands of literature. In this paper the author provides an overview and a draft evaluation of the different models of open innovation, illustrated with empirical findings from various fields drawn from the literature. She points to the relevance of transaction costs in shaping the viable forms of (open) innovation strategies of firms, and to the importance of defining the locus of innovation for further analysis of firm-level and interaction-level formations.

Relevance: 100.00%

Abstract:

In Marxist frameworks “distributive justice” depends on extracting value through a centralized state. Many new social movements—peer to peer economy, maker activism, community agriculture, queer ecology, etc.—take the opposite approach, keeping value in its unalienated form and allowing it to freely circulate from the bottom up. Unlike Marxism, there is no general theory for bottom-up, unalienated value circulation. This paper examines the concept of “generative justice” through an historical contrast between Marx’s writings and the indigenous cultures that he drew upon. Marx erroneously concluded that while indigenous cultures had unalienated forms of production, only centralized value extraction could allow the productivity needed for a high quality of life. To the contrary, indigenous cultures now provide a robust model for the “gift economy” that underpins open source technological production, agroecology, and restorative approaches to civil rights. Expanding Marx’s concept of unalienated labor value to include unalienated ecological (nonhuman) value, as well as the domain of freedom in speech, sexual orientation, spirituality and other forms of “expressive” value, we arrive at an historically informed perspective for generative justice. 

Relevance: 100.00%

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful to decide which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication.

Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
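To give a flavour of the kind of signal such an approach could draw on (this is an illustration, not the speaker's actual method), a minimal Java sketch that asks the public GitHub search API how many repositories mention a given dataset might look like this; the query term is just an example:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DatasetReuseProbe {
    public static void main(String[] args) throws Exception {
        // Illustrative query: repositories that mention a given open data portal or dataset name.
        String query = URLEncoder.encode("datos.ua.es", StandardCharsets.UTF_8);
        String url = "https://api.github.com/search/repositories?q=" + query;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Accept", "application/vnd.github+json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON payload includes "total_count", a rough proxy for how often the dataset is reused.
        System.out.println(response.body());
    }
}
```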

Relevance: 100.00%

Abstract:

2016 was a breakout year for the virtual reality industry, and within this field 3D surveying plays an important role and has received increasing attention. This project aims to establish and optimise a WebGL three-dimensional broadcast platform combined with streaming media technology, taking a streaming media server and in-browser panoramic video playback as the application context. It also discusses the architecture from the streaming media server to the panoramic media player and analyses the relevant theoretical problems. The paper focuses on setting up the streaming media platform, building the WebGL player environment, analysing different types of sphere model, and the 3D mapping technology. The main work comprises the following points. First, a streaming service platform was built on the EasyDarwin open source streaming media server; it receives an RTSP stream and forwards HLS video segments to clients. Second, an HTML5/WebGL panoramic video player was written on top of the Three.js library, with jQuery-based browser playback controls. Third, the latitude-longitude sphere model from the Three.js library was analysed with respect to the WebGL rendering method, and its drawbacks and the starting points for improvement were identified. Fourth, on the basis of the Schneider transform principle, a Schneider sphere projection model was established, and the resulting OBJ file was converted to a JS file for the media player to read, ultimately enabling high-precision, plugin-free, real-time panoramic video playback. Finally, the whole project is summarised and directions for future optimisation and market extension are put forward.
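For reference, the latitude-longitude sphere model mentioned above pairs vertices on a sphere with equirectangular texture coordinates, which is how a 360° video frame is wrapped around the viewer. The following minimal Java sketch shows that mapping; segment counts are illustrative and this is not the thesis' Three.js code:

```java
// Minimal illustration of a latitude/longitude sphere mesh with equirectangular
// texture coordinates, the mapping commonly used for 360° panoramic video.
public class LatLonSphere {
    public static void main(String[] args) {
        int latSegments = 32, lonSegments = 64;   // illustrative resolution
        double radius = 1.0;

        for (int i = 0; i <= latSegments; i++) {
            double v = (double) i / latSegments;      // texture V: 0 at the north pole, 1 at the south pole
            double theta = v * Math.PI;               // polar angle
            for (int j = 0; j <= lonSegments; j++) {
                double u = (double) j / lonSegments;  // texture U wraps once around the equator
                double phi = u * 2.0 * Math.PI;       // azimuthal angle

                double x = radius * Math.sin(theta) * Math.cos(phi);
                double y = radius * Math.cos(theta);
                double z = radius * Math.sin(theta) * Math.sin(phi);

                // Each vertex carries a sphere position and a (u, v) into the equirectangular
                // frame; near the poles many texels crowd into a small area, which is the kind
                // of distortion the improved projection described in the abstract aims to reduce.
                System.out.printf("v=(%.3f, %.3f, %.3f) uv=(%.3f, %.3f)%n", x, y, z, u, v);
            }
        }
    }
}
```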

Relevance: 100.00%

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques: using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (Section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.

The experimental results show that, with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.

To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
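The following is not ZSIM's lock-free, SIMD-oriented data structure, but a minimal Java sketch of the general principle it exploits: in a levelized netlist, every gate within a level writes only its own output net, so a level can be evaluated in parallel without explicit locks (the gate representation and circuit here are illustrative):

```java
import java.util.List;
import java.util.stream.IntStream;

// Minimal sketch of lock-free, level-parallel logic gate evaluation; it illustrates the
// general principle behind parallel gate simulation, not ZSIM's actual SIMD data structure.
public class LevelParallelSim {
    // Net values for the whole circuit; each gate writes only its own output slot,
    // so gates within one level can be evaluated concurrently without locks.
    static boolean[] nets;

    record Gate(int out, int inA, int inB, char op) {} // op: '&', '|', '^'

    static void evaluateLevel(List<Gate> level) {
        IntStream.range(0, level.size()).parallel().forEach(i -> {
            Gate g = level.get(i);
            boolean a = nets[g.inA()], b = nets[g.inB()];
            nets[g.out()] = switch (g.op()) {
                case '&' -> a & b;
                case '|' -> a | b;
                default  -> a ^ b;
            };
        });
    }

    public static void main(String[] args) {
        nets = new boolean[]{true, false, false, false, false}; // nets 0 and 1 are primary inputs
        List<Gate> level1 = List.of(new Gate(2, 0, 1, '&'), new Gate(3, 0, 1, '|'));
        List<Gate> level2 = List.of(new Gate(4, 2, 3, '^'));
        evaluateLevel(level1);  // levels are processed in order; within a level, in parallel
        evaluateLevel(level2);
        System.out.println("net 4 = " + nets[4]); // expected: true (false XOR true)
    }
}
```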

Relevance: 100.00%

Abstract:

This report presents the work carried out at Portugal Telecom Inovação over six months. The project was part of the Medigraf product, a telemedicine platform developed and commercialised by Portugal Telecom Inovação and intended to be integrated into healthcare organisations. An information system is a very important component of a healthcare organisation: it is through it that all of the organisation's information is processed and communicated. For a new system being incorporated into an organisation to reach its full potential, there must be full integration and interoperability between the new system and the existing information system. Achieving integration between Medigraf and the information systems already in place in healthcare organisations is therefore indispensable. To that end, it is necessary to determine the requirements for integration and information sharing between heterogeneous systems, explaining the concepts of standards, interoperability and terminologies. The state of the art revealed that integration between heterogeneous systems in healthcare organisations is difficult to achieve. Among the existing organisations, I highlighted HL7 (Health Level Seven) for its advances in this area and for the development of two versions of a message mediation standard (HL7 v2.x and HL7 v3) aimed at achieving interoperability between heterogeneous systems. After studying the HL7 v3 messaging standard in more depth, it was necessary to adopt an integration architecture/topology in order to implement the standard. In this study, I identified the EAI (Enterprise Application Integration) family of solutions as the best option. In order to implement the HL7 v3 standard on top of the chosen architecture, I surveyed the existing software packages. That survey led to the choice of Mirth Connect as the best approach for implementing interoperability between Medigraf and an information system; this software acts as mediation middleware in the communication between heterogeneous systems. I selected two use cases of the standard for implementation in order to demonstrate its use. Natively, Mirth Connect does not support validation of HL7 v3 messages, supporting only HL7 v2.x; since Mirth Connect is open source software, I was able to develop a method capable of performing this validation. The method was published on the Mirth Corporation forum so that it could be shared. Finally, some conclusions are drawn and the future work that can be developed is outlined.
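HL7 v3 messages are XML documents defined by XML schemas, so one common way to check them outside an integration engine is plain XSD validation. The following minimal Java sketch (not the method developed in the report, which was implemented inside Mirth Connect; the schema and message file paths are placeholders) shows the idea using the standard javax.xml.validation API:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.File;

public class Hl7v3SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder paths: an HL7 v3 interaction schema and a message instance to validate.
        File schemaFile = new File("schemas/PRPA_IN201301UV02.xsd");
        File messageFile = new File("messages/patient-add.xml");

        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(schemaFile);
        Validator validator = schema.newValidator();

        try {
            validator.validate(new StreamSource(messageFile));
            System.out.println("Message conforms to the HL7 v3 schema.");
        } catch (org.xml.sax.SAXException e) {
            System.out.println("Validation failed: " + e.getMessage());
        }
    }
}
```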

Relevance: 100.00%

Abstract:

FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open FEA software to simulate 2D heat transfer in metal sheet laser cuts. The use of open source code (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned open FEM packages for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source, using the commercial ABAQUS software as the reference. A convective linear thin sheet heat transfer model, with and without material removal, is used. This article does not intend a full design of computer experiments. Our partial assessment shows that the thin sheet approximation turns out to be adequate in terms of the relative error for linear alumina sheets. Under mesh resolutions finer than 10e−5 m, the temperatures computed by the open and reference software differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
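As a rough sketch of the physical model being benchmarked (not any of the FEM codes compared in the article), the following Java snippet integrates 2D transient heat conduction in a thin sheet with a lumped convective loss and a gliding point source, using an explicit finite-difference scheme; all material and process parameters are illustrative:

```java
// Minimal explicit finite-difference sketch of 2D transient heat conduction in a thin
// sheet with a gliding point heat source and convective loss. All parameters are
// illustrative; this is not one of the FEM codes compared in the article.
public class MovingSourceHeat2D {
    public static void main(String[] args) {
        int nx = 100, ny = 40;
        double dx = 1e-3, dt = 1e-4;          // grid spacing [m], time step [s]
        double alpha = 1e-5;                  // thermal diffusivity [m^2/s]
        double hLoss = 5.0;                   // lumped convective loss coefficient [1/s]
        double tAmb = 300.0, qSource = 5e4;   // ambient temperature [K], source strength [K/s]
        double speed = 0.05;                  // gliding speed of the source [m/s]

        double[][] T = new double[nx][ny];
        for (double[] row : T) java.util.Arrays.fill(row, tAmb);

        for (int step = 0; step < 2000; step++) {
            // The point source glides along the mid-line of the sheet.
            int sx = Math.min(nx - 2, 1 + (int) (speed * step * dt / dx));
            int sy = ny / 2;

            double[][] Tn = new double[nx][ny];
            for (int i = 0; i < nx; i++)
                for (int j = 0; j < ny; j++) {
                    if (i == 0 || j == 0 || i == nx - 1 || j == ny - 1) { Tn[i][j] = tAmb; continue; }
                    double lap = (T[i+1][j] + T[i-1][j] + T[i][j+1] + T[i][j-1] - 4*T[i][j]) / (dx*dx);
                    double src = (i == sx && j == sy) ? qSource : 0.0;
                    Tn[i][j] = T[i][j] + dt * (alpha*lap - hLoss*(T[i][j] - tAmb) + src);
                }
            T = Tn;
        }
        System.out.printf("Peak temperature after 0.2 s: %.1f K%n", java.util.Arrays.stream(T)
                .flatMapToDouble(java.util.stream.DoubleStream::of).max().orElse(tAmb));
    }
}
```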