150 results for Open-source code
Abstract:
Our daily lives are becoming more and more dependent on smartphones as their capabilities increase. Smartphones are used in many ways, e.g. for payment systems or for assisting the lives of elderly or disabled people. Security threats to these devices are becoming more and more dangerous, since proper security tools for their protection are still lacking. Android has emerged as an open smartphone platform that allows modification even at the operating-system level and gives third-party developers, for the first time, the opportunity to develop kernel-based low-level security tools. Android quickly gained popularity among smartphone developers and beyond, since it is based on Java on top of an "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, which held the greatest market share among all smartphone OSs, even closed critical APIs to common developers and introduced application certification, because that OS was the main target of smartphone malware in the past. In fact, more than 290 malware variants designed for Symbian OS appeared between July 2004 and July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications that endanger critical smartphone applications and owners' privacy. Since signature-based approaches mainly detect known malware, anomaly-based approaches can be a valuable addition to such systems. They rely on mathematical algorithms that process data describing the state of a given device. To obtain this data, a monitoring client is needed that extracts usable information (features) from the monitored system. Our approach follows a dual system for analyzing these features. On the one hand, functionality for light-weight on-device detection is provided; on the other hand, since most algorithms are resource-intensive, remote feature analysis is provided as well. This dual system enables event-based detection that can react to the current detection need. In our ongoing research we aim to investigate the feasibility of light-weight on-device detection for certain occasions; on other occasions, whenever significant changes are detected on the device, the system can trigger remote detection with heavy-weight algorithms for better detection results. In the absence of the server, or as a supplementary approach, we also consider a collaborative scenario in which mobile devices sharing a common objective are enabled by a collaboration module to share information such as intrusion detection data and results. This is based on an ad-hoc network mode that can be provided by the WiFi or Bluetooth adapter nearly every smartphone possesses.
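As a hedged illustration of this event-based switching between on-device and remote analysis (a minimal sketch; the threshold, feature model and `remote_analyze` stub are assumptions, not the authors' implementation):

```python
import statistics

# Hypothetical sketch of the dual detection scheme described above:
# a light-weight on-device check that escalates to a (stubbed) remote,
# heavy-weight analysis when the monitored features change significantly.

def lightweight_score(features, baseline):
    """Mean absolute deviation of current features from a baseline profile."""
    return statistics.mean(abs(f - b) for f, b in zip(features, baseline))

def remote_analyze(features):
    """Placeholder for the resource-intensive server-side algorithm."""
    print("escalating to remote heavy-weight analysis:", features)

THRESHOLD = 0.5  # assumed anomaly threshold

def on_event(features, baseline):
    score = lightweight_score(features, baseline)
    if score > THRESHOLD:          # significant change detected on-device
        remote_analyze(features)   # trigger heavy-weight remote detection
    return score

# Example: baseline profile vs. a feature vector from a monitoring client
baseline = [0.2, 0.1, 0.4]
on_event([0.9, 0.8, 0.5], baseline)
```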
Abstract:
We introduce the Network Security Simulator (NeSSi2), an open-source, discrete-event network simulator. It incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Compared to its predecessor NeSSi, it has been extended with a three-tier plugin architecture and a generic network model, shifting its focus towards a simulation framework for critical infrastructures. We demonstrate the gained adaptability through different use cases.
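For readers unfamiliar with the paradigm, a discrete-event simulator is built around a time-ordered event queue. The sketch below illustrates that general idea only; it is not NeSSi2's actual API or plugin architecture:

```python
import heapq

# Minimal discrete-event loop, illustrating the simulation paradigm only;
# NeSSi2's real plugin architecture and network model are far richer.

class Simulator:
    def __init__(self):
        self._queue = []   # entries are (time, sequence, callback, args)
        self._seq = 0      # tie-breaker so heapq never compares callbacks
        self.now = 0.0

    def schedule(self, delay, callback, *args):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback, args))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback, args = heapq.heappop(self._queue)
            callback(*args)

# Example: a packet traversing two links with different latencies
sim = Simulator()

def arrive(node):
    print(f"t={sim.now:.1f}: packet arrives at {node}")
    if node == "router":
        sim.schedule(2.5, arrive, "server")  # forward after link latency

sim.schedule(1.0, arrive, "router")
sim.run()
```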
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable, healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among building performance guidelines, daylight factors (DF) or minimum illuminance values are currently the standard; however, previous research has shown the shortcomings of these metrics. New computer software for daylighting analysis offers more advanced, climate-based daylight metrics (CBDM). Yet these tools (new metrics and simulation tools) are not well understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most commonly used by industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file formats or can be integrated into current 3D modelling software or packages. Such software needs to be able to calculate both point-in-time simulations and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Plug-in based software is being developed to address this need through third-party analysis, but some of these packages are heavily reliant on their host program. Tools that allow dynamic daylighting simulation would make it easier to calculate accurate daylighting regardless of which modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
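As a minimal sketch of the kind of raw spreadsheet processing the paper calls for, the snippet below computes daylight factors from illuminance data exported as CSV; the column names and outdoor illuminance value are assumptions for illustration:

```python
import csv

# Illustrative sketch only: compute the daylight factor (DF) for grid
# points read from a spreadsheet exported as CSV.
# DF = 100 * E_indoor / E_outdoor, with both illuminances in lux under
# an overcast sky. Column names below are assumed.

def daylight_factors(path, e_outdoor=10000.0):
    """Yield (x, y, DF%) for each sensor point in the CSV file."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            e_in = float(row["illuminance_lux"])   # assumed column name
            yield float(row["x"]), float(row["y"]), 100.0 * e_in / e_outdoor

# Usage with a hypothetical file, flagging points below a 2% DF threshold:
# for x, y, df in daylight_factors("grid.csv"):
#     if df < 2.0:
#         print(f"low daylight at ({x}, {y}): DF = {df:.1f}%")
```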
Abstract:
The book examines the correlation between intellectual property law, notably copyright, on the one hand and social and economic development on the other. The initial overview focuses on historical, legal, economic and cultural aspects. Building on that, the work investigates how intellectual property systems have to be designed in order to foster social and economic growth in developing countries, and puts forward theoretical and practical solutions that should be considered and implemented by policy makers, legal experts and the World Intellectual Property Organization (WIPO).
Abstract:
This paper presents the idea of a compendium of process technologies, i.e., a concise but comprehensive collection of techniques for process model analysis that supports research on the design, execution, and evaluation of processes. The idea originated from observations on the evolution of process-related research disciplines. Based on these observations, we derive design goals for a compendium. We then present the jBPT library, which addresses these goals by implementing common analysis techniques in an open-source codebase.
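The following sketch illustrates one common analysis technique such a compendium might collect, a basic structural check that every node of a process model lies on a path from source to sink; it is a generic illustration, not jBPT's actual API:

```python
from collections import deque

# Generic illustration of one process-model analysis technique (not
# jBPT's API): check that every node lies on a source-to-sink path.

def is_connected_workflow(nodes, edges, source, sink):
    succ, pred = {n: [] for n in nodes}, {n: [] for n in nodes}
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)

    def reachable(start, adj):
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    from_source = reachable(source, succ)   # forward reachability
    to_sink = reachable(sink, pred)         # backward reachability
    return all(n in from_source and n in to_sink for n in nodes)

# Example: a small sequential process with one dead activity "d"
nodes = {"start", "a", "d", "end"}
edges = [("start", "a"), ("a", "end"), ("d", "end")]
print(is_connected_workflow(nodes, edges, "start", "end"))  # False: "d" is unreachable
```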
Abstract:
Privacy is an important component of freedom and plays a key role in protecting fundamental human rights. It is becoming increasingly difficult to ignore the fact that without appropriate levels of privacy, a person's rights are diminished. Users want to protect their privacy, particularly in "privacy invasive" areas such as social networks. However, social network users seldom know how to protect their own privacy through online mechanisms. What is required is an emerging concept that provides users with legitimate control over their own personal information, while preserving and maintaining the advantages of engaging with online services such as social networks. This paper reviews Privacy by Design (PbD) and shows how it applies to diverse privacy areas. Such an approach will help mitigate many of the privacy issues in online information systems and can be a potential pathway for protecting users' personal information. The research has posed many questions in need of further investigation for different open-source distributed social networks. Findings from this research will lead to a novel distributed architecture that provides more transparent and accountable privacy for the users of online information systems.
Abstract:
This paper presents a mapping and navigation system for a mobile robot which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally-consistent pose graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest-path planning instead of following the nodes of the graph, as is done by most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
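A minimal sketch of one step described above, rasterizing a local point cloud into a 2D metric map and planning a shortest path on it; the grid size, obstacle height band and the BFS planner are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

# Illustrative only: project a local 3D point cloud into a 2D occupancy
# grid, then plan a shortest path with breadth-first search.

def to_grid(points, cell=0.5, size=8, z_min=0.1, z_max=2.0):
    grid = [[0] * size for _ in range(size)]
    for x, y, z in points:
        if z_min < z < z_max:                  # points at obstacle height
            i, j = int(x / cell), int(y / cell)
            if 0 <= i < size and 0 <= j < size:
                grid[i][j] = 1                 # mark cell occupied
    return grid

def shortest_path(grid, start, goal):
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                        # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + di, cur[1] + dj)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

cloud = [(2.2, 1.1, 0.8), (2.2, 1.6, 0.9), (2.2, 2.1, 1.2)]  # a small wall
print(shortest_path(to_grid(cloud), (0, 0), (7, 7)))
```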
Abstract:
In this paper, we present a monocular vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or additional sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and an up-to-scale position estimate obtained from the visual sensor. We evaluate the performance of the scale estimator, state estimator and controller by comparing against ground-truth data acquired with a motion capture system. All resources, including source code, tutorial documentation and system models, are available online.
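When the cost is a least-squares fit between the drift-free metric altitude and the up-to-scale visual estimate, the scale has a standard closed-form solution; the sketch below assumes that formulation (the paper's exact cost function may differ):

```python
# Hedged sketch: recover the metric scale s of a monocular visual estimate
# by least squares against a drift-free altitude sensor. Minimizing
#   J(s) = sum_i (h_i - s * z_i)^2
# over the scale s gives the closed form s = sum(h_i z_i) / sum(z_i^2).

def estimate_scale(altitudes, visual_z):
    num = sum(h * z for h, z in zip(altitudes, visual_z))
    den = sum(z * z for z in visual_z)
    return num / den

# Example: visual odometry reports heights in arbitrary units, while
# a barometer or sonar reports metric altitude.
h = [1.0, 1.5, 2.0, 2.4]          # metric altitude measurements
z = [0.52, 0.74, 1.01, 1.19]      # up-to-scale visual heights
s = estimate_scale(h, z)
print(f"estimated scale: {s:.3f}")  # metric position = s * visual estimate
```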
Abstract:
Organisations are constantly seeking efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, will also be illustrated. This is achieved through extending the open-source process mining framework ProM.
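As a hedged sketch of the core idea, merging cost data with event-log timestamps to report process-related costs (illustrative only; the actual work extends ProM, and the rates and log layout here are assumed):

```python
from datetime import datetime

# Illustrative sketch (not the ProM extension itself): merge per-activity
# cost rates with event-log timestamps to report process-related costs.

rates = {"Assess claim": 80.0, "Approve": 120.0}  # assumed cost per hour

# One trace of an event log: (activity, start, end)
trace = [
    ("Assess claim", datetime(2024, 1, 5, 9, 0), datetime(2024, 1, 5, 11, 30)),
    ("Approve",      datetime(2024, 1, 5, 13, 0), datetime(2024, 1, 5, 13, 45)),
]

def trace_cost(trace, rates):
    total = 0.0
    for activity, start, end in trace:
        hours = (end - start).total_seconds() / 3600.0
        cost = hours * rates.get(activity, 0.0)   # unknown activities cost 0
        print(f"{activity}: {hours:.2f} h -> {cost:.2f}")
        total += cost
    return total

print(f"trace total: {trace_cost(trace, rates):.2f}")
```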
Abstract:
In recent times, technology has advanced in such a manner that the world can now communicate by means previously thought impossible. Transnational organised crime (TOC) groups, who have exploited these new technologies as a basis for their criminal success, have not overlooked this development, growth and globalisation. Law enforcement agencies are confronted with an unremitting challenge as they endeavour to intercept, monitor and analyse these communications as a means of disrupting the activities of criminal enterprises. The challenge lies in the ability to recognise and change tactics to match an increasingly sophisticated adversary. The use of communication interception technology (CIT), such as phone taps or email interception, is a tactic that, when used appropriately, has the potential to cause serious disruption to criminal enterprises. Despite the research that exists on CIT and TOC, these two bodies of knowledge rarely intersect. This paper builds on the current literature, drawing the two together to provide a clearer picture of the use of CIT in an enforcement and intelligence capacity. It reviews the literature pertaining to TOC, the structure of criminal enterprises and the vulnerability of the communication methods used by these crime groups. Identifying the contemporary models of policing, it reviews intelligence-led policing as the emerging framework for modern policing. Finally, it assesses the literature concerning CIT, its uses within Australia, and the limitations and arguments that exist. In doing so, this paper provides practitioners with a clearer picture of the use, barriers and benefits of CIT in the fight against TOC. It helps to bridge the current gaps in modern policing theory and offers a perspective that can help drive future research.
Abstract:
In order to increase the accuracy of patient positioning for complex radiotherapy treatments, various 3D imaging techniques have been developed. MegaVoltage Cone Beam CT (MVCBCT) can utilise existing hardware to implement a 3D imaging modality that aids patient positioning. MVCBCT has been investigated using an unmodified Elekta Precise linac and an iView amorphous silicon electronic portal imaging device (EPID). Two methods of delivery and acquisition have been investigated for imaging an anthropomorphic head phantom and a quality assurance phantom. Phantom projections were successfully acquired and CT datasets reconstructed using both acquisition methods. Bone, tissue and air were clearly resolvable in both phantoms, even with low-dose (22 MU) scans. The feasibility of MVCBCT was investigated using a standard linac, an amorphous silicon EPID and a combination of a free, open-source reconstruction toolkit and custom in-house software written in Matlab. The resultant image quality has been assessed and is presented. Although bone, tissue and air were resolvable in all scans, artifacts are present and scan doses are increased compared with standard portal imaging. The feasibility of MVCBCT with an unmodified Elekta Precise linac and EPID has been considered, and possible areas for future development in artifact-correction techniques to further improve image quality have been identified.
Abstract:
Although there are many approaches for developing secure programs, they are not necessarily helpful for evaluating the security of a pre-existing program. Software metrics promise an easy way of comparing the relative security of two programs or assessing the security impact of modifications to an existing one. Most studies in this area focus on high-level source code, but this approach fails to take compiler-specific code generation into account. In this work we describe a set of object-oriented Java bytecode security metrics which are capable of assessing the security of a compiled program from the point of view of potential information flow. These metrics can be used to compare the security of programs, or to assess the effect of program modifications on security, using a tool we have developed that automatically measures the security of a given Java bytecode program in terms of the accessibility of distinguished 'classified' attributes.
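The following toy model illustrates the underlying idea only; the actual metrics operate on compiled Java bytecode, and the visibility weights here are assumptions:

```python
# Toy model of the idea (the real metrics analyse Java bytecode): score
# how accessible a 'classified' attribute is by counting the methods that
# can read it, weighted by their visibility. Weights are assumed.

VISIBILITY_WEIGHT = {"public": 1.0, "protected": 0.5, "private": 0.1}

class_model = {
    "attributes": {"secretKey": {"classified": True}},
    "methods": [
        {"name": "getKey",  "visibility": "public",  "reads": ["secretKey"]},
        {"name": "rotate",  "visibility": "private", "reads": ["secretKey"]},
        {"name": "version", "visibility": "public",  "reads": []},
    ],
}

def accessibility(model, attr):
    """Sum of visibility weights over methods that expose the attribute."""
    return sum(VISIBILITY_WEIGHT[m["visibility"]]
               for m in model["methods"] if attr in m["reads"])

for name, info in class_model["attributes"].items():
    if info["classified"]:
        print(f"{name}: accessibility score = {accessibility(class_model, name)}")
```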
Abstract:
BACKGROUND There is a growing volume of open-source 'education material' on energy efficiency now available; however, the Australian government has identified a need to increase the use of such materials in undergraduate engineering education. Furthermore, there is a reported need to rapidly equip engineering graduates with the capability to conduct energy efficiency assessments, to improve energy performance across major sectors of the economy. In January 2013, building on several years of preparatory action-research initiatives, the former Department of Industry, Innovation, Climate Change, Science, Research and Tertiary Education (DIICCSRTE) offered $600,000 to develop resources for energy-efficiency-related graduate attributes, targeting Engineers Australia college disciplines, accreditation requirements and opportunities to address such requirements. PURPOSE This paper discusses a successful $430,000 bid by a university consortium led by QUT and including RMIT, UA, UOW and VU to design and pilot several innovative, targeted open-source resources for curriculum renewal related to energy efficiency assessments in Australian engineering programs (2013-2014), including 'flat-pack', 'media-bites', 'virtual reality' and 'deep dive' case study initiatives. DESIGN/METHOD The paper draws on a literature review and lessons learned by the consortium partners in resource development over the last several years to discuss methods for selecting key graduate attributes and providing targeted resources, supporting materials and innovative delivery options to assist universities in delivering the knowledge and skills needed to develop such attributes. This includes strategic engagement with industry and key stakeholders. The paper also discusses processes for piloting, validating, peer reviewing and refining these resources using a rigorous and repeatable approach to engaging with academic and industry colleagues. RESULTS The paper provides an example of innovation in resource development through an engagement strategy that takes advantage of existing networks, initiatives and funding arrangements, while informing program accreditation requirements, to produce a cost-effective plan for rapid integration of energy efficiency within education. By the time of the conference, the stakeholder workshops will be complete, and drafting of the resources, building on findings from those workshops, will be under way. Reporting on this project 'in progress' provides a significant opportunity to share lessons learned and take on board feedback and input. CONCLUSIONS This paper provides a useful reference document for others considering significant resource development in a consortium approach, summarising benefits and challenges. It also provides a basis for documenting the second half of the project, which comprises piloting resources and producing a 'good practice guide' for energy-efficiency-related curriculum renewal.
Abstract:
Software engineers constantly deal with problems of designing, analyzing and improving process specifications, e.g., source code, service compositions or process models. Process specifications are abstractions of behavior observed, or intended to be implemented, in reality, and result from creative engineering practice. Usually, process specifications are formalized as directed graphs in which edges capture temporal relations between decisions, synchronization points and work activities. Every process specification is a compromise between two forces: on the one hand, engineers strive to work with fewer modeling constructs that conceal irrelevant details, while on the other hand, details are required to achieve the desired level of customization for envisioned process scenarios. In our research, we approach the problem of varying abstraction levels of process specifications. Formally, the developed abstraction mechanisms exploit the structure of a process specification and allow the generalization of low-level details into concepts at a higher abstraction level; the reverse procedure can be addressed as process specialization.
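As a hedged illustration of such structural abstraction (not the authors' formal mechanism), the sketch below generalizes a low-level fragment of a process graph into a single higher-level node:

```python
# Illustration only: abstract a process graph by replacing a fragment of
# low-level nodes with one higher-level node, dropping internal edges.
# The authors' formal abstraction mechanisms are richer than this.

def abstract_fragment(edges, fragment, label):
    """Replace every node in `fragment` by a single node `label`."""
    new_edges = set()
    for a, b in edges:
        a2 = label if a in fragment else a
        b2 = label if b in fragment else b
        if a2 != b2:                      # drop edges inside the fragment
            new_edges.add((a2, b2))
    return sorted(new_edges)

# Example: generalize the low-level steps of handling a claim
edges = [("start", "check id"), ("check id", "verify claim"),
         ("verify claim", "approve"), ("approve", "end")]
fragment = {"check id", "verify claim", "approve"}
print(abstract_fragment(edges, fragment, "handle claim"))
# [('handle claim', 'end'), ('start', 'handle claim')]
```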
Abstract:
This research has established a new privacy framework, privacy model and privacy architecture to create more transparent privacy for social networking users. The architecture is designed on three levels, Business, Data and Technology, based on The Open Group Architecture Framework (TOGAF®). Together, the framework and architecture provide a novel platform for investigating privacy in social networks (SNs). The approach mitigates many current SN privacy issues and leads to a more controlled form of privacy assessment. Ultimately, more privacy will encourage more connections between people across SN services.