889 results for mobile environment, peer-to-peer, PeerHood, software security, vulnerabilities
Abstract:
The increase in computational power and the emergence of new computer technologies have made local communication between personal trusted devices popular. This, in turn, has given rise to security problems concerning the user data exchanged in such communications. One of the main aspects of data security assurance is the security of the software running on mobile devices. The aim of this work was to analyze security threats to PeerHood, software intended for personal communication between mobile devices regardless of the underlying network technologies. To reach this goal, risk-based software security testing was performed. The testing showed that the project has several security vulnerabilities, so PeerHood cannot be considered secure software. The analysis made in this work is the first step towards implementing PeerHood's security mechanisms, as well as towards taking security into account in the development process of the project.
Abstract:
Social networking and social networking sites have gained popularity among internet users during the past few years. Social networks fulfill users' need to stay connected to friends and to other people interested in the same issues. Bringing social networks to the mobile environment is of growing interest to mobile device users, as it allows them to remain part of their online social community despite their mobility. This thesis covers the basics of the mobile environment, social networking, and PeerHood, and introduces a new approach to social networking in the mobile environment: dynamic group discovery and management in the PeerHood environment, based on common user interests, as sketched below. A reference implementation of a social networking application built on top of PeerHood is presented, tested, and analyzed to understand social networking in the mobile environment and the new concept of dynamic group discovery within it.
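To make the idea concrete, the following minimal Python sketch (hypothetical, and not part of the PeerHood implementation) shows one way dynamic group discovery over shared interests could work: each nearby peer advertises a set of interest tags, and a group is proposed for every interest shared by enough peers.

    # Hypothetical sketch of interest-based group discovery; the data
    # model and function names are invented, not the PeerHood API.
    from collections import defaultdict

    def discover_groups(peers, min_size=2):
        """peers: dict mapping peer id -> set of interest tags."""
        by_interest = defaultdict(set)
        for peer, interests in peers.items():
            for tag in interests:
                by_interest[tag].add(peer)
        # Keep only interests shared by at least min_size peers.
        return {tag: members for tag, members in by_interest.items()
                if len(members) >= min_size}

    if __name__ == "__main__":
        nearby = {
            "alice": {"chess", "jazz"},
            "bob": {"chess", "football"},
            "carol": {"jazz", "chess"},
        }
        # 'chess' groups all three peers, 'jazz' two; 'football' is dropped.
        print(discover_groups(nearby))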
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing; these vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) and develop a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases; the prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
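As a rough illustration of the kind of learning step involved (the reports and labels below are invented, and the thesis pipeline is not reproduced here), a text classifier such as the Naive Bayes learner named above can be trained on vulnerability reports labelled by reproducibility:

    # Illustrative sketch only: classify vulnerability reports as easily
    # reproducible (1) vs. hard to reproduce (0) from their text, using
    # TF-IDF features and Naive Bayes via scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled reports.
    reports = [
        "crash on malformed URL, happens every time",
        "heap corruption, intermittent, depends on GC timing",
        "buffer overflow triggered by a fixed test input",
        "race condition under heavy load, rarely observed",
    ]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
    clf.fit(reports, labels)
    print(clf.predict(["use-after-free, only under rare thread interleaving"]))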
Abstract:
This thesis examines methods, practices, and object-oriented design patterns that lead to smaller software, and investigates concrete means of optimizing software size on the Symbian platform. The work focuses on C++ software designed to run on mobile phones and other wireless devices. A real, end-user-oriented wireless application is presented, analyzed, and optimized; the optimization methods used and the results obtained are presented and analyzed. Based on the experience gained from implementing the example application, recommendations for wireless application development are given. A good technical architecture design was found to play a significant role. Surprisingly, C++ class inheritance was found to be the largest single factor affecting binary size on the Symbian operating system. Producing small programs requires skill and discipline, and developers' attitudes are usually the biggest obstacle: many people simply do not care about the size of the programs they write.
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is how to define secure software and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. The thesis takes the perspective of software engineers in the software design phase: code-level semantics and programming language specifics are not discussed, and organizational policy, management issues, and the software development process are also out of scope. The first two research questions were studied using a literature review, while the third was studied using a case study. The target of the case study was a Java-based email server, Apache James, whose changelog, security issue history, and source code were available. The research revealed that there is a consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance; the focus of this work is on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good: only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out the areas in which security metrics must improve if security is to be verified from the design phase.
Abstract:
The use of electronic documents is constantly growing, and the need for an ad-hoc eCertificate that manages access to private information is not merely desirable but necessary. This paper presents a protocol for the management of electronic identities (eIDs), meant as a substitute for paper-based IDs, in a mobile environment with a user-centric approach. Mobile devices were chosen because they offer mobility, personal use, and considerable computational capability. Their inherent user-centricity also allows users to manage their ID information personally and to disclose only what is required. The protocol was developed by migrating the existing eCert technologies implemented by the Learning Societies Laboratory in Southampton. By comparing this protocol with an analysis of the eID problem domain, a new solution was derived that is compatible with both systems without loss of features.
Abstract:
Electronic commerce and banking services have raised a question that is critical to business continuity: how can these services be protected against organized crime and various forms of abuse?
Abstract:
Context awareness is emerging on mobile devices. It can be used to improve the usability of a mobile device, and it is particularly important on mobile devices because of the limitations they have. This work first presents a literature review on context awareness and the mobile environment. To aid context awareness, there exists an implementation of a Context Framework for Symbian S60 devices, which makes it possible to exchange contexts inside the device between the client applications of the local Context Framework. The main contribution of this thesis is the design and implementation of an enhancement to the S60 Context Framework that makes it possible to exchange contexts across device boundaries. With the implemented Context Exchange System, context exchange depends neither on the type of the context nor on the type of the client; moreover, the clients and the contexts can reside on any interconnected device. Use of the system is also independent of the programming language: in addition to the Symbian C++ function interfaces, it can be used through XML scripts. The Meeting Sniffer application, which uses the Context Exchange System, was also developed in this work. With this application, it is possible to recognize a meeting situation and suggest a device profile change to the user.
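As an illustration only (the element and field names below are hypothetical, not the actual Context Exchange System schema), a context item could be serialized to XML on one device and parsed on another, which is the kind of language-neutral exchange an XML interface enables:

    # A minimal sketch of cross-device context exchange, assuming a
    # simple invented XML message format.
    import xml.etree.ElementTree as ET

    def serialize_context(source, ctx_type, value):
        ctx = ET.Element("context", source=source, type=ctx_type)
        ET.SubElement(ctx, "value").text = value
        return ET.tostring(ctx, encoding="unicode")

    def deserialize_context(xml_text):
        ctx = ET.fromstring(xml_text)
        return ctx.get("source"), ctx.get("type"), ctx.findtext("value")

    # A device publishes its location context; a peer device parses it.
    msg = serialize_context("device-42", "location", "meeting-room")
    print(msg)
    print(deserialize_context(msg))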
Abstract:
This diploma thesis was done for an international organization that handles the accounting of two major companies. The organization uses three different purchasing tools for entering new asset master data into the SAP R/3 system. The aim of this thesis is to find out how much changing the user interface of one of these three e-procurement programs affects overall efficiency in asset accounting. In addition, a project framework is introduced that can be used in future projects to avoid certain steps in the development process. At the moment, data must be input manually with many unnecessary mouse clicks, and data must be searched from many different sources, which slows down the process; the other organization currently has better tools than the myOrders system under investigation. The research began by exploring the main areas for improvement, after which possible defects were traced. Suggested improvements were derived from the literature on usability design and research. Meanwhile, indicative calculations of the project's benefits were made, along with an analysis of possible risks and threats. NSN IT then approved the changes it considered acceptable; the next step was to program them into the tool and test them before release to the production environment. Calculations were also made for the implemented improvements and compared with the planned ones. A framework was derived from the whole project that can be applied to other similar projects; complete calculations were not possible because of the project schedule. An important observation from the project was that efficiency is improved not only by changing the GUI but also by improving processes without any programming. Feedback from end users should also be given more weight in the development process: the end user is, after all, the one who knows best how the program should work.
Abstract:
The mobile apps market is a tremendous success, with millions of apps downloaded and used every day by users spread all around the world. For an app's developers, having the app published on one of the major app stores (e.g. the Google Play market) is just the beginning of its lifecycle. Indeed, to compete successfully with the other apps in the market, an app has to be updated frequently, adding new attractive features and fixing existing bugs. Clearly, any developer interested in increasing the success of her app should try to implement the features her users desire and to fix the bugs that affect the experience of many of them. A precious source of information about users' opinions and wishes is the reviews users leave on the store from which they downloaded the app. To exploit this information, however, the developer would have to read each review manually and check whether it contains anything useful (e.g. suggestions for new features), which is not feasible when an app receives hundreds of reviews per day, as happens for the most popular apps on the market. In this work, our aim is to support mobile app developers with a novel approach that exploits data mining, natural language processing, machine learning, and clustering techniques to classify user reviews on the basis of the information they contain (e.g. useless, suggestion for new features, bug report). The approach has been empirically evaluated and made available in a web-based tool publicly available to all app developers. The results show that the tool: (i) correctly categorises user reviews on the basis of their content (e.g. isolating those reporting bugs) with 78% accuracy, (ii) produces clusters of reviews (e.g. grouping together reviews that report exactly the same bug) that are meaningful from a developer's point of view, and (iii) is considered useful by a software company working in the mobile app development market.
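The following Python sketch is purely illustrative of the two stages described above, not the authors' tool: a classifier separates reviews by content, and the reviews labelled as bug reports are then clustered so that duplicates group together. The reviews and labels are invented.

    # Illustrative two-stage sketch: classify reviews, then cluster bugs.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled reviews for training the classifier.
    reviews = [
        "app crashes when I open the camera",
        "please add a dark mode",
        "crashes on startup since the last update",
        "great app, love it",
    ]
    labels = ["bug", "feature", "bug", "useless"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(reviews, labels)
    print(clf.predict(["the app crashes whenever I rotate the screen"]))

    # Cluster the reviews labelled as bugs; duplicate complaints should
    # end up in the same cluster.
    bugs = [r for r, l in zip(reviews, labels) if l == "bug"]
    features = TfidfVectorizer().fit_transform(bugs)
    print(KMeans(n_clusters=2, n_init=10).fit_predict(features))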
Abstract:
Effective comprehension of complex software systems requires understanding both the individual documents that represent the software and the complex relationships that exist within and between those documents. Relationships of all kinds play a vital role in a software engineer's comprehension of, and navigation within and between, software documents. User-determined relationships have the additional role of enabling the engineer to create and maintain relational documentation that cannot be generated by tools or derived from other relationships. We argue that for a software development environment to effectively support the understanding of complex software systems, relational navigation must be supported at both the document-focused (intra-document) and relation-focused (inter-document) levels. The need for a relation-focused approach is highlighted by an evaluation of an existing document-focused relational interface. We conclude with the requirements, centred on the user's perspective when interacting with a collection of related documents, for a software development environment that effectively supports relation-focused navigation and the understanding of the documents and relationships that define a complex software system.
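A minimal sketch, with hypothetical document names and relation types, of the distinction the authors draw: once relationships are stored explicitly, the same data supports a document-focused view (every relation touching one document) and a relation-focused view (every document pair connected by one relation type).

    # Hypothetical relationship store; names and relation kinds invented.
    relations = [
        ("Parser.java", "Grammar.spec", "implements"),
        ("Parser.java", "ParserTest.java", "tested-by"),
        ("DesignDoc.md", "Parser.java", "describes"),
    ]

    def document_focused(doc):
        """All relations in which a given document participates."""
        return [r for r in relations if doc in (r[0], r[1])]

    def relation_focused(kind):
        """All document pairs connected by a given relation type."""
        return [(src, dst) for src, dst, k in relations if k == kind]

    print(document_focused("Parser.java"))
    print(relation_focused("describes"))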
Abstract:
Using current software engineering technology, the robustness required of safety-critical software cannot be assured. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted that avoid introducing faults (fault avoidance); then testing should be carried out to identify any faults which persist (error removal); finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). Verifying the correctness of the system design specification and analysing the performance of the model are the basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error-prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communications; this may lead to deadlock due to communication failure. Proper systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri-nets can be used in analysis and simulation to determine the dynamic behaviour of the software and to identify structures which may be prone to deadlock, so that they may be eliminated from the design before the program is ever run. The design tool consists of two parts. One takes an input program and translates it into a mathematical model (a Petri-net), which is used for modelling and analysis of the concurrent software. The second part is the Petri-net simulator, which takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the software tool has been applied to a number of Occam programs; two examples show how the tool works in the early design phase for fault prevention before the program is ever run.
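A minimal Python sketch in the spirit of the tool's second part (greatly simplified, and without the Occam-to-Petri-net translation, which is not modelled here): markings are explored exhaustively, and any reachable marking that enables no transition is reported as dead. The circular-wait example is invented.

    def enabled(marking, transitions):
        """Transitions whose input places all hold enough tokens."""
        return [t for t, (pre, _post) in transitions.items()
                if all(marking.get(p, 0) >= n for p, n in pre.items())]

    def fire(marking, pre, post):
        """Fire one transition: consume input tokens, produce outputs."""
        m = dict(marking)
        for p, n in pre.items():
            m[p] = m.get(p, 0) - n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m

    def dead_markings(initial, transitions):
        """Explore the reachability graph; return markings enabling nothing."""
        seen, stack, dead = set(), [initial], []
        while stack:
            m = stack.pop()
            key = tuple(sorted(m.items()))
            if key in seen:
                continue
            seen.add(key)
            ts = enabled(m, transitions)
            if not ts:
                # Dead markings include both real deadlocks and normal
                # termination in this toy model.
                dead.append(m)
            for t in ts:
                pre, post = transitions[t]
                stack.append(fire(m, pre, post))
        return dead

    # Two processes each grab one resource, then need the other's; the
    # interleaving where both hold one resource is a circular-wait deadlock.
    transitions = {
        "p1_takes_r1": ({"r1": 1, "p1_idle": 1}, {"p1_has_r1": 1}),
        "p2_takes_r2": ({"r2": 1, "p2_idle": 1}, {"p2_has_r2": 1}),
        "p1_takes_r2": ({"r2": 1, "p1_has_r1": 1},
                        {"p1_done": 1, "r1": 1, "r2": 1}),
        "p2_takes_r1": ({"r1": 1, "p2_has_r2": 1},
                        {"p2_done": 1, "r1": 1, "r2": 1}),
    }
    print(dead_markings({"r1": 1, "r2": 1, "p1_idle": 1, "p2_idle": 1},
                        transitions))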
Abstract:
The research is concerned with the terminological problems that computer users experience when they try to formulate their knowledge needs and attempt to access information contained in computer manuals or online help systems while building up their knowledge. This is the recognised but unresolved problem of communication between the specialist and the layman. The initial hypothesis was that computer users, through their knowledge of language, have some prior knowledge of the subdomain of computing they are trying to come to terms with, and that language can be a facilitating mechanism, or an obstacle, in the development of that knowledge. Related to this is the supposition that users have a conceptual apparatus based on both theoretical knowledge and experience of the world, and of several domains of special reference related to the environment in which they operate. The theoretical argument was developed by exploring the relationship between knowledge and language, and considering the efficacy of terms as agents of special subject knowledge representation. Having charted in a systematic way the territory of knowledge sources and types, we were able to establish that there are many aspects of knowledge which cannot be represented by terms. This submission is important, as it leads to the realisation that significant elements of knowledge are being disregarded in retrieval systems because they are normally expressed by language elements which do not enjoy the status of terms. Furthermore, we introduced the notion of 'linguistic ease of retrieval' as a challenge to more conventional thinking which focuses on retrieval results.
Abstract:
Objective: To evaluate the prevalence of chronic autoimmune thyroiditis (CAT) and iodine-induced hypothyroidism, hyperthyroidism (overt and subclinical), and goiter in a population exposed to excessive iodine intake for 5 years (table salt iodine concentrations: 40-100 mg/kg salt). Design: This was a population-based, cross-sectional study with 1085 participants randomly selected from a metropolitan area in Sao Paulo, Brazil, and conducted during the first semester of 2004. Methods: Thyroid ultrasound examination was performed on all participants, and samples of urine and blood were collected from each subject. Serum levels of thyroid-stimulating hormone, free thyroxine, and anti-thyroid peroxidase (TPO) antibodies, urinary iodine concentration, thyroid volume, and thyroid echogenicity were evaluated. We also analyzed table salt iodine concentrations. Results: At the time the study was conducted, table salt iodine concentrations were within the new official limits (20-60 mg/kg salt). Nevertheless, in 45.6% of the participants urinary iodine excretion was excessive (above 300 µg/l), and in 14.1% it was higher than 400 µg/l. The prevalence of CAT (including atrophic thyroiditis) was 16.9% (183/1085); women were more affected than men (21.5 vs 9.1%, respectively, P=0.02). Hypothyroidism was detected in 8.0% (87/1085) of the population with CAT. Hyperthyroidism was diagnosed in 3.3% of the individuals (36/1085), and goiter was identified in 3.1% (34/1085). Conclusions: Five years of excessive iodine intake by the Brazilian population may have increased the prevalence of CAT and hypothyroidism in subjects genetically predisposed to thyroid autoimmune diseases. Appropriate screening for early detection of thyroid dysfunction may be considered during excessive nutritional iodine intake.