845 results for Secure Authentication for Broadcast (DNP3-SAB)
Abstract:
This dissertation examines the compliance and performance of a large sample of faith-based (religious) ethical funds - the Shari'ah-compliant equity funds (SEFs), which may be viewed as a form of ethical investing. SEFs screen their investments for compliance with Islamic law, under which riba (conventional interest expense), maysir (gambling), gharar (excessive uncertainty), and non-halal (non-ethical) products are prohibited. Using a set of stringent Shari'ah screens similar to those of MSCI Islamic, we first examine the extent to which SEFs comply with Shari'ah law. Results show that only about 27% of the equities held by SEFs are Shari'ah-compliant. While most of the fund holdings pass the business screens, only about 42% pass the total-debt-to-total-assets ratio screen. This finding suggests that, in order to avoid a significant reduction in the investment opportunity set, Shari'ah principles are compromised, with SEFs adopting lax screening rules so as to achieve acceptable financial performance. While younger funds, funds that charge higher fees, and funds domiciled in predominantly Muslim countries are more Shari'ah-compliant, we find little evidence of a positive relationship between fund disclosure of the Shari'ah compliance framework and Shari'ah compliance. Clearly, Shari'ah compliance remains a major challenge for fund managers, and SEF investors should be aware of Shari'ah-compliance risk, since fund managers do not always fulfil their fiduciary obligations as promised in their prospectuses. Employing a matched-firm approach for a survivorship-free sample of 387 SEFs, we then examine an issue that has been heavily debated in the literature: does ethical screening reduce investment performance? Results show that it does, but only by an average of 0.04% per month when benchmarked against matched conventional funds - a relatively small price to pay for religious faith.
Cross-sectional regressions show an inverse relationship between Shari'ah compliance and fund performance: each one-percentage-point increase in total compliance decreases fund performance by 0.01% per month. However, compliance fails to explain performance differences between SEFs and matched funds. Although SEFs do not generally perform better during crisis periods, further analysis shows evidence of better performance relative to conventional funds only during the recent Global Financial Crisis; the latter is consistent with popular media claims.
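The two-stage screening described above (business-activity screens followed by financial-ratio screens) can be sketched as follows. The prohibited-activity list and the one-third debt-to-assets cap are illustrative assumptions in the spirit of the MSCI Islamic methodology, not the exact screens used in the dissertation.

```python
# Illustrative Shari'ah equity screen: a holding must pass both a business
# screen and a financial-ratio screen. Thresholds and the activity list are
# assumptions for illustration only.

PROHIBITED_ACTIVITIES = {"conventional_banking", "gambling", "alcohol", "pork"}
DEBT_TO_ASSETS_CAP = 1 / 3  # assumed cap on total debt / total assets

def passes_business_screen(activities):
    """Fail if the firm derives revenue from any prohibited activity."""
    return not (set(activities) & PROHIBITED_ACTIVITIES)

def passes_financial_screen(total_debt, total_assets):
    """Fail if total debt exceeds the assumed cap relative to total assets."""
    return total_assets > 0 and total_debt / total_assets <= DEBT_TO_ASSETS_CAP

def is_shariah_compliant(activities, total_debt, total_assets):
    return (passes_business_screen(activities)
            and passes_financial_screen(total_debt, total_assets))
```

Note that, consistent with the findings above, a holding can pass the business screen yet still fail overall on the debt ratio alone.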
Abstract:
Having IT-related capabilities is not enough to secure value from the IT resources and survive in today’s competitive environment. IT resources evolve dynamically and organisations must sustain their existing capabilities to continue to leverage value from their IT resources. Organisations’ IT-related management capabilities are an important source of their competitive advantage. We suggest that organisations can sustain these capabilities through appropriate considerations of resources at the technology-use level. This study suggests that an appropriate organisational design relating to decision rights and work environment, and a congruent reward system can create a dynamic IT-usage environment. This environment will be a vital source of knowledge that could help organisations to sustain their IT-related management capabilities. Analysis of data collected from a field survey demonstrates that this dynamic IT-usage environment, a result of the synergy between complementary factors, helps organisations to sustain their IT-related management capabilities. This study adds an important dimension to understanding why some organisations continue to perform better with their IT resources than others. For practice, this study suggests that organisations need to consider a comprehensive approach to what constitutes their valuable resources.
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or on issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system at an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
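One design-level metric in the spirit described above can be sketched as the proportion of "classified" (high-security) attributes that are accessible outside their declaring class, so that a lower value indicates better encapsulation of sensitive data. The metric definition and names here are illustrative assumptions, not the dissertation's exact formulation.

```python
# Sketch of a simple encapsulation-based security metric: the fraction of
# classified (high-security) attributes that are NOT private. 0.0 is best
# (no classified data directly exposed); 1.0 is worst. Illustrative only.

def classified_attribute_exposure(classes):
    """classes: mapping class name -> list of (attr_name, is_classified,
    is_private) tuples describing a design's attributes."""
    classified = exposed = 0
    for attrs in classes.values():
        for _name, is_classified, is_private in attrs:
            if is_classified:
                classified += 1
                if not is_private:
                    exposed += 1
    return exposed / classified if classified else 0.0
```

Because the metric only needs attribute declarations and visibility modifiers, it can be computed from UML design artifacts before any code is written, which matches the early-detection goal stated above.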
Abstract:
Having IT-related capabilities is not enough to secure value from IT resources and survive in today’s competitive environment. IT resources evolve dynamically and firms must sustain their existing capabilities to continue to leverage value from their IT resources. Firms’ human resources are an important IT-related capability, and an important source of their competitive advantage. Using a field survey, this study demonstrates that a dynamic end-user environment, a result of a coordinated change in complementary factors, can help sustain firms’ IT-related management capabilities. These factors include an appropriate organizational design relating to decision rights and work environment, and a congruent reward system. This study adds an important dimension to understanding why some firms continue to perform better with their IT resources than others. For practice, this study suggests that a comprehensive approach to what constitutes valuable organizational resources is necessary.
Abstract:
Research has established that firms' IT-related capabilities at a point in time explain IT-related performance differences across firms. IT resources, however, are dynamic and evolve at an exponential rate. This means we need to understand how to sustain firms' existing capabilities to leverage opportunities offered by new IT resources. First, we suggest a higher-level resource that can sustain firms' existing IT-related capabilities. Second, we report on the development of a valid and reliable instrument for measuring this higher-level resource in four stages, including expert feedback and a field test. The validated instrument would be useful in extending IT business value studies to investigate how firms can sustain their IT-related capabilities. This effort will provide a deeper understanding of how firms can secure sustainable IT-related business value from their acquired IT resources.
Abstract:
Studies continue to report ancient DNA sequences and viable microbial cells that are many millions of years old. In this paper we evaluate some of the most extravagant claims of geologically ancient DNA. We conclude that although exciting, the reports suffer from inadequate experimental setup and insufficient authentication of results. Consequently, it remains doubtful whether amplifiable DNA sequences and viable bacteria can survive over geological timescales. To enhance the credibility of future studies and assist in discarding false-positive results, we propose a rigorous set of authentication criteria for work with geologically ancient DNA.
Abstract:
Work integrated learning (WIL) or professional practice units are recognised as providing learning experiences that help students make successful transitions to professional practice. These units require students to engage in learning in the workplace; to reflect on this learning; and to integrate it with learning at university. However, an analysis of a recent cohort of property economics students at a large urban university provides evidence that there is great variation in the work based learning experiences undertaken, and that this impacts on students’ capacity to respond to assessment tasks which involve critiquing these experiences in the form of reflective reports. This paper highlights the need to recognise the diversity of work based experiences, the impact this has on learning outcomes, and to find more effective and equitable ways of measuring these outcomes. The paper briefly discusses assessing learning outcomes in WIL and then describes the model of WIL in the Faculty of Built Environment and Engineering at the Queensland University of Technology (QUT). The paper elaborates on the diversity of students’ experiences and backgrounds, including variations in the length of work experience, placement opportunities and conditions of employment. For example, the analysis shows that students with limited work experience often have difficulty critiquing this work experience and producing high level reflective reports. On the other hand, students with extensive, discipline-relevant work experience can be frustrated by assessment requirements that do not take their experience into account. Added to this, the Global Financial Crisis (GFC) has restricted both part time and full time placement opportunities for some students.
These factors affect students’ capacity to a) secure a relevant work experience, b) reflect critically on the work experiences and c) appreciate the impact the overall experience can have on their learning outcomes and future professional opportunities. Our investigation highlights some of the challenges faced in implementing effective and equitable approaches across diverse student cohorts. We suggest that increased flexibility in assessment requirements and increased feedback from industry may help address these challenges.
Abstract:
In the medical and healthcare arena, patients’ data is not just their own personal history but also a valuable large dataset for finding solutions for diseases. While electronic medical records are becoming popular and are used in healthcare workplaces like hospitals, as well as insurance companies, and by major stakeholders such as physicians and their patients, the accessibility of such information should be dealt with in a way that preserves privacy and security. Thus, finding the best way to keep the data secure has become an important issue in the area of database security. Sensitive medical data should be encrypted in databases. There are many encryption/decryption techniques and algorithms aimed at preserving privacy and security, and their performance is an important factor when medical data is managed in databases. Another important factor is that the stakeholders should identify more cost-effective ways to reduce the total cost of ownership. As an alternative, DAS (Data as Service) is a popular outsourcing model that satisfies this cost-effectiveness, but it requires that the encryption/decryption modules be handled by trustworthy stakeholders. This research project focuses on query response times in a DAS model (AES-DAS) and compares the outsourcing model with an in-house model which incorporates Microsoft's built-in encryption scheme in SQL Server. The project includes building a prototype of medical database schemas, and two stages of simulation were carried out. The first stage uses 6 databases to measure the performance of plain-text storage, Microsoft built-in encryption, and AES-DAS (Data as Service). In particular, AES-DAS incorporates implementations of symmetric key encryption, namely AES (Advanced Encryption Standard), and a bucket indexing processor using a Bloom filter.
The results are categorised by character type, numeric type, range queries, range queries using the bucket index, and aggregate queries. The second stage is a scalability test from 5K to 2560K records. The main result of these simulations is that, as an outsourcing model, AES-DAS using the bucket index is around 3.32 times faster than a plain AES-DAS for databases with 70 partitions and 10K records. Retrieving numeric-typed data takes less time than character-typed data in AES-DAS. The aggregate query response time in AES-DAS is not as consistent as that in the MS built-in encryption scheme. The scalability test shows that once the DBMS reaches a certain threshold, query response times rapidly become slower. However, further investigation is needed to build on these simulations, produce other outcomes, and construct a secure EMR (Electronic Medical Record) more efficiently.
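The bucket-indexing pattern behind AES-DAS can be sketched as follows: the client stores each value as (plaintext bucket id, ciphertext), the server filters a range query down to whole buckets, and the client decrypts the candidates and discards false positives. To keep the sketch self-contained, a hash-based XOR keystream stands in for AES, and the bucket width is an arbitrary assumption.

```python
# Sketch of bucket-indexed range queries over encrypted values (the DAS
# pattern described above). The XOR "cipher" is a stand-in for AES purely
# to avoid non-stdlib dependencies; bucket boundaries are illustrative.
import hashlib

KEY = b"demo-key"

def _keystream(n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(KEY + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(value: int) -> bytes:   # stand-in for AES encryption
    raw = value.to_bytes(8, "big")
    return bytes(a ^ b for a, b in zip(raw, _keystream(8)))

def decrypt(ct: bytes) -> int:
    return int.from_bytes(bytes(a ^ b for a, b in zip(ct, _keystream(8))), "big")

def bucket_of(value, width=10):     # coarse index stored in plaintext
    return value // width

# The "server" stores only (bucket_id, ciphertext) pairs.
server = [(bucket_of(v), encrypt(v)) for v in [3, 12, 27, 41, 58]]

def range_query(lo, hi, width=10):
    wanted = set(range(bucket_of(lo, width), bucket_of(hi, width) + 1))
    candidates = [ct for b, ct in server if b in wanted]       # server side
    return sorted(v for v in map(decrypt, candidates) if lo <= v <= hi)  # client side
```

The speed-up reported above comes from the server-side bucket filter: only ciphertexts in the matching buckets travel to the client for decryption, instead of the whole table.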
Abstract:
A deeper understanding of two aspects of the use of IT resources in organisations is important to ensure sustainable investment in these resources. The first is how to leverage IT resources to attain their maximum value. We discussed this aspect in part 1 of this series, suggesting a complementary approach as a first stage of IT business value creation, and a dynamic capabilities approach to secure sustainable IT-related business value from IT resources. The second important aspect is where to evaluate IT-related business value in organisations' value chains. This understanding is important for organisations to ensure appropriate accountability for the investment in, and management of, IT resources. We address this issue in this second part of the two-part series.
Abstract:
A Cooperative Collision Warning System (CCWS) is an active safety technology for road vehicles that can potentially reduce traffic accidents. It provides a driver with situational awareness and early warnings of any possible collisions through an on-board unit. CCWS is still under active research, and one of the important technical problems is safety message dissemination. Safety messages are disseminated in a high-speed mobile environment using wireless communication technology such as Dedicated Short Range Communication (DSRC). The wireless communication in CCWS has a limited bandwidth and can become unreliable when used inefficiently, particularly given the dynamic nature of road traffic conditions. Unreliable communication may significantly reduce the performance of CCWS in preventing collisions. There are two types of safety messages: Routine Safety Messages (RSMs) and Event Safety Messages (ESMs). An RSM contains the up-to-date state of a vehicle, and it must be disseminated repeatedly to its neighbouring vehicles. An ESM is a warning message that must be sent to all the endangered vehicles. Existing RSM and ESM dissemination schemes are inefficient, unscalable, and unable to give priority to the vehicles in the most danger. Thus, this study investigates more efficient and scalable RSM and ESM dissemination schemes that can make use of the context information generated from a particular traffic scenario. Therefore, this study tackles three technical research problems: vehicular traffic scenario modelling and context information generation, context-aware RSM dissemination, and context-aware ESM dissemination. The most relevant context information in CCWS is the information about possible collisions among vehicles given a current vehicular traffic situation. To generate the context information, this study investigates techniques to model interactions among multiple vehicles based on their up-to-date motion states obtained via RSMs.
To date, there is no existing model that can represent interactions among multiple vehicles in a specific region and at a particular time. The major outcome from the first problem is a new interaction graph model that can be used to easily identify the endangered vehicles and their danger severity. By identifying the endangered vehicles, RSM and ESM dissemination can be optimised while improving safety at the same time. The new model enables the development of context-aware RSM and ESM dissemination schemes. To disseminate RSMs efficiently, this study investigates a context-aware dissemination scheme that can optimise the RSM dissemination rate to improve safety at various vehicle densities. The major outcome from the second problem is a context-aware RSM dissemination protocol. The context-aware protocol can adaptively adjust the dissemination rate based on an estimated channel load and the danger severity of vehicle interactions given by the interaction graph model. Unlike existing RSM dissemination schemes, the proposed adaptive scheme can reduce channel congestion and improve safety by prioritising vehicles that are most likely to crash with other vehicles. The proposed RSM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed RSM protocol outperforms existing protocols in terms of efficiency, scalability and safety. To disseminate ESMs efficiently, this study investigates a context-aware ESM dissemination scheme that can reduce unnecessary transmissions and deliver ESMs to endangered vehicles as fast as possible. The major outcome from the third problem is a context-aware ESM dissemination protocol that uses a multicast routing strategy. Existing ESM protocols use broadcast routing, which is not efficient because ESMs may be sent to a large number of vehicles in the area. Using multicast routing improves efficiency because ESMs are sent only to the endangered vehicles.
The endangered vehicles can be identified using the interaction graph model. The proposed ESM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed ESM protocol can prevent potential accidents from occurring better than existing ESM protocols. The context model and the RSM and ESM dissemination protocols can be implemented in any CCWS development to improve the communication and safety performance of CCWS. In effect, the outcomes contribute to the realisation of CCWS that will ultimately improve road safety and save lives.
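The adaptive rate control described above can be sketched as a simple policy: each vehicle backs its beacon rate off as estimated channel load rises, while vehicles with high danger severity (taken from the interaction graph) are allowed to claim the rate back. The rate bounds and the linear blending are illustrative assumptions, not the thesis's actual protocol parameters.

```python
# Sketch of context-aware RSM rate adaptation: beacon rate decreases with
# channel load and increases with danger severity. Constants are assumed.

MIN_RATE_HZ, MAX_RATE_HZ = 1.0, 10.0

def rsm_rate(channel_load, danger_severity):
    """channel_load and danger_severity in [0, 1]; returns beacons/second."""
    base = MIN_RATE_HZ + (MAX_RATE_HZ - MIN_RATE_HZ) * (1.0 - channel_load)
    # Endangered vehicles may recover up to the maximum rate despite load.
    rate = base + (MAX_RATE_HZ - base) * danger_severity
    return max(MIN_RATE_HZ, min(MAX_RATE_HZ, rate))
```

This captures the key property claimed above: on a congested channel, low-risk vehicles throttle down (reducing congestion) while the vehicles most likely to crash keep beaconing frequently (preserving safety).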
Abstract:
The globalized nature of modern society has generated a number of pressures that impact internationally on countries’ policies and practices of science education. Among these pressures are key issues of health and environment confronting global science, global economic control through multinational capitalism, comparative and competitive international testing of student science achievement, and the desire for a more humane and secure international society. These are not all one-way pressures, and there is evidence both of more conformity in the intentions and practices of science education and of a greater appreciation of how cultural differences and the needs of students as future citizens can be met. Hence, while a case for economic and competitive subservience of science education can be made, the evidence for such narrowing is countered by new initiatives that seek to broaden its vision and practices. The research community of science education has certainly widened internationally and this generates many healthy exchanges, although cultural styles of education other than Western ones are still insufficiently recognized. The dominance of the English language within these research exchanges is, however, causing as many problems as it solves. Science education, like education as a whole, is a strongly cultural phenomenon, and this provides a healthy and robust buffer to the more negative effects of globalization.
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, Global Navigation Satellite Systems (GNSS) based vehicle positioning systems have to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involves evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway, with speeds exceeding 80 km/h) experiments.
RTK solutions achieved an RMS precision of 0.09 to 0.2 meters in static and 0.2 to 0.3 meters in kinematic tests, while PPP achieved 0.5 to 1.5 meters in static and 1 to 1.8 meters in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis has shown that mass-market-grade receivers provide good solution continuity, although the overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare network throughput. The results have shown that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission are 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remains at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss does not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests have also shown that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriately setting the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meters. The results showed the position accuracy can still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
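The "Age of Differential" behaviour described above can be sketched as a simple gate on the rover side: the last received correction keeps being applied through short network outages, and the solution is only flagged as degraded once the correction is older than a configured limit. The 20-second limit mirrors the longest interval the tests found acceptable, but treat it and the labels as illustrative assumptions.

```python
# Sketch of Age-of-Differential gating on a GNSS rover: corrections remain
# usable until they exceed a configured age limit. The 20 s limit is an
# illustrative assumption based on the interval results reported above.

MAX_CORRECTION_AGE_S = 20.0

def solution_quality(correction_age_s):
    """Classify the positioning solution by the age of the last correction."""
    if correction_age_s <= MAX_CORRECTION_AGE_S:
        return "fixed"      # corrections fresh enough for lane-level accuracy
    return "degraded"       # corrections stale; accuracy no longer assured
```

In this scheme a few seconds of blocked 3G connectivity changes nothing, because the stored correction is still well inside the age limit; only a sustained outage degrades the solution.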
Abstract:
In a commercial environment, it is advantageous to know how long it takes customers to move between different regions, how long they spend in each region, and where they are likely to go as they move from one location to another. Presently, these measures can only be determined manually, or through the use of hardware tags (i.e. RFID). Soft biometrics are characteristics that can be used to describe, but not uniquely identify, an individual. They include traits such as height, weight, gender, hair, skin and clothing colour. Unlike traditional biometrics, soft biometrics can be acquired by surveillance cameras at range without any user cooperation. While these traits cannot provide robust authentication, they can be used to provide identification at long range, and to aid in object tracking and detection in disjoint camera networks. In this chapter we propose using colour, height and luggage soft biometrics to determine operational statistics relating to how people move through a space. A novel average soft biometric is used to locate people who look distinct, and these people are then detected at various locations within a disjoint camera network to gradually obtain operational statistics.
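The "average soft biometric" idea above can be sketched by describing each person with a small feature vector (for example, height plus normalised colour components) and ranking people by their distance from the population mean: those furthest from average are the most distinct and hence easiest to re-detect across disjoint cameras. The feature choice and distance measure are illustrative assumptions.

```python
# Sketch of locating distinct-looking people via an average soft biometric:
# rank people by Euclidean distance from the mean feature vector.
# Features (e.g. [height_m, colour_component]) are illustrative.

def most_distinct(features):
    """features: mapping person id -> feature vector (all equal length).
    Returns ids ordered from most to least distinct."""
    n = len(features)
    dims = len(next(iter(features.values())))
    mean = [sum(v[d] for v in features.values()) / n for d in range(dims)]
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, mean)) ** 0.5
    return sorted(features, key=lambda pid: dist(features[pid]), reverse=True)
```

Once the most distinct individuals are chosen, re-detecting them at each camera and timestamping the sightings yields the transit-time and dwell-time statistics the chapter targets.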
Abstract:
Information and communication technology (ICT) systems are almost ubiquitous in the modern world. It is hard to identify any industry, or for that matter any part of society, that is not in some way dependent on these systems and their continued secure operation. Therefore the security of information infrastructures, both at an organisational and a societal level, is of critical importance. Information security risk assessment is an essential part of ensuring that these systems are appropriately protected and positioned to deal with a rapidly changing threat environment. The complexity of these systems and their inter-dependencies, however, introduces a similar complexity into the information security risk assessment task. This complexity suggests that information security risk assessment cannot, optimally, be undertaken manually. Information security risk assessment for individual components of the information infrastructure can be aided by the use of a software tool, a type of simulation, which concentrates on modelling failure rather than normal operation. Avoiding the modelling of the operational system again reduces the complexity of the assessment task. The use of such a tool provides the opportunity to reuse information in many different ways by developing a repository of relevant information to aid in risk assessment and management as well as governance and compliance activities. Widespread use of such a tool allows the risk models developed for individual information infrastructure components to be connected in order to develop a model of information security exposures across the entire information infrastructure. In this thesis, conceptual and practical aspects of risk and its underlying epistemology are analysed to produce a model suitable for application to information security risk assessment.
Based on this work, prototype software has been developed to explore these concepts for information security risk assessment. Initial work has been carried out to investigate the use of this software for information security compliance and governance activities. Finally, an initial concept for extending the use of this approach across an information infrastructure is presented.
Abstract:
With the continued development of renewable energy generation technologies and increasing pressure to combat the global effects of greenhouse warming, plug-in hybrid electric vehicles (PHEVs) have received worldwide attention, finding applications in North America and Europe. When a large number of PHEVs are introduced into a power system, there will be extensive impacts on power system planning and operation, as well as on electricity market development. It is therefore necessary to properly control PHEV charging and discharging behaviors. Given this background, a new unit commitment model and its solution method that take into account optimal PHEV charging and discharging controls are presented in this paper. A 10-unit, 24-hour unit commitment (UC) problem is employed to demonstrate the feasibility and efficiency of the developed method, and the impacts of wide application of PHEVs on the operating costs and emissions of the power system are studied. Case studies are also carried out to investigate the impacts of different PHEV penetration levels and different PHEV charging modes on the results of the UC problem. A 100-unit system is employed for further analysis of the impacts of PHEVs on the UC problem in a larger system application. Simulation results demonstrate that the employment of optimized PHEV charging and discharging modes is very helpful for smoothing the load curve profile and enhancing the ability of the power system to accommodate more PHEVs. Furthermore, optimal Vehicle to Grid (V2G) discharging control provides economic and efficient backup and spinning reserves for the secure and economic operation of the power system.
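The load-smoothing effect of controlled PHEV charging claimed above can be illustrated with a greedy "valley-filling" scheduler that places each unit of charging energy into the hour with the lowest current total load. This is a toy stand-in for the paper's unit commitment optimization; the hourly load figures in the test are arbitrary illustrative numbers.

```python
# Sketch of valley-filling PHEV charging: greedily assign each unit of
# charging energy to the hour with the lowest combined load, which flattens
# the load curve. Illustrative stand-in for the paper's UC optimization.

def valley_fill(base_load, energy_units):
    """base_load: list of hourly loads; energy_units: integer number of
    charging units, each raising one hour's load by 1.
    Returns the combined (base + charging) load curve."""
    load = list(base_load)
    for _ in range(energy_units):
        h = min(range(len(load)), key=lambda i: load[i])  # deepest valley
        load[h] += 1
    return load
```

Run on an uneven curve, the scheduler raises the off-peak hours first, so the peak-to-valley gap of the combined curve never grows, which is the smoothing behaviour the simulation results describe.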