944 results for Requirements engineering


Relevance:

20.00%

Publisher:

Abstract:

Vehicular accidents are one of the deadliest safety hazards and accordingly an immense concern for individuals and governments. Although a wide range of active autonomous safety systems, such as advanced driving assistance and lane-keeping support, has been introduced to facilitate a safer driving experience, these stand-alone systems have limited capabilities in providing safety. Therefore, cooperative vehicular systems were proposed to fulfill more safety requirements. Most cooperative vehicle-to-vehicle safety applications require relative positioning accuracy at the decimeter level with an update rate of at least 10 Hz. These requirements cannot be met via direct navigation or differential positioning techniques. This paper studies a cooperative vehicle platform that aims to facilitate real-time relative positioning (RRP) among adjacent vehicles. The developed system is capable of exchanging both GPS position solutions and raw observations in RTCM-104 format over vehicular dedicated short range communication (DSRC) links. The real-time kinematic (RTK) positioning technique is integrated into the system to enable RRP to serve as an embedded real-time warning system. The 5.9 GHz DSRC technology is adopted as the communication channel between road-side units (RSUs) and on-board units (OBUs), both to distribute GPS correction data received from a nearby reference station over the Internet via cellular technologies, by means of RSUs, and to exchange the vehicles' real-time GPS raw observation data. Ultimately, each receiving vehicle calculates the relative positions of its neighbors to build an RRP map. A series of real-world data collection experiments was conducted to explore the synergies of the DSRC and positioning systems. The results demonstrate a significant enhancement in the precision and availability of relative positioning for mobile vehicles.
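The core of the relative-positioning idea, stripped of the RTK carrier-phase machinery, is converting a neighbour's broadcast GPS solution into a local east-north offset from the receiving vehicle. A minimal sketch in plain Python (illustrative only; the paper's system processes raw observations with RTK, not just position solutions):

```python
import math

def enu_offset(ref, target):
    """East-North offset (metres) of `target` relative to `ref`.

    ref, target: (latitude, longitude) tuples in decimal degrees.
    Uses a local equirectangular approximation, adequate for the short
    inter-vehicle baselines (tens of metres) involved here.
    """
    R = 6378137.0  # WGS-84 equatorial radius in metres
    lat0 = math.radians(ref[0])
    east = R * math.radians(target[1] - ref[1]) * math.cos(lat0)
    north = R * math.radians(target[0] - ref[0])
    return east, north
```

For the decimeter-level accuracy quoted above, RTK processing of the exchanged raw observations would replace this coarse differencing of stand-alone position solutions.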

Relevance:

20.00%

Publisher:

Abstract:

eHealth systems promise enviable benefits and capabilities for healthcare delivery. However, the technologies that make these capabilities possible introduce undesirable drawbacks, such as information security threats, which need to be appropriately addressed. Lurking in these threats are information privacy concerns. Addressing them has proven difficult because they often conflict with the information access requirements of healthcare providers. Therefore, it is important to achieve an appropriate balance between these requirements. We contend that information accountability (IA) can achieve this balance. In this paper, we introduce accountable-eHealth (AeH) systems, which are eHealth systems that utilise IA as a measure of information privacy. We discuss how AeH system protocols can successfully achieve the aforementioned balance of requirements. To assess implementation feasibility, we compare the characteristics of AeH systems with Australia's Personally Controlled Electronic Health Record (PCEHR) system, identify similarities, and highlight the differences and the impact those differences would have on the eHealth domain.

Relevance:

20.00%

Publisher:

Abstract:

Medical research represents a substantial departure from conventional medical care. Medical care is patient-orientated, with decisions based on the best interests and/or wishes of the person receiving the care. In contrast, medical research is future-directed. Primarily it aims to contribute new knowledge about illness or disease, or new knowledge about interventions, such as drugs, that impact upon some human condition. Current State and Territory laws and research ethics guidelines in Australia relating to the review of medical research appropriately acknowledge that the functions of medical care and medical research differ. Before a medical research project commences, the study must be reviewed and approved by a Human Research Ethics Committee (HREC). For medical research involving incompetent adults, some jurisdictions require an additional, independent safeguard by way of tribunal or court approval of medical research protocols. This extra review process reflects the uncertainty of medical research involvement and the difficulties surrogate decision-makers of incompetent adults face in making decisions on behalf of others and in deliberating about the risks and benefits of research involvement. Parents of children face the same difficulties when making decisions about their child's research involvement. However, unlike the position concerning incompetent adults, there are no similar safeguards under Australian law in relation to the approval of medical research involving children. This column questions why this discrepancy exists, with a view to generating further dialogue on the topic.

Relevance:

20.00%

Publisher:

Abstract:

This paper describes problems related to the ergonomic assessment of vehicle package design in vehicle systems engineering. The traditional approach, using questionnaire techniques for a subjective assessment of comfort related to package design, is compared with a biomechanical approach; an example is given for ingress design. The biomechanical approach is based on objective postural data. The experimental setup for the study is described and the methods used for the biomechanical analysis are explained. Because the biomechanical assessment requires not only a complex experimental setup but also time-consuming data processing, a systematic reduction and preparation of the biomechanical data for classification with an Artificial Neural Network significantly improves the economy of the biomechanical method.

Relevance:

20.00%

Publisher:

Abstract:

The existence of the Macroscopic Fundamental Diagram (MFD), which relates space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilising the MFD concept have been reported. One of the key requirements for a well-defined MFD is the homogeneity of the area-wide traffic condition across links of similar properties, which cannot be universally expected in the real world. For practical application of the MFD concept, several researchers have identified the factors influencing network homogeneity. However, they did not explicitly take into account the impact of drivers' behaviour and information provision, which significantly affects simulation outputs. This research aims to demonstrate the effect of dynamic information provision on network performance by employing the MFD as a measurement. A microscopic simulator, AIMSUN, is chosen as the experiment platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, different scenarios are simulated in order to investigate how drivers' adaptation to traffic congestion influences network performance with respect to the MFD shape as well as other indicators, such as total travel time. This study confirmed the impact of information provision on the MFD shape and demonstrated the usefulness of the MFD for measuring the benefit of dynamic information provision.
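The MFD itself is a simple aggregate: length-weighted space-mean flow plotted against length-weighted space-mean density over the network's links, one point per time slice. A minimal sketch of that aggregation (the link values below are invented, not data from the study):

```python
def network_mfd_point(links):
    """Space-mean flow and density for one time slice of a network.

    links: list of (flow_veh_per_h, density_veh_per_km, length_km) tuples,
    one per link. Returns (space_mean_flow, space_mean_density), each
    weighted by link length as in the usual MFD definition.
    """
    total_len = sum(length for _, _, length in links)
    q = sum(flow * length for flow, _, length in links) / total_len
    k = sum(dens * length for _, dens, length in links) / total_len
    return q, k

# Hypothetical time slice: a 2 km link in free flow, a 1 km congested link.
q, k = network_mfd_point([(1000, 20, 2.0), (500, 40, 1.0)])
```

Repeating this for every simulated time slice, under different informed-driver ratios, yields the scatter of (density, flow) points whose shape the study compares.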

Relevance:

20.00%

Publisher:

Abstract:

Highway infrastructure development typically requires major capital input. Unless planned properly, such requirements can cause serious financial constraints for investors. The push for sustainability adds a new dimension to the complexity of evaluating highway projects. Finding environmentally and socially responsible solutions for highway construction will improve its potential for acceptance by society and, in many instances, extend the infrastructure's life span. Even so, predicting and determining a project's long-term financial viability can be a precarious exercise. Existing studies in this area have not detailed how to identify and deal with the costs incurred in pursuing sustainability measures in highway infrastructure. This paper provides insights into the major challenges of implementing sustainability in highway project development in terms of financial concerns and obligations. It discusses the results of recent research comprising a literature study and a questionnaire survey of key industry stakeholders involved in highway infrastructure development. The research identified critical cost components relating to sustainability measures based on the perspectives of industry stakeholders. All stakeholders believe sustainability-related costs are an integral part of decision making; however, the importance rating of these costs is relative to each stakeholder's core business objectives. This influences how these cost components are dealt with during the evaluation of highway investment alternatives and their financial implications. This research encourages positive thinking about sustainability among highway infrastructure practitioners. It calls for the construction industry to maximise sustainability deliverables while ensuring financial viability over the life cycle of highway infrastructure projects.

Relevance:

20.00%

Publisher:

Abstract:

Policy makers increasingly recognise that an educated workforce with a high proportion of Science, Technology, Engineering and Mathematics (STEM) graduates is a pre-requisite for a knowledge-based, innovative economy. Over the past ten years, the proportion of first university degrees awarded in Australia in STEM fields has remained below the global average, decreasing from 22.2% in 2002 to 18.8% in 2010 [1]. These trends are mirrored by declines of between 20% and 30% in the proportions of high school students enrolled in science or maths. The trends are not unique to Australia, but their impact is of concern throughout the policy-making community. To redress them, QUT embarked upon a long-term investment strategy to integrate education and research into the physical and virtual infrastructure of the campus, recognising that the expectations of students change as rapidly as technology and learning practices do. To implement this strategy, physical infrastructure refurbishment and re-building is accompanied by upgraded technologies not only for learning but also for research. QUT's vision for its city-based campuses is to create vibrant and attractive places to learn and research, linked strongly to the wider surrounding community. Over a five-year period, the physical infrastructure at the Gardens Point campus was substantially reconfigured in two key stages: (a) a >$50m refurbishment of heritage-listed buildings to encompass public, retail and social spaces, learning and teaching "test beds" and research laboratories, and (b) the demolition of five buildings to be replaced by a $230m, >40,000m2 Science and Engineering Centre designed to accommodate retail, recreation, services, education and research in an integrated, coordinated precinct.
This landmark project is characterised by (i) self-evident, collaborative spaces for learning, research and social engagement; (ii) sustainable building practices and sustainable ongoing operation; and (iii) dynamic and mobile re-configuration of spaces or staffing to meet demand. Innovative spaces allow for transformative, cohort-driven learning and the collaborative use of space to pursue joint class projects. Research laboratories are aggregated, centralised and "on display" to the public, students and staff. A major visualisation space, the largest multi-touch, multi-user facility constructed to date, is a centrepiece feature focused on demonstrating scientific and engineering principles or science-oriented scenes at large scale (e.g. the Great Barrier Reef). Content on this visualisation facility is integrated with regional school curricula and supports an in-house schools program for student and teacher engagement. Researchers are accommodated in combined open-plan and office floor-space (80% open plan) to encourage interdisciplinary engagement and cross-fertilisation of skills, ideas and projects. This combination of spaces re-invigorates the on-campus experience, extends educational engagement across all ages and rapidly enhances research collaboration.

Relevance:

20.00%

Publisher:

Abstract:

Information accountability is seen as a mode of usage control on the Web. Because it has many dimensions, information accountability has been expressed in various ways by computer scientists seeking to address security and privacy in recent times. Information accountability focuses on how users participate in a system and on the underlying policies that govern that participation. Healthcare is a domain in which the principles of information accountability can be utilised well; modern health information systems are Internet based, and the discipline is called eHealth. In this paper, we identify and discuss the goals of accountability systems and present the principles of information accountability. We characterise those principles in eHealth and discuss them contextually. We identify the current impediments to eHealth in terms of the information privacy concerns of eHealth consumers together with the information usage requirements of healthcare providers, and show how information accountability can be used in a healthcare context to address these needs. The challenges of implementing information accountability in eHealth are also discussed in terms of our efforts thus far.

Relevance:

20.00%

Publisher:

Abstract:

This research proposes a method for identifying user expertise in contemporary Information Systems (IS). It also proposes and develops a model for evaluating expertise. The aim of this study was to offer a common instrument that addresses the requirements of a contemporary Information System in a holistic way. This study demonstrates the application of the expertise construct in Information System evaluations, and shows that users of different expertise levels evaluate systems differently.

Relevance:

20.00%

Publisher:

Abstract:

Purpose
This article reports on a research project that explored social media best practice in the public library sector.

Design/methodology/approach
The primary research approach for the project was case study. Two organisations participated in case studies that involved interviews, document analysis, and social media observation.

Findings
The two case study organisations use social media effectively to facilitate participatory networks; however, there have been challenges surrounding its implementation in both organisations. Challenges include negotiating the requirements of governing bodies and broader organisational environments, and managing staff reluctance around the implementations. As social media use continues to grow and libraries continue to take up new platforms, social media must be considered another service point of the virtual branch and, indeed, of the library service as a whole. This acceptance of social media as core business is critical to the successful implementation of social media based activities.

Practical implications
The article provides an empirically grounded discussion of best practice and the conditions that support it. The findings are relevant for information organisations across all sectors and could inform the development of policy and practice in other organisations. This paper contributes to the broader dialogue around best practice in participatory service delivery and social media use in library and information organisations.

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict positive future markets for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres like the Cloud. One of the challenges is providing management of Cloud resources for SaaS that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses the gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented.
A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs); however, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first GGA uses a repair-based method, while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that can minimise the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives.
The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud, and various types of evolutionary algorithms have been developed to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
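As an illustration of the penalty-based constraint handling described above, a toy genetic algorithm that places dependent components onto capacity-limited servers might look like the following. All instance numbers are invented and the operators are textbook ones; this is not the thesis's algorithm:

```python
import random

# Toy instance (all numbers invented): 6 SaaS components with CPU demands,
# 3 servers with CPU capacities, and dependency edges that cost 1 unit of
# communication when their endpoints land on different servers.
DEMAND   = [2, 3, 1, 4, 2, 3]
CAPACITY = [6, 6, 6]
DEPS     = [(0, 1), (1, 2), (3, 4), (4, 5)]

def fitness(plan):
    """Lower is better: split-dependency cost plus a capacity penalty."""
    cost = sum(1 for a, b in DEPS if plan[a] != plan[b])
    load = [0] * len(CAPACITY)
    for comp, srv in enumerate(plan):
        load[srv] += DEMAND[comp]
    overload = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    return cost + 10 * overload  # penalty-based constraint handling

def evolve(pop_size=40, gens=200, seed=1):
    """Generational GA: a plan is a list mapping component -> server."""
    rng = random.Random(seed)
    n, m = len(DEMAND), len(CAPACITY)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fitness)]  # elitism: keep the best plan
        while len(nxt) < pop_size:
            # tournament selection, one-point crossover, point mutation
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
```

A repair-based GGA would instead move components out of overloaded servers after crossover, rather than pricing violations into the fitness as done here.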

Relevance:

20.00%

Publisher:

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it operates under severe constraints: in addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are the calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use: the locations, orientations, lens types and so on of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to solve the problem effectively.
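The ground-plane homography at the heart of the robot-assisted calibration can be estimated from as few as four image-to-ground correspondences, such as those the robot broadcasts. A sketch using the standard Direct Linear Transform (illustrative of the mapping involved, not the dissertation's pipeline):

```python
import numpy as np

def homography_dlt(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image points to ground-plane
    points from >= 4 correspondences via the Direct Linear Transform:
    stack two linear constraints per correspondence and take the null
    vector of the stacked matrix (smallest singular vector).
    """
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

def to_world(H, pt):
    """Map an image point to ground-plane coordinates via H."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

Once each camera holds its own H, any image-plane detection can be placed in the shared global frame, which is what makes cross-camera localisation and handover comparable.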

Relevance:

20.00%

Publisher:

Abstract:

Critical-sized osteochondral defects are clinically challenging, with limited treatment options available. By engineering osteochondral grafts using a patient's own cells and osteochondral scaffolds designed to facilitate cartilage and bone regeneration, osteochondral defects may be treated with fewer complications and better long-term clinical outcomes. Scaffolds can influence the development and structure of the engineered tissue, and there is an increased awareness that osteochondral tissue engineering concepts need to take in vivo complexities into account in order to increase the likelihood of successful osteochondral tissue repair. The developing trend in osteochondral tissue engineering is the utilization of multiphasic scaffolds to recapitulate the multiphasic nature of the native tissue. Cartilage and bone have different structural, mechanical, and biochemical microenvironments. By designing osteochondral scaffolds with tissue-specific architecture, it may be possible to enhance osteochondral repair within a shorter timeframe. While there are promising in vivo outcomes using multiphasic approaches, functional regeneration of osteochondral constructs still remains a challenge. In this review, we provide an overview of in vivo osteochondral repair studies from the past three years and define areas which need improvement in future studies.

Relevance:

20.00%

Publisher:

Abstract:

This article presents a method for making a highly porous biodegradable scaffold that may ultimately be used for tissue engineering. A poly(L-lactide-co-ε-caprolactone) (70:30) (PLCL) scaffold was produced using the solvent casting/leaching-out method, which entails dissolving the polymer and adding a porogen that is then leached out by immersing the scaffold in distilled water. Tensile tests were performed for three types of scaffolds, namely pre-wetted, dried, and UV-irradiated scaffolds, and their mechanical properties were measured. The pre-wetted PLCL scaffold possessed a modulus of elasticity of 0.92 ± 0.09 MPa, a tensile strength of 0.12 ± 0.03 MPa and an ultimate strain of 23 ± 5.3%. No significant differences in the modulus of elasticity, tensile strength, or ultimate strain were found between the pre-wetted, dried, and UV-irradiated scaffolds. The PLCL scaffold was seeded with human fibroblasts in order to evaluate its biocompatibility by Alamar Blue assays. After 10 days of culture, the scaffolds showed good biocompatibility and allowed cell proliferation; however, the fibroblasts stayed essentially at the surface. This study shows the possibility of using the PLCL scaffold under dynamic mechanical conditions for tissue engineering.