917 results for usage-based
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage using frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence infeasible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices at different frequency points for different damage scenarios. The basic idea of the method is to establish features of the damaged structure using FRFs from different measurement points in different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are then identified after reconstructing the available FRF data using PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using the finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
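For illustration, the following Python sketch shows the general flavour of a PCA-based damage index: PCA is fitted to FRFs from the intact state, and the normalised reconstruction residual of new FRFs serves as a simple index. The data, component count and index definition are assumptions for illustration, not the paper's exact formulation.

# Illustrative sketch only: PCA fitted on intact-state FRFs, reconstruction
# residual of new FRFs used as a simple damage index.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_freq = 400                                              # frequency points per FRF
intact = rng.normal(size=(60, n_freq))                    # hypothetical intact-state FRF magnitudes
damaged = intact[:10] + 0.8 * rng.normal(size=(10, n_freq))  # perturbed "damaged" FRFs

pca = PCA(n_components=8).fit(intact)                     # principal subspace of the intact FRFs

def damage_index(frfs):
    """Normalised reconstruction residual after projecting onto the intact subspace."""
    recon = pca.inverse_transform(pca.transform(frfs))
    return np.linalg.norm(frfs - recon, axis=1) / np.linalg.norm(frfs, axis=1)

print("intact  indices:", damage_index(intact[:5]).round(3))
print("damaged indices:", damage_index(damaged[:5]).round(3))

Indices of this kind, paired with known damage locations and severities, could then be used as the input variables to an ANN for condition assessment.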
Abstract:
Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of these models. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to the review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these metrics to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and literature findings, guidance is provided for the development of future airport terminal models.
Abstract:
Appearance-based localization is increasingly used for loop closure detection in metric SLAM systems. Since it relies only upon the appearance-based similarity between images from two locations, it can perform loop closure regardless of accumulated metric error. However, the computation time and memory requirements of current appearance-based methods scale linearly not only with the size of the environment but also with the operation time of the platform. These properties impose severe restrictions on long-term autonomy for mobile robots, as loop closure performance will inevitably degrade with increased operation time. We present a set of improvements to the appearance-based SLAM algorithm CAT-SLAM to constrain computation scaling and memory usage with minimal degradation in performance over time. The appearance-based comparison stage is accelerated by exploiting properties of the particle observation update, and nodes in the continuous trajectory map are removed according to minimal information loss criteria. We demonstrate constant-time and constant-space loop closure detection in a large urban environment, with recall performance exceeding FAB-MAP by a factor of 3 at 100% precision, and investigate the minimum computational and memory requirements for maintaining mapping performance.
Abstract:
Background: Cancer can be a distressing experience for cancer patients and carers, impacting on psychological, social, physical and spiritual functioning. However, health professionals often fail to detect distress in their patients due to time constraints and a lack of experience. Also, with the focus on the patient, carer needs are often overlooked. This study investigated the acceptability of brief distress screening with the Distress Thermometer (DT) and Problem List (PL) to operators of a community-based telephone helpline, as well as to cancer patients and carers calling the service. Methods: Operators (n = 18) monitored usage of the DT and PL with callers (cancer patients/carers, >18 years, and English-speaking) from September to December 2006 (n = 666). The DT is a single-item, 11-point scale to rate level of distress. The associated PL identifies the cause of distress. Results: The DT and PL were used with 90% of eligible callers, most providing valid responses. Benefits included having an objective, structured and consistent means for distress screening and triage to supportive care services. Reported challenges included apparent inappropriateness of the tools due to the nature of the call or level of caller distress, the DT numeric scale, and the level of operator training. Conclusions: We observed positive outcomes to using the DT and PL, although operators reported some challenges. Overcoming these challenges may improve distress screening, particularly by less experienced clinicians, and further development of the PL items and DT scale may assist with administration. The DT and PL allow clinicians to direct and prioritise interventions or referrals, although ongoing training and support is critical in distress screening.
The increased popularity of mopeds and motor scooters: exploring usage patterns and safety outcomes
Abstract:
Increased use of powered two-wheelers (PTWs) often underlies increases in the number of reported crashes, prompting research into PTW safety. PTW riders are overrepresented in crash and injury statistics relative to exposure and, as such, are considered vulnerable road users. PTW use has increased substantially over the last decade in many developed countries. One such country is Australia, where moped and scooter use has increased at a faster rate than motorcycle use in recent years. Increased moped use is particularly evident in the State of Queensland, which is one of four Australian jurisdictions where moped riding is permitted for car licence holders and a motorcycle licence is not required. A moped is commonly a small motor scooter and is limited to a maximum design speed of 50 km/h and a maximum engine cylinder capacity of 50 cubic centimetres. Scooters exceeding either of these specifications are classed as motorcycles in all Australian jurisdictions. While an extensive body of knowledge exists on motorcycle safety, some of which is relevant to moped and scooter safety, the latter PTW types have received comparatively little focused research attention. Much of the research on moped safety to date has been conducted in Europe, where mopeds have been popular since the mid-20th century, while some studies have also been conducted in the United States. This research is of limited relevance to Australia due to socio-cultural, economic, regulatory and environmental differences. Moreover, while some studies have compared motorcycles to mopeds in terms of safety, no research to date has specifically examined the differences and similarities between mopeds and larger scooters, or between larger scooters and motorcycles. To address the need for a better understanding of moped and scooter use and safety, the current program of research involved three complementary studies designed to achieve the following aims: (1) develop better knowledge and understanding of moped and scooter usage trends and patterns; and (2) determine the factors leading to differences in moped, scooter and motorcycle safety. Study 1 involved six-monthly observations of PTW types in inner-city parking areas of Queensland's capital city, Brisbane, to monitor and quantify the types of PTW in use over a two-year period. Study 2 involved an analysis of Queensland PTW crash and registration data, primarily comparing the police-reported crash involvement of mopeds, scooters and motorcycles over a five-year period (N = 7,347). Study 3 employed both qualitative and quantitative methods to examine moped and scooter usage in two components: (a) four focus group discussions with Brisbane-based Queensland moped and scooter riders (N = 23); and (b) a state-wide survey of Queensland moped and scooter riders (N = 192). Study 1 found that of the PTW types parked in inner-city Brisbane over the study period (N = 2,642), more than one third (36.1%) were mopeds or larger scooters. The number of PTWs observed increased at each six-monthly phase, but there were no significant changes in the proportions of PTW types observed across study phases. There were no significant differences in the proportions or numbers of PTW types observed by season. Study 2 revealed some important differences between mopeds, scooters and motorcycles in terms of safety and usage through analysis of crash and registration data. All Queensland PTW registrations doubled between 2001 and 2009, but there was an almost fifteen-fold increase in moped registrations.
Mopeds subsequently increased as a proportion of Queensland registered PTWs from 1.2 percent to 8.8 percent over this nine year period. Moped and scooter crashes increased at a faster rate than motorcycle crashes over the five year study period from July 2003 to June 2008, reflecting their relatively greater increased usage. Crash rates per 10,000 registrations for the study period were only slightly higher for mopeds (133.4) than for motorcycles and scooters combined (124.8), but estimated crash rates per million vehicle kilometres travelled were higher for mopeds (6.3) than motorcycles and scooters (1.7). While the number of crashes increased for each PTW type over the study period, the rate of crashes per 10,000 registrations declined by 40 percent for mopeds compared with 22 percent for motorcycles and scooters combined. Moped and scooter crashes were generally less severe than motorcycle crashes and this was related to the particular crash characteristics of the PTW types rather than to the PTW types themselves. Compared to motorcycle and moped crashes, scooter crashes were less likely to be single vehicle crashes, to involve a speeding or impaired rider, to involve poor road conditions, or to be attributed to rider error. Scooter and moped crashes were more likely than motorcycle crashes to occur on weekdays, in lower speed zones and at intersections. Scooter riders were older on average (39) than moped (32) and motorcycle (35) riders, while moped riders were more likely to be female (36%) than scooter (22%) or motorcycle riders (7%). The licence characteristics of scooter and motorcycle riders were similar, with moped riders more likely to be licensed outside of Queensland and less likely to hold a full or open licence. The PTW type could not be identified in 15 percent of all cases, indicating a need for more complete recording of vehicle details in the registration data. The focus groups in Study 3a and the survey in Study 3b suggested that moped and scooter riders are a heterogeneous population in terms of demographic characteristics, riding experience, and knowledge and attitudes regarding safety and risk. The self-reported crash involvement of Study 3b respondents suggests that most moped and scooter crashes result in no injury or minor injury and are not reported to police. Study 3 provided some explanation for differences observed in Study 2 between mopeds and scooters in terms of crash involvement. On the whole, scooter riders were older, more experienced, more likely to have undertaken rider training and to value rider training programs. Scooter riders were also more likely to use protective clothing and to seek out safety-related information. This research has some important practical implications regarding moped and scooter use and safety. While mopeds and scooters are generally similar in terms of usage, and their usage has increased, scooter riders appear to be safer than moped riders due to some combination of superior skills and safer riding behaviour. It is reasonable to expect that mopeds and scooters will remain popular in Queensland in future and that their usage may further increase, along with that of motorcycles. Future policy and planning should consider potential options for encouraging moped riders to acquire better riding skills and greater safety awareness. While rider training and licensing appears an obvious potential countermeasure, the effectiveness of rider training has not been established and other options should also be strongly considered. 
Such options might include rider education and safety promotion, while interventions could also target other road users and urban infrastructure. Future research is warranted in regard to moped and scooter safety, particularly where the use of those PTWs has increased substantially from low levels. Research could address areas such as rider training and licensing (including program evaluations), the need for more detailed and reliable data (particularly crash and exposure data), protective clothing use, risks associated with lane splitting and filtering, and tourist use of mopeds. Some of this research would likely be relevant to motorcycle use and safety, as well as that of mopeds and scooters.
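For reference, the exposure-normalised rates quoted above (crashes per 10,000 registrations and per million vehicle kilometres travelled) are simple ratios; the short Python sketch below shows the calculation with a hypothetical cohort rather than the study's actual counts.

# Worked example of the exposure-normalised crash rates discussed above.
# The cohort figures are illustrative placeholders, not the study's data.
def rate_per_10k_registrations(crashes, registrations):
    return crashes / registrations * 10_000

def rate_per_million_vkt(crashes, vehicle_km_travelled):
    return crashes / vehicle_km_travelled * 1_000_000

# e.g. a hypothetical cohort of 5,000 registered mopeds with 67 reported crashes
print(rate_per_10k_registrations(67, 5_000))        # ~134 crashes per 10,000 registrations
print(rate_per_million_vkt(67, 10_500_000))         # ~6.4 crashes per million km travelled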
Abstract:
Academic libraries around the world often have to justify their high maintenance costs. These costs are commonly justified by the belief that regular use of an academic library improves students' grades. However, this is a difficult claim to support; demonstrating the link between library use and student outcomes is therefore critical to ensuring that library investment continues. Questionnaires and interviews were conducted and the findings were analysed to derive users' perceptions. The findings revealed interesting results regarding how users make use of the library and how users feel the library improves their personal performance. Overall, the perception of all three groups of the academic libraries within Kuwait is positive; however, many users are dissatisfied with some academic library services. Students answered positively regarding their grades and use of the academic library. Academics and administrators were generally positive and offered an experienced insight into the quality of the library. This study offers the first perception-based results in Kuwait. The inclusion of administrators' perceptions is also novel in terms of the Gulf States. A refined model was designed based on the overall findings of the study. This model can be applied to any academic library, regardless of size or collection type. Based on the findings, the researcher recommends taking the following points into consideration in order to improve library services and facilities for all users: improvements could be made in the structure of library training courses, and academic libraries should provide flexible spaces for individual and group study as well as social activities.
Abstract:
A suboptimal resource allocation algorithm for an Orthogonal Frequency Division Multiplexing (OFDM) based cooperative scheme is proposed. The system consists of multiple relays. The subcarrier space is divided into blocks, and relays participating in cooperation are allocated specific blocks to be used with a user. To ensure unique subcarrier assignment, the system is constrained such that the same block cannot be used by more than one user. Users are given fair block assignments, while no restriction is placed on the maximum number of blocks a relay can employ. Forced cost-based decisions [1] are used for block allocation. Simulation results show that this scheme outperforms a non-cooperating scheme with sequential allocation in terms of power usage.
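As a rough illustration of the allocation constraints described above (one user per block, equal block quotas per user, any relay free to serve multiple blocks), the following Python sketch uses a simple greedy minimum-cost rule in place of the forced cost-based decisions of [1]; the dimensions and cost values are arbitrary assumptions.

# Simplified greedy analogue of the block allocation constraints: each block is
# assigned to exactly one (user, relay) pair, users receive equal block quotas,
# and relays may serve any number of blocks.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_relays, n_blocks = 4, 3, 12
cost = rng.uniform(0.5, 2.0, size=(n_blocks, n_users, n_relays))  # hypothetical power cost

quota = {u: n_blocks // n_users for u in range(n_users)}          # fairness: equal blocks per user
allocation = {}                                                   # block -> (user, relay)

for b in range(n_blocks):
    # only users with remaining quota may receive this block
    candidates = [(cost[b, u, r], u, r)
                  for u in range(n_users) if quota[u] > 0
                  for r in range(n_relays)]
    _, u, r = min(candidates)                                     # pick lowest-cost pair
    allocation[b] = (u, r)
    quota[u] -= 1

print(allocation)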
Curbing resource consumption using team-based feedback: paper printing in a longitudinal case study
Abstract:
This paper details a team-based feedback approach for reducing resource consumption, using paper printing within office environments as a case study. It communicates the print usage of each participant's team rather than the participant's individual print usage. Feedback is provided weekly via emails and contains normative information, along with eco-metrics and team-based comparative statistics. The approach was empirically evaluated to study the effectiveness of the feedback method. The experiment comprised 16 people belonging to 4 teams, with data on their print usage gathered over 58 weeks, using the first 30-35 weeks as a baseline. The study showed a significant reduction in individual printing, averaging 28%. The experiment confirms the underlying hypothesis that participants are persuaded to reduce their print usage in order to improve the overall printing behaviour of their teams. The research provides clear pathways for future research to qualitatively investigate our findings.
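A minimal sketch of the kind of weekly, team-level comparative statistic such feedback emails might contain is shown below; the team names, page counts and comparison metric are illustrative assumptions, not the study's actual implementation.

# Toy weekly team comparison for normative feedback emails.
from collections import defaultdict

# (team, person, pages printed this week) - illustrative records
print_log = [
    ("TeamA", "p1", 40), ("TeamA", "p2", 25),
    ("TeamB", "p3", 70), ("TeamB", "p4", 15),
]

team_totals = defaultdict(int)
for team, _, pages in print_log:
    team_totals[team] += pages

overall_mean = sum(team_totals.values()) / len(team_totals)
for team, total in sorted(team_totals.items()):
    delta = 100 * (total - overall_mean) / overall_mean
    print(f"{team}: {total} pages this week ({delta:+.0f}% vs. average team)")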
Abstract:
Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats for these devices become increasingly dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level. Therefore, third-party developers have the opportunity to develop kernel-based low-level security tools, which is unusual for smartphone platforms. Android quickly gained popularity among smartphone developers and even beyond, since it is based on Java on top of "open" Linux, in comparison to former proprietary platforms which have very restrictive SDKs and corresponding APIs. Symbian OS, for example, holding the greatest market share among all smartphone OSs, closed critical APIs to common developers and introduced application certification. This was done because this OS was the main target for smartphone malware in the past; in fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. In this work, we present our current results in analyzing the security of Android smartphones with a focus on its Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check their security functionality. We survey well-accepted security mechanisms and tools which can increase device security. We provide descriptions of how to adopt these security tools on the Android kernel, and provide an analysis of their overhead in terms of resource usage. As open smartphones are released and may increase their market share, similar to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature and intrusion detection methods in the Android environment. We focus on monitoring events in the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment. Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android. We perform a statistical analysis of the function calls used by these applications. The results of the analysis can be compared to newly installed applications for detecting significant differences. Additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
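For flavour, a minimal Python sketch of this last idea: list the dynamic symbols an ELF binary imports (here via binutils' nm), compare them against a baseline profile built from known-good executables, and apply a trivial rule-based suspiciousness decision. The watch-list, decision rule and file path are illustrative assumptions, not the paper's actual analysis.

# Rough sketch of static function call analysis on an ELF binary.
import subprocess
from collections import Counter

SUSPICIOUS = {"fork", "execve", "ptrace", "dlopen"}   # hypothetical watch-list

def imported_symbols(path):
    """Undefined (imported) dynamic symbols of an ELF binary, via `nm -D -u`."""
    out = subprocess.run(["nm", "-D", "-u", path], capture_output=True, text=True)
    return {line.split()[-1].split("@")[0] for line in out.stdout.splitlines() if line.strip()}

def suspiciousness(path, baseline):
    """Toy decision rule: flag binaries importing watched symbols never seen in the baseline."""
    syms = imported_symbols(path)
    unseen = {s for s in syms if baseline[s] == 0}
    hits = syms & SUSPICIOUS
    if hits & unseen:
        return "suspicious"
    return "needs review" if hits else "likely benign"

baseline = Counter()   # would normally be built from the ~105 stock ELF executables
print(suspiciousness("/bin/ls", baseline))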
Abstract:
Private data stored on smartphones is a precious target for malware attacks. A constantly changing environment, e.g. switching network connections, can cause unpredictable threats and requires an adaptive approach to access control. Context-based access control uses dynamic environmental information and incorporates it into access decisions. We propose an "ecosystem-in-an-ecosystem" which acts as a secure container for trusted software, aimed at enterprise scenarios where users are allowed to use private devices. We have implemented a proof-of-concept prototype of an access control framework that processes changes to low-level sensors and semantically enriches them, adapting access control policies to the current context. This allows the user or the administrator to maintain fine-grained control over resource usage by compliant applications. Hence, resources local to the trusted container remain under the control of the enterprise policy. Our results show that context-based access control can be done on smartphones without major performance impact.
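The sketch below illustrates the general idea of context-based access decisions: low-level sensor values are enriched into a semantic context, and resource access rules are re-evaluated against it. The context attributes, policy structure and resources are illustrative assumptions, not the prototype's actual design.

# Minimal sketch of context-based access control over container resources.
from dataclasses import dataclass

@dataclass
class Context:
    network: str       # e.g. "corporate_wifi", "public_wifi", "cellular"
    location: str      # e.g. "office", "unknown"

POLICY = {
    # resource -> predicate over the current context (sample enterprise policy)
    "crm_database": lambda c: c.network == "corporate_wifi",
    "email":        lambda c: c.network in {"corporate_wifi", "cellular"},
    "camera":       lambda c: c.location != "office",
}

def allowed(resource, ctx):
    rule = POLICY.get(resource)
    return bool(rule and rule(ctx))

ctx = Context(network="public_wifi", location="unknown")   # context derived from sensors
print(allowed("crm_database", ctx))   # False: sensitive data blocked off the corporate network
print(allowed("email", ctx))          # False on public Wi-Fi under this sample policy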
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives, and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an R² goodness of fit of 0.9994 and 0.9982 respectively over a 10-hour test period. The utility of the framework is demonstrated on a number of usage scenarios, including real-time monitoring and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
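As background for the queuing side of such a hybrid, the sketch below computes standard M/M/1 metrics from a Poisson arrival rate and an exponential service rate. The actual HQBN couples richer, context-dependent distributions with the Bayesian Network; the rates here are purely illustrative.

# Textbook M/M/1 queue metrics: Poisson arrivals at rate lam, exponential service at rate mu.
def mm1_metrics(lam, mu):
    assert lam < mu, "queue is unstable when arrival rate >= service rate"
    rho = lam / mu                  # utilisation
    L = rho / (1 - rho)             # mean number in system
    W = 1 / (mu - lam)              # mean time in system
    Wq = rho / (mu - lam)           # mean waiting time in queue
    return {"utilisation": rho, "mean_in_system": L, "mean_time_in_system": W, "mean_wait": Wq}

# e.g. 90 passengers/hour arriving at a processing point that serves 120/hour
print(mm1_metrics(lam=90, mu=120))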
Abstract:
Carrying capacity assessments model a population's potential self-sufficiency. A crucial first step in the development of such modelling is to examine the basic resource-based parameters defining the population's production and consumption habits. These parameters include basic human needs such as food, water, shelter and energy, together with climatic, environmental and behavioural characteristics. Each of these parameters imparts land-usage requirements in different ways and to varying degrees, so their incorporation into carrying capacity modelling also differs. Given that the availability and values of production parameters may differ between locations, no two carrying capacity models are likely to be exactly alike. However, the essential parameters themselves can remain consistent, so one example, the Carrying Capacity Dashboard, is offered as a case study to highlight one way in which these parameters are utilised. While examples exist of findings made from carrying capacity assessment modelling, to date, guidelines for replicating such studies in other regions and at other scales have largely been overlooked. This paper addresses such shortcomings by describing a process for the inclusion and calibration of the most important resource-based parameters in a way that could be repeated elsewhere.
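A toy illustration of how resource-based parameters translate into a per-capita land requirement in a carrying capacity calculation is given below; all parameter names and figures are hypothetical placeholders, not values from the Carrying Capacity Dashboard.

# Hypothetical per-capita land requirements (hectares per person per year).
PER_CAPITA_HA = {
    "grains":       0.12,
    "vegetables":   0.03,
    "protein":      0.30,   # grazing/feed land
    "energy_crops": 0.10,
    "shelter_and_infrastructure": 0.02,
}

def people_supported(available_hectares, yield_factor=1.0):
    """Population supportable given available land and a climate/yield adjustment."""
    per_person = sum(PER_CAPITA_HA.values()) / yield_factor
    return available_hectares / per_person

print(round(people_supported(10_000, yield_factor=0.9)))   # people supported by 10,000 ha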
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE, and 2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users' acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective Video Quality Assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
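As a rough illustration of the regression idea, the Python sketch below fits a logistic model predicting binary acceptability from bitrate, display resolution and a content-motion feature. The data are synthetic and the predictors and coefficients are assumptions; the paper's actual A-QoE models differ.

# Illustrative acceptability regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
bitrate = rng.uniform(100, 4000, n)              # kbps
height = rng.choice([360, 480, 720, 1080], n)    # display height (px)
motion = rng.uniform(0, 1, n)                    # content motion intensity

# synthetic ground truth: higher bitrate helps, high motion hurts
logit = 0.002 * bitrate - 2.5 * motion + 0.001 * height - 1.0
accept = (1 / (1 + np.exp(-logit)) > rng.uniform(size=n)).astype(int)

X = np.column_stack([bitrate, height, motion])
model = LogisticRegression(max_iter=1000).fit(X, accept)
print(model.predict_proba([[1500, 720, 0.4]])[0, 1])   # predicted acceptability probability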
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different research fields. In this paper, feature selection methods, implementation algorithms and applications of text classification are first introduced. However, because there is much noise in the knowledge extracted by current data-mining techniques for text classification, much uncertainty arises in the classification process, produced by both knowledge extraction and knowledge usage; therefore, more innovative techniques and methods are needed to improve the performance of text classification. Further improving the process of knowledge extraction and the effective utilisation of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, which uses Rough Set decision techniques to more precisely classify the textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies, to demonstrate the Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision, to set up an innovative evaluation metric named CEI which is effective for the performance assessment of similar research, and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
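To make the Rough Set notions concrete, the sketch below computes lower and upper approximations of a target category: documents with identical condition-attribute values form indiscernibility classes, the lower approximation contains documents certainly in the category, and the upper approximation those possibly in it. The attributes and labels are toy data, not the paper's corpus or CEI metric.

# Toy rough-set lower/upper approximations over a handful of documents.
from collections import defaultdict

# (doc_id, (condition attribute values), label)
docs = [
    (1, ("sport", "short"), "relevant"),
    (2, ("sport", "short"), "irrelevant"),   # conflicts with doc 1 -> boundary region
    (3, ("finance", "long"), "relevant"),
    (4, ("finance", "short"), "irrelevant"),
]

classes = defaultdict(set)                   # indiscernibility classes
for doc_id, features, _ in docs:
    classes[features].add(doc_id)

target = {d for d, _, label in docs if label == "relevant"}
lower = {d for c in classes.values() if c <= target for d in c}   # certainly relevant
upper = {d for c in classes.values() if c & target for d in c}    # possibly relevant

print("lower approximation:", lower)   # {3}
print("upper approximation:", upper)   # {1, 2, 3}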
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions from real-world systems that could otherwise be expensive or impractical. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail is used, and it is computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are however fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster runs. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focusing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
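To make the asset/agent separation concrete, here is a small sketch of the pattern in Python (MODAM itself is built on Java, OSGi and Eclipse plugins; the class names and figures below are illustrative assumptions): the asset holds only physical characteristics, while an agent supplies the behaviour, so the same asset description can be reused under different behaviours.

# Asset (physical characteristics) kept separate from agent (behaviour).
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    capacity_kwh: float          # physical characteristics only
    depth_of_discharge: float
    stored_kwh: float = 0.0

class PeakShavingAgent:
    """Behaviour: discharge the battery when network demand exceeds a threshold."""
    def __init__(self, asset, demand_threshold_kw):
        self.asset, self.threshold = asset, demand_threshold_kw

    def step(self, demand_kw):
        if demand_kw > self.threshold and self.asset.stored_kwh > 0:
            usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
            released = min(self.asset.stored_kwh, usable, demand_kw - self.threshold)
            self.asset.stored_kwh -= released
            return released
        return 0.0

battery = BatteryAsset(capacity_kwh=10, depth_of_discharge=0.8, stored_kwh=8)
agent = PeakShavingAgent(battery, demand_threshold_kw=5)
print(agent.step(demand_kw=7))   # the same BatteryAsset could instead be driven by another agent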