910 results for key scheduling algorithm
Abstract:
Handling information overload online is a big challenge from the user's point of view, especially as the number of websites grows rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help identify relevant information that a user may like. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets consisting of multiple users and multiple searches, each with many attributes. Two-dimensional data analysis methods may often overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of the similarity of their features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
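As a rough illustration of the profile-building step described above, the sketch below decomposes a toy user x search-term x object tensor with the tensorly library and reads off a user's strongest latent dimensions. The tensor shape, rank and library choice are illustrative assumptions, not the thesis's actual configuration.

```python
# A minimal sketch of tensor-based user profiling, assuming the tensorly
# library; the 20 x 15 x 10 usage tensor and rank 3 are made up.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# Toy usage tensor: 20 users x 15 search terms x 10 objects (e.g. cars).
usage = tl.tensor(rng.random((20, 15, 10)))

# CP/PARAFAC decomposition into three rank-3 factor matrices.
cp = parafac(usage, rank=3)
user_factors, term_factors, object_factors = cp.factors

# Individual user profile: the top-scoring latent dimensions for one
# user, extracted from the user component matrix after decomposition.
user_id = 0
top_dims = np.argsort(user_factors[user_id])[::-1]
print(f"user {user_id} strongest latent dimensions: {top_dims}")
```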
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure that repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
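The random sequential box-covering algorithm mentioned above is simple to state in one common variant: repeatedly pick an uncovered node at random as a box centre and assign to its box all still-uncovered nodes within a given radius. A minimal sketch, assuming an unweighted networkx graph; names and the test graph are illustrative.

```python
# Random sequential box-covering, one common variant: centres are drawn
# uniformly among the not-yet-covered nodes.
import random
import networkx as nx

def random_sequential_box_count(G, radius):
    """Count boxes of the given radius needed to cover every node."""
    uncovered = set(G.nodes())
    n_boxes = 0
    while uncovered:
        centre = random.choice(list(uncovered))
        # All still-uncovered nodes within `radius` hops join this box.
        ball = nx.single_source_shortest_path_length(G, centre, cutoff=radius)
        uncovered -= set(ball)
        n_boxes += 1
    return n_boxes

# The fractal dimension d_B is then estimated from the scaling
# N_B(l_B) ~ l_B ** (-d_B), e.g. by fitting log N_B against log l_B
# over several box sizes.
G = nx.barabasi_albert_graph(500, 3)
for r in (1, 2, 3):
    print(r, random_sequential_box_count(G, r))
```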
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of an edge between any two nodes as the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence a larger Hurst exponent, tend to have a smaller fractal dimension, and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
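For reference, the standard HVG construction links two time points whenever every intermediate value lies strictly below both endpoints. The sketch below is a straightforward O(n^2) implementation of that criterion, with a plain random walk standing in for fractional Brownian motion.

```python
# Horizontal visibility graph: time points i < j are linked iff
# x_k < min(x_i, x_j) for every k strictly between them.
import random
import networkx as nx

def horizontal_visibility_graph(series):
    n = len(series)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                G.add_edge(i, j)
    return G

# Example: an HVG from a short random walk (a crude stand-in for
# fractional Brownian motion with Hurst exponent H = 0.5).
walk, x = [], 0.0
for _ in range(200):
    x += random.gauss(0, 1)
    walk.append(x)
G = horizontal_visibility_graph(walk)
print(G.number_of_nodes(), G.number_of_edges())
```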
Abstract:
Key establishment is a crucial cryptographic primitive for building secure communication channels between two parties in a network. It has been studied extensively in theory and widely deployed in practice. In the research literature a typical protocol in the public-key setting aims for key secrecy and mutual authentication. However, there are many important practical scenarios where mutual authentication is undesirable, such as in anonymity networks like Tor, or is difficult to achieve due to insufficient public-key infrastructure at the user level, as is the case on the Internet today. In this work we are concerned with the scenario where two parties establish a private shared session key, but only one party authenticates to the other; in fact, the unauthenticated party may wish to have strong anonymity guarantees. We present a desirable set of security, authentication, and anonymity goals for this setting and develop a model which captures these properties. Our approach allows for clients to choose among different levels of authentication. We also describe an attack on a previous protocol of Øverlier and Syverson, and present a new, efficient key exchange protocol that provides one-way authentication and anonymity.
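The paper's own protocol is not reproduced here, but the one-way pattern it targets can be illustrated by a generic signed Diffie-Hellman sketch: only the server holds a long-term identity key, so the client stays anonymous while still authenticating its peer. The use of the Python cryptography library, X25519/Ed25519, and all names below are illustrative assumptions, not the authors' construction.

```python
# A generic "signed Diffie-Hellman" sketch of one-way authenticated key
# exchange; NOT the paper's protocol. Only the server authenticates.
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, serialization

RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Server's long-term signing key; its public half is known to clients.
server_identity = Ed25519PrivateKey.generate()

# Both sides generate ephemeral DH keys; the client's is unauthenticated.
client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()
client_pub = client_eph.public_key().public_bytes(*RAW)
server_pub = server_eph.public_key().public_bytes(*RAW)

# Server signs the transcript so the client knows who it is talking to.
signature = server_identity.sign(client_pub + server_pub)

# Client verifies before deriving keys (raises InvalidSignature if forged).
server_identity.public_key().verify(signature, client_pub + server_pub)

# Both ends can now derive the same session key from the DH secret.
shared = client_eph.exchange(X25519PublicKey.from_public_bytes(server_pub))
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"one-way-auth-demo").derive(shared)
print(session_key.hex())
```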
Abstract:
The increasing stock of aging office buildings will see significant growth in retrofitting projects in Australian capital cities. Stakeholders of refitting works will also need to take on the sustainability challenge and realize tangible outcomes through project delivery. Traditionally, decision making for aged buildings, when facing the alternatives, is economically driven and on an ad hoc basis. This leads to a tendency either to delay refitting for as long as possible, thus causing building conditions to deteriorate, or simply to demolish and rebuild with an unjustified financial burden. The technologies involved are often limited to the typical strip-clean and repartition with dry walls and office cubicles. Changing business operational patterns, the efficiency of office space, and the demand for an improved workplace environment will require more innovative and intelligent approaches to refurbishing office buildings. For example, such projects may need to respond to political, social, environmental and financial implications. There is a need for the total consideration of building structural assessment; modelling of operating and maintenance costs; new architectural and engineering designs that maximise the utility of the existing structure and the resulting productivity improvement; and specific construction management procedures, including procurement methods, workflow and scheduling, and occupational health and safety. Recycling potential and conformance to codes may be other major issues. This paper introduces examples of Australian research projects which provide a more holistic approach to decision making for refurbishing office space, using appropriate building technologies and products, assessment of residual service life, floor space optimisation and project procurement in order to bring about sustainable outcomes. The paper also discusses a specific case study on critical factors that influence key building components for these projects, and issues for integrated decision support when dealing with the refurbishment, and indeed the “re-life”, of office buildings.
Abstract:
The lack of a satisfactory consensus for characterizing system intelligence, and of structured analytical decision models, has inhibited developers and practitioners from understanding and configuring optimum intelligent building systems in a fully informed manner. So far, little research has been conducted in this area. This research is designed to identify the key intelligence indicators and to develop analytical models for computing the system intelligence score of a smart building system in an intelligent building. The integrated building management system (IBMS) was used as an illustrative example to present the framework. The models presented in this study applied system intelligence theory and the conceptual analytical framework. A total of 16 key intelligence indicators were first identified from a general survey. Then, two multi-criteria decision making (MCDM) approaches, the analytic hierarchy process (AHP) and the analytic network process (ANP), were employed to develop the system intelligence analytical models. The top intelligence indicators of the IBMS include: self-diagnosis of operation deviations; an adaptive limiting control algorithm; and year-round time schedule performance. The developed conceptual framework was then transformed into a practical model. The effectiveness of the practical model was evaluated by means of expert validation. The main contribution of this research is to promote understanding of the intelligence indicators and to set the foundation for a systemic framework that provides developers and building stakeholders with a consolidated, inclusive tool for evaluating the system intelligence of proposed component design configurations.
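To make the AHP step concrete, the sketch below derives indicator weights from a pairwise comparison matrix via the principal eigenvector, together with Saaty's consistency ratio. The 3x3 judgments over three of the reported top indicators are invented for illustration; they are not the survey's data.

```python
# A minimal AHP sketch for weighting intelligence indicators.
import numpy as np

indicators = ["self-diagnosis", "adaptive limiting control",
              "year-round time schedule performance"]
# A[i, j] = strength of preference of indicator i over indicator j on
# Saaty's 1-9 scale; reciprocal by construction. Values are made up.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority vector = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency: CI = (lambda_max - n)/(n - 1); random index RI = 0.58
# for n = 3; judgments are usually accepted when CR = CI/RI < 0.1.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
for name, w in zip(indicators, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")
```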
Abstract:
In maintaining quality of life, preventative health is an important area in which the performance of pro-social behaviours provides benefits to the individuals who perform them as well as to society. The establishment of the Preventative Health Taskforce in Australia demonstrates the significance of preventative health; the Taskforce aims to provide governments and health providers with evidence-based advice on preventative health issues (Preventative Health Taskforce, 2009). As preventative health behaviours are voluntary, there needs to be a value proposition for consumers to sustain this behaviour (Dann, 2008; Kotler and Lee, 2008). Customer value has been shown to influence repeat behaviour (McDougall and Levesque, 2000), word-of-mouth (Hartline and Jones, 1999), and attitudes (Dick and Basu, 2008). However, to date there is little research that investigates the source of value for preventative health services. This qualitative study explores and identifies three categories of sources that influence four dimensions of value – functional, emotional, social and altruistic (Holbrook, 2006). A conceptual model containing five propositions outlining these relationships is presented. This study provides evidence-based research that reveals the sources of value that influence individuals' decisions to perform pro-social behaviours in the long term through their use of preventative health services. This research uses BreastScreen Queensland (BSQ), a cancer screening service, as the service context.
Abstract:
Purpose -- DB clients play a vital role in the delivery of the DB system, and clients' competences are critical to the success of DB projects. Most DB clients, however, remain inexperienced with the DB system. This study therefore aims to identify the key competences that DB clients should possess to ensure the success of DB projects in the construction market of China. Design/methodology/approach -- Five semi-structured face-to-face interviews and a two-round Delphi questionnaire survey were conducted in the construction market of China to identify the key competences of DB clients. Rankings were assigned to these key competences on the basis of their relative importance. Findings -- Six ranked key competences of DB clients were identified, namely: (1) the ability to clearly define project scope and objectives; (2) financial capacity for the projects; (3) capacity in contract management; (4) an adequate staff or consulting team; (5) effective coordination with DB contractors; and (6) experience with similar design-build projects. Calculation of Kendall's coefficient of concordance (W) indicates a statistically significant consensus of panel experts on these top six key competences. Practical implications -- Clients should clearly understand the competence requirements of DB projects and should assess their DB capability before opting for the DB route. Originality/value -- The examination of DB clients' key competences will help clients deepen their understanding of the DB system. DB clients can also use the research findings as guidelines to improve their DB competence.
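For reference, Kendall's coefficient of concordance over m experts ranking n items is W = 12S / (m^2(n^3 - n)), where S is the sum of squared deviations of the rank sums from their mean. A minimal sketch with an invented rank matrix (the Delphi panel's actual rankings are not given in the abstract):

```python
# Kendall's W for agreement among Delphi panellists, standard no-ties
# formula. The 5-expert x 6-competence rank matrix is made up.
import numpy as np

# ranks[i, j] = rank expert i assigns competence j (1 = most important).
ranks = np.array([[1, 2, 3, 4, 5, 6],
                  [2, 1, 3, 5, 4, 6],
                  [1, 3, 2, 4, 6, 5],
                  [2, 1, 4, 3, 5, 6],
                  [1, 2, 3, 5, 4, 6]])
m, n = ranks.shape
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))
print(f"Kendall's W = {W:.3f}")  # 1 = perfect agreement, 0 = none
```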
Abstract:
Design-builders play a vital role in the success of DB projects. In the construction market of the People's Republic of China, however, most design-builders lack adequate competences to conduct DB projects successfully. The objective of this study is therefore to identify the key competences that design-builders should possess, not only to ensure the success of DB projects but also to acquire competitive advantages in the DB market. Five semi-structured face-to-face interviews and a two-round Delphi questionnaire survey were conducted to identify the key competences of design-builders. Rankings were assigned to these key competences on the basis of their relative importance. Six ranked key competences of design-builders were identified, namely: (1) experience with similar DB projects; (2) capability in corporate management; (3) a combination of building techniques and design expertise; (4) financial capability for DB projects; (5) enterprise qualification and scale; and (6) credit records and reputation in the industry. Design-builders can use these research findings as guidelines to improve their DB competence. The findings will also be useful to clients during the selection of design-builders.
Abstract:
The design-build system has been demonstrated to be an effective delivery method and has gained popularity worldwide. Although an increasing number of clients are adopting the DB method in China, most of them remain inexperienced with the method. The objective of this study is therefore to identify the key competences that a client or its consultant should possess to ensure the success of DB projects. Face-to-face interviews and a two-round Delphi questionnaire survey were conducted, identifying the following six key competences of clients: (1) the ability to clearly articulate project scope and objectives; (2) financial capacity for DB projects; (3) capability in contract management; (4) an adequate staff or consulting team; (5) effective coordination with contractors; and (6) experience with similar DB projects. This study will hopefully provide clients with measures to evaluate their DB competence and further promote their understanding of the DB system in the PRC.
Abstract:
This research deals with an innovative methodology for optimising the coal train scheduling problem. Building on our previously published work, generic solution techniques are developed by utilising a “toolbox” of well-solved standard scheduling problems. According to our analysis, the coal train scheduling problem can essentially be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem with some minor constraints. To construct feasible train schedules, an innovative constructive algorithm called the SLEK algorithm is proposed. To optimise the train schedule, a three-stage hybrid algorithm called the SLEK-BIH-TS algorithm is developed, based on the definition of a sophisticated neighbourhood structure under the mechanism of the Best-Insertion-Heuristic (BIH) algorithm and the Tabu Search (TS) metaheuristic. A case study is performed to optimise a complex real-world coal rail system in Australia. A method to calculate a lower bound on the makespan is proposed to evaluate the results. The results indicate that the proposed methodology is promising for finding optimal or near-optimal feasible train timetables for a coal rail system under network and terminal capacity constraints.
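The thesis's specific lower-bound calculation is not spelled out in the abstract; as an indication of the general idea, the sketch below computes a standard machine-load bound for job-shop-type problems, combining each machine's total work with the smallest "head" (work a job must do before reaching that machine) and smallest "tail" (work after it). All data are toy values.

```python
# A standard machine-load lower bound on the makespan of a job-shop-type
# instance: LB = max over machines of min(head) + total load + min(tail),
# also taking the largest total job length. Not the thesis's exact bound.
def makespan_lower_bound(jobs):
    """jobs: list of operation lists, each operation = (machine, duration)."""
    machines = {m for job in jobs for m, _ in job}
    best = max(sum(d for _, d in job) for job in jobs)  # job-based bound
    for m in machines:
        load, heads, tails = 0, [], []
        for job in jobs:
            for i, (mach, dur) in enumerate(job):
                if mach == m:
                    load += dur
                    heads.append(sum(d for _, d in job[:i]))
                    tails.append(sum(d for _, d in job[i + 1:]))
        if load:
            best = max(best, min(heads) + load + min(tails))
    return best

# Toy 3-job, 3-machine instance; operations are (machine, duration).
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3)]]
print(makespan_lower_bound(jobs))  # prints 10 (machine 1 is the bottleneck)
```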
Abstract:
For shop scheduling problems such as flow-shop, job-shop, open-shop, mixed-shop and group-shop, most research focuses on optimizing the makespan under static conditions and does not take into consideration dynamic disturbances such as machine breakdowns and new job arrivals. We regard the shop scheduling problem under static conditions as the static shop scheduling problem, and the shop scheduling problem with dynamic disturbances as the dynamic shop scheduling problem. In this paper, we analyze the characteristics of the dynamic shop scheduling problem when machine breakdowns and new job arrivals occur, and present a framework for modeling the dynamic shop scheduling problem as a static group-shop-type scheduling problem. Using the proposed framework, we apply a metaheuristic designed for the static shop scheduling problem to a number of dynamic shop scheduling benchmark problems. The results show that a metaheuristic methodology which has been successfully applied to static shop scheduling problems can also solve the dynamic shop scheduling problem efficiently.
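The framework's details are not given in the abstract; one common device it could rest on, sketched below under that assumption, is to freeze work already started at the rescheduling instant and represent the outage as a dummy job pre-assigned to the broken machine, leaving an ordinary static instance for the solver.

```python
# A hypothetical reformulation step, not the paper's actual framework:
# keep only the unstarted operations of each job and add a dummy
# "repair" job occupying the broken machine for the outage, so a static
# solver can reschedule everything that remains.
def to_static_instance(jobs, next_op, broken_machine, repair_time):
    """jobs: {job_id: [(machine, duration), ...]};
    next_op: {job_id: index of the first not-yet-started operation}."""
    remaining = {j: ops[next_op[j]:] for j, ops in jobs.items()}
    remaining = {j: ops for j, ops in remaining.items() if ops}
    # The outage becomes ordinary work pre-assigned to the broken machine.
    remaining["repair"] = [(broken_machine, repair_time)]
    # Newly arrived jobs would simply be appended here as ordinary jobs.
    return remaining

jobs = {"J1": [(0, 3), (1, 2)], "J2": [(1, 4), (0, 1)]}
print(to_static_instance(jobs, {"J1": 1, "J2": 0},
                         broken_machine=1, repair_time=5))
```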
Abstract:
In this paper, three metaheuristics are proposed for solving a class of job shop, open shop, and mixed shop scheduling problems. We evaluate the performance of the proposed algorithms on a set of Lawrence's benchmark instances for the job shop problem, a set of randomly generated instances for the open shop problem, and combined job shop and open shop test data for the mixed shop problem. The computational results show that the proposed algorithms perform extremely well on all three types of shop scheduling problems. The results also reveal that the mixed shop problem is relatively easier to solve than the job shop problem, because the inclusion of more open shop jobs makes the scheduling procedure more flexible.
Abstract:
In this paper, we propose three metaheuristic algorithms for the permutation flowshop (PFS) and general flowshop (GFS) problems. Two different neighborhood structures are used for these two types of flowshop problems: for the PFS problem, an insertion neighborhood structure is used, while for the GFS problem, a critical-path neighborhood structure is adopted. To evaluate the performance of the proposed algorithms, two sets of problem instances are tested for both types of flowshop problems. The computational results show that the proposed metaheuristic algorithms with the insertion neighborhood for the PFS problem perform slightly better than the corresponding algorithms with the critical-path neighborhood for the GFS problem, but in terms of computation time, the GFS algorithms are faster than the corresponding PFS algorithms.
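The insertion neighborhood named above is standard: remove one job from the permutation and try every other reinsertion point. A small sketch under that reading, with the usual permutation-flowshop makespan recursion C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j] used for evaluation; the instance data are made up.

```python
# Insertion neighborhood for the permutation flowshop, plus makespan
# evaluation via the standard completion-time recursion.
def makespan(perm, p):
    """p[job][machine] = processing time of the job on the machine."""
    n_mach = len(p[0])
    C = [0] * n_mach  # completion time of the latest job on each machine
    for job in perm:
        for m in range(n_mach):
            C[m] = max(C[m], C[m - 1] if m else 0) + p[job][m]
    return C[-1]

def insertion_neighbours(perm):
    """Yield every permutation reachable by moving one job."""
    for i in range(len(perm)):
        rest = perm[:i] + perm[i + 1:]
        for j in range(len(rest) + 1):
            if j != i:  # skip re-creating the original permutation
                yield rest[:j] + [perm[i]] + rest[j:]

# Toy instance: 4 jobs x 3 machines.
p = [[3, 2, 2], [1, 4, 2], [3, 2, 1], [2, 3, 3]]
perm = [0, 1, 2, 3]
best = min(insertion_neighbours(perm), key=lambda q: makespan(q, p))
print(best, makespan(best, p))
```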
Abstract:
Current research in secure messaging for Vehicular Ad hoc Networks (VANETs) appears to focus on employing a digital certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes a non-certificate-based public key management for VANETs. A comprehensive evaluation of performance and scalability of the proposed public key management regime is presented, which is compared to a certificate-based PKC by employing a number of quantified analyses and simulations. Not only does this paper demonstrate that the proposal can maintain security, but it also asserts that it can improve overall performance and scalability at a lower cost, compared to the certificate-based PKC. It is believed that the proposed scheme will add a new dimension to the key management and verification services for VANETs.
Abstract:
In this paper, a hardware-based path planning architecture for unmanned aerial vehicle (UAV) adaptation is proposed. The architecture aims to provide UAVs with higher autonomy using an application-specific evolutionary algorithm (EA) implemented entirely on a field programmable gate array (FPGA) chip. The physical attributes of an FPGA chip, compact in size and low in power consumption, make it an ideal platform for UAV applications. The design, implemented entirely in hardware, consists of EA modules, population storage resources, and the three-dimensional terrain information necessary to the path planning process, subject to constraints accounted for separately via UAV, environment and mission profiles. The architecture has been successfully synthesised for a target Xilinx Virtex-4 FPGA platform with 32% logic slice utilisation. Results obtained from case studies for a small UAV helicopter, with an environment derived from LIDAR (Light Detection and Ranging) data, verify the effectiveness of the proposed FPGA-based path planner and demonstrate convergence at rates above the typical 10 Hz update frequency of an autopilot system.
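As a software illustration of the kind of EA logic such a planner implements, the sketch below evolves waypoint paths over a made-up height map. The population size, mutation scheme, cost weights and grid are all illustrative assumptions; a real planner would use the LIDAR-derived terrain and profile constraints mentioned above.

```python
# A toy evolutionary path planner: truncation selection plus one-point
# waypoint mutation, minimising path length and terrain height crossed.
import random

SIZE, WAYPOINTS, POP, GENS = 20, 8, 30, 200
terrain = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
start, goal = (0, 0), (SIZE - 1, SIZE - 1)

def random_path():
    mid = [(random.randrange(SIZE), random.randrange(SIZE))
           for _ in range(WAYPOINTS)]
    return [start] + mid + [goal]

def cost(path):
    # Penalise both segment length and the terrain height entered.
    total = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        total += 5.0 * terrain[x2][y2]
    return total

def mutate(path):
    child = path[:]
    i = random.randrange(1, len(path) - 1)  # keep endpoints fixed
    child[i] = (random.randrange(SIZE), random.randrange(SIZE))
    return child

pop = [random_path() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    # Elites survive; mutated copies of elites refill the population.
    elites = pop[:POP // 2]
    pop = elites + [mutate(random.choice(elites))
                    for _ in range(POP - len(elites))]
print(cost(min(pop, key=cost)))
```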