Abstract:
Overvoltage and overloading due to high utilization of PVs are the main power quality concerns for future distribution power systems. This paper proposes a distributed control coordination strategy to manage multiple PVs within a network and overcome these issues. PV reactive power is used to deal with overvoltages, and PV active power curtailment is regulated to avoid overloading. The proposed control structure shares the required contribution fairly among PVs, in proportion to their ratings. This approach is examined on a practical distribution network with multiple PVs.
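For illustration only, the proportional-sharing idea can be sketched as allocating a total required contribution among PV units according to their ratings; the function name and units below are hypothetical and this is not the paper's distributed controller, just a minimal sketch of the sharing rule.

def share_proportionally(total_required, ratings):
    """Split a total required contribution (e.g. reactive power in kvar, or
    active power to curtail in kW) among PV units in proportion to their
    ratings. Illustrative sketch of proportional fair sharing only."""
    total_rating = sum(ratings)
    return [total_required * r / total_rating for r in ratings]

# Example: 30 kvar of reactive support shared among PVs rated 5, 10 and 15 kVA
print(share_proportionally(30.0, [5.0, 10.0, 15.0]))  # -> [5.0, 10.0, 15.0]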
Abstract:
Understanding the dynamics of disease spread is of crucial importance, in contexts ranging from estimating the load on medical services to risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information is available only after the spread itself, and preemptive assessment is far from trivial. Here, we investigate the use of agent-based simulations to model such outbreaks in a stylised urban environment. For most diseases, infection of a new individual may occur from casual contact in crowds as well as from repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for these two phenomena. Presented in this paper is the initial framework for such a model, detailing the implementation of geographical features and the generation of social structures. Preliminary results are a promising step towards large-scale simulations and the evaluation of potential intervention policies.
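One possible reading of the two infection channels (casual crowd contact and repeated social contact) is sketched below; the data structures, probabilities and names are assumptions for illustration, not the paper's implementation.

import random

def infection_step(agents, locations, social_edges, p_casual=0.02, p_social=0.10):
    """One illustrative time step with two infection channels: casual contact
    with infected agents sharing a location, and repeated contact with infected
    social partners. 'agents' is a list indexed by agent id, each a dict with
    'id', 'state' ('S' or 'I') and 'location'; 'locations' maps a place to the
    agents currently there; 'social_edges' maps an agent id to partner ids."""
    newly_infected = set()
    for agent in agents:
        if agent["state"] != "S":
            continue
        # Casual contact: each infected agent at the same location is one exposure.
        exposures = sum(1 for other in locations[agent["location"]] if other["state"] == "I")
        if random.random() < 1.0 - (1.0 - p_casual) ** exposures:
            newly_infected.add(agent["id"])
            continue
        # Repeated social contact: infected partners transmit with higher probability.
        for partner_id in social_edges.get(agent["id"], []):
            if agents[partner_id]["state"] == "I" and random.random() < p_social:
                newly_infected.add(agent["id"])
                break
    for agent in agents:
        if agent["id"] in newly_infected:
            agent["state"] = "I"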
Abstract:
Several algorithms and techniques widely used in Computer Science have been adapted from, or inspired by, known biological phenomena. This is a consequence of the multidisciplinary background of most early computer scientists. The field has now matured, and permits the development of tools and collaborative frameworks which play a vital role in advancing current biomedical research. In this paper, we briefly present examples of the former, and elaborate upon two of the latter, applied to immunological modelling and as a new paradigm in gene expression.
Abstract:
In this paper, we investigate the effect of mobility constraints on epidemic broadcast mechanisms in DTNs (Delay-Tolerant Networks). The major factors affecting epidemic broadcast performance are the forwarding algorithm and node mobility. The impact of the forwarding algorithm and node mobility on epidemic broadcast mechanisms has been actively studied in the literature, but those studies generally use unconstrained mobility models. The objective of this paper is therefore to quantitatively investigate the effect of mobility constraints on epidemic broadcast mechanisms. We evaluate the performance of three classes of epidemic broadcast mechanisms - P-BCAST (PUSH-based BroadCast), SA-BCAST (Self-Adaptive BroadCast), and HP-BCAST (History-based P-BCAST) - with a random waypoint mobility model subject to mobility constraints. Our findings include that the existence of mobility constraints significantly improves the reachability and dissemination speed of epidemic broadcast mechanisms while degrading their efficiency.
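As a toy illustration of the push-based family only: a P-BCAST-style step can be sketched as every node holding the message pushing it to all neighbours currently in radio range. The self-adaptive and history-based variants (SA-BCAST, HP-BCAST) add mechanisms omitted here, and all names below are hypothetical.

def push_broadcast_step(has_message, positions, radio_range):
    """One illustrative P-BCAST-style step. 'has_message' maps node id -> bool,
    'positions' maps node id -> (x, y). Every current holder pushes the message
    to all nodes within radio range. Mobility and duplicate suppression are not
    modelled in this sketch."""
    def in_range(a, b):
        (xa, ya), (xb, yb) = positions[a], positions[b]
        return (xa - xb) ** 2 + (ya - yb) ** 2 <= radio_range ** 2

    holders = {n for n, holds in has_message.items() if holds}
    for src in holders:
        for dst in has_message:
            if dst not in holders and in_range(src, dst):
                has_message[dst] = True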
Abstract:
In some delay-tolerant communication systems such as vehicular ad-hoc networks, information flow can be represented as an infectious process, where each entity that has already received the information tries to share it with its neighbours. The random walk and random waypoint models are popular analysis tools for these epidemic broadcasts, and represent two types of random mobility. In this paper, we introduce a simulation framework investigating the impact of a gradual increase of bias in path selection (i.e. a reduction of randomness) when moving from the former to the latter. Randomness in path selection can significantly alter system performance, in both regular and irregular network structures. The implications of these results for real systems are discussed in detail.
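One simple way to picture a gradual increase of bias in path selection is a single parameter interpolating between a pure random walk and movement towards a waypoint; the grid model, function name and bias parameter below are assumptions for illustration, not the framework described in the paper.

import random

def next_position(pos, waypoint, bias):
    """Illustrative biased step on a grid: with probability 'bias', move one cell
    towards the current waypoint (random-waypoint-like); otherwise take a
    uniformly random step (random-walk-like). bias=0 gives a pure random walk,
    bias=1 a direct walk to the waypoint."""
    x, y = pos
    if random.random() < bias:
        tx, ty = waypoint
        dx = (tx > x) - (tx < x)   # sign of the horizontal offset to the waypoint
        dy = (ty > y) - (ty < y)   # sign of the vertical offset to the waypoint
        return (x + dx, y + dy)
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (x + dx, y + dy)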
Abstract:
One of the main challenges in data analytics is that discovering structures and patterns in complex datasets is a compute-intensive task. Recent advances in high-performance computing provide part of the solution. Multicore systems are now more affordable and more accessible. In this paper, we investigate how this can be used to develop more advanced methods for data analytics. We focus on two specific areas: model-driven analysis and data mining using optimisation techniques.
Abstract:
As computational models in fields such as medicine and engineering become more refined, resource requirements increase. In the first instance, these needs have been satisfied using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility. HPC users are therefore tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while the performance of virtual CPUs (vCPUs) is satisfactory, network throughput may lead to difficulties.
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
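As a minimal sketch of one common parallelisation (evaluating the fitness of the population with a process pool), not necessarily the scheme used in the paper: the toy objective, parameters and structure below are assumptions.

from multiprocessing import Pool
import random

def fitness(individual):
    # Placeholder objective for the sketch: maximise the number of ones.
    return sum(individual)

def parallel_ga(pop_size=64, length=32, generations=50, workers=4):
    """Minimal generational GA with fitness evaluated in parallel (sketch only)."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, population)        # parallel evaluation
            ranked = [ind for _, ind in sorted(zip(scores, population), reverse=True)]
            parents = ranked[: pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]                 # one-point crossover
                if random.random() < 0.05:                # mutation
                    i = random.randrange(length)
                    child[i] ^= 1
                children.append(child)
            population = children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(parallel_ga())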
Abstract:
By providing simultaneous information on expression profiles for thousands of genes, microarray technologies have, in recent years, been widely used to investigate mechanisms of gene expression. Clustering and classification of such data can, indeed, highlight patterns and provide insight into biological processes. A common approach is to consider the genes and samples of microarray datasets as nodes in a bipartite graph, where edges are weighted, e.g. based on the expression levels. In this paper, using a previously evaluated weighting scheme, we focus on search algorithms and evaluate, in the context of biclustering, several variations of Genetic Algorithms. We also introduce a new heuristic, “Propagate”, which consists in recursively evaluating neighbour solutions with one more or one fewer active condition. The results obtained on three well-known datasets show that, for a given weighting scheme, optimal or near-optimal solutions can be identified.
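One possible reading of a "Propagate"-style neighbourhood is a local search that toggles a single condition on or off and keeps moving while the bicluster score improves; the sketch below is an interpretation under that assumption, not the heuristic as defined in the paper.

def propagate(active_conditions, all_conditions, score):
    """Illustrative local search over sets of active conditions: evaluate
    neighbour solutions with one more or one fewer active condition and move to
    the best improving neighbour until no improvement is found. 'score' is a
    user-supplied bicluster scoring function (hypothetical here)."""
    best, best_score = set(active_conditions), score(active_conditions)
    improved = True
    while improved:
        improved = False
        for cond in all_conditions:
            neighbour = best ^ {cond}        # toggle: add if absent, drop if present
            s = score(neighbour)
            if s > best_score:
                best, best_score = neighbour, s
                improved = True
    return best, best_score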
Abstract:
This study aimed at presenting the intra-tester reliability of static load bearing exercises (LBEs) performed by individuals with transfemoral amputation (TFA) fitted with an osseointegrated implant to stimulate the bone remodelling process. There is a need for a better understanding of the implementation of these exercises, particularly their reliability. The intra-tester reliability is discussed with a particular emphasis on inter-load prescribed, inter-axis and inter-component reliabilities, as well as the effect of body weight normalisation. Eleven individuals with unilateral TFA fitted with an OPRA implant performed five trials in four loading conditions. The forces and moments on the three axes of the implant were measured directly with an instrumented pylon including a six-channel transducer. The reliability of the loading variables was assessed using intraclass correlation coefficients (ICCs) and percentage standard error of measurement values (%SEMs). The ICCs of all variables were above 0.9, and the %SEM values ranged between 0 and 87%. This study showed a high between-participant variance, highlighting the lack of loading consistency typical of a symptomatic population, as well as a high reliability between the loading sessions, indicating a plausibly correct repetition of the LBEs by the participants. However, these outcomes must be understood within the framework of the proposed experimental protocol.
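For context only, a common way to derive %SEM from a reliability coefficient uses SEM = SD * sqrt(1 - ICC) and %SEM = 100 * SEM / mean; the sketch below uses these textbook definitions with hypothetical values and does not assume the study's exact ICC model or computation.

import math

def sem_percent(mean, sd, icc):
    """Standard error of measurement and its percentage form, using the common
    definitions SEM = SD * sqrt(1 - ICC) and %SEM = 100 * SEM / mean.
    Illustrative only; the specific ICC form used in the study is not assumed."""
    sem = sd * math.sqrt(1.0 - icc)
    return sem, 100.0 * sem / mean

# Example with hypothetical values: mean load 250 N, SD 40 N, ICC 0.92
print(sem_percent(250.0, 40.0, 0.92))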
Abstract:
As a result of the more distributed nature of organisations and the inherently increasing complexity of their business processes, a significant effort is required for the specification and verification of those processes. The composition of activities into a business process that accomplishes a specific organisational goal has primarily been a manual task. Automated planning is a branch of artificial intelligence (AI) in which activities are selected and organised by anticipating their expected outcomes with the aim of achieving some goal. As such, automated planning would seem to be a natural fit for the business process management (BPM) domain to automate the specification of control flow. A number of attempts have been made to apply automated planning to the business process and service composition domain at different stages of the BPM lifecycle. However, a unified adoption of these techniques throughout the BPM lifecycle is missing. We therefore propose a new intention-centric BPM paradigm, which aims to minimise the specification effort by exploiting automated planning techniques to achieve a pre-stated goal. This paper provides a vision of the future possibilities of enhancing BPM using automated planning. A research agenda is presented, which provides an overview of the opportunities and challenges for the exploitation of automated planning in BPM.
Abstract:
Nth-Dimensional Truncated Polynomial Ring (NTRU) is a lattice-based public-key cryptosystem that offers encryption and digital signature solutions. It was designed by Silverman, Hoffstein and Pipher. The NTRU cryptosystem was patented by NTRU Cryptosystems Inc. (which was later acquired by Security Innovation) and is available as the IEEE 1363.1 and X9.98 standards. NTRU is resistant to attacks based on quantum computing, to which the standard RSA and ECC public-key cryptosystems are vulnerable. In addition, NTRU has performance advantages over these cryptosystems. Given the importance of NTRU, it is highly recommended to adopt NTRU as part of a cipher suite, along with widely used cryptosystems, for internet security protocols and applications. In this paper, we present our analytical study of the implementation of the NTRU encryption scheme, which serves as a guideline for security practitioners who are new to lattice-based cryptography or even to cryptography. In particular, we show some non-trivial issues that should be considered towards a secure and efficient NTRU implementation.
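For background only: the core arithmetic of NTRU is cyclic convolution multiplication in the ring Z[x]/(x^N - 1), with coefficients reduced modulo q. The textbook sketch below illustrates that operation with toy, insecure parameters; it is not a secure, complete or constant-time implementation and is not drawn from the paper.

def convolution_mod_q(f, g, N, q):
    """Multiply two polynomials of degree < N in Z[x]/(x^N - 1), reducing
    coefficients modulo q. This cyclic convolution is the core operation used in
    NTRU key generation, encryption and decryption. Textbook sketch only."""
    h = [0] * N
    for i in range(N):
        for j in range(N):
            h[(i + j) % N] = (h[(i + j) % N] + f[i] * g[j]) % q
    return h

# Tiny example with toy parameters (insecure): N = 5, q = 7
print(convolution_mod_q([1, 1, 0, 0, 1], [0, 1, 0, 1, 0], 5, 7))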
Abstract:
Moderation of assessment constitutes a crucial element of the learning and teaching process at the university. Yet, despite its importance, many academics hold confused beliefs and attitudes towards moderation practices, processes and procedures. This paper reports on a qualitative study conducted in a Science, Technology, Engineering and Mathematics (STEM)-focused faculty at a large Australian higher education institution. The findings of the study revealed a strong need for further investigation into the ways moderation is understood and enacted by academics within a STEM-specific context, and informed the redevelopment of the faculty’s internal moderation policy.
Abstract:
Experimental studies have found that when state-of-the-art probabilistic linear discriminant analysis (PLDA) speaker verification systems are trained using out-domain data, speaker verification performance is significantly affected due to the mismatch between development data and evaluation data. To overcome this problem, we propose a novel unsupervised inter-dataset variability (IDV) compensation approach to compensate for the dataset mismatch. The IDV-compensated PLDA system achieves over 10% relative improvement in EER over the out-domain PLDA system by effectively compensating for the mismatch between in-domain and out-domain data.
Abstract:
This paper presents an efficient non-iterative method for distribution state estimation using the conditional multivariate complex Gaussian distribution (CMCGD). In the proposed method, the mean and standard deviation (SD) of the state variables are obtained in one step, considering load uncertainties, measurement errors, and load correlations. In this method, first the bus voltages, branch currents, and injection currents are represented by the MCGD using a direct load flow and a linear transformation. Then, the mean and SD of the bus voltages, or other states, are calculated using the CMCGD and the estimation of variance method. The mean and SD of pseudo-measurements, as well as spatial correlations between pseudo-measurements, are modelled based on historical data for different levels of the load duration curve. The proposed method can handle load uncertainties without using time-consuming approaches such as Monte Carlo simulation. Simulation results for two case studies, a six-bus and a realistic 747-bus distribution network, show the effectiveness of the proposed method in terms of speed, accuracy, and quality against the conventional approach.
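For context only: the conditioning step at the heart of such an estimator is the standard result for jointly Gaussian vectors, stated here in LaTeX for the real-valued case (the paper works with complex Gaussian distributions, whose conditional form is analogous); this is background material, not the paper's derivation.

\begin{aligned}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
&\sim \mathcal{N}\!\left(
\begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix},
\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
\right), \\
x_1 \mid (x_2 = a)
&\sim \mathcal{N}\!\left(
\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a - \mu_2),\;
\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
\right).
\end{aligned}

Here $x_2$ would collect the measured or pseudo-measured quantities and $x_1$ the states to be estimated; the conditional mean and covariance give the one-step estimate and its spread.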