233 results for Network scale-up method


Relevance:

30.00%

Publisher:

Abstract:

Business practices vary from one company to another, and they often need to change as business environments change. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing changes in business practices, enterprise systems need to be adapted. Because of their rigidity and complexity, the customization and adaptation of enterprise systems often take excessive time, with potential failures and budget overruns. Moreover, enterprise systems often hold business back because they cannot be rapidly adapted to support changes in business practices. Extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies. Those efforts were aimed at learning lessons from post-implementation experiences to help future projects. This research looks into the issue from a different angle. It attempts to address it by delivering a systematic method for developing flexible enterprise systems which can be easily tailored for different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models with software programs in the contexts of computer-aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research introduces the concept of model driven enterprise systems. The novelty of model driven enterprise systems is that system models are extracted from software programs and remain independent of them. In the paradigm of model driven enterprise systems, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models.
This mechanism exposes the opportunity to tailor such a system by changing its system models. To make this possible, system models should be represented in a language which can be easily understood by human beings and effectively interpreted by computers. In this research, various semantic representations are investigated to support model driven enterprise systems. The significance of this research is 1) the transplantation of the successful structure for flexibility in modern machines and WfMS to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behaviors. This research contributes to the field of enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which enterprise systems consist of two essential elements, system models and software programs, which are loosely coupled and can exist independently; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information processing logic in a semantic manner, and which are the key enabling techniques of model driven enterprise systems; and 3) a new role for system models: traditionally, the role of system models has been to guide developers in writing system source code; this research promotes system models to controlling the behaviors of enterprise systems.
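The core idea of the abstract, software whose behavior is produced by interpreting a separately stored system model, can be sketched minimally. The `model` structure and `evaluate` function below are illustrative assumptions, not artifacts of the research:

```python
def evaluate(record, model):
    """Generic engine: behaviour comes from interpreting the model,
    so a change of business practice means editing the model data,
    not this code."""
    outcomes = []
    for rule in model["rules"]:
        field, op, value = rule["if"]
        ops = {">": record[field] > value,
               "<": record[field] < value,
               "==": record[field] == value}
        if ops[op]:
            outcomes.append(rule["then"])
    return outcomes

# A hypothetical system model, kept separate from the program as plain data.
model = {
    "rules": [
        {"if": ("amount", ">", 1000), "then": "requires_approval"},
        {"if": ("amount", "<", 0), "then": "reject"},
    ],
}
```

Tailoring the system for a different business practice then means adding or editing rules in `model`, leaving `evaluate` untouched.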

Relevance:

30.00%

Publisher:

Abstract:

This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.

Relevance:

30.00%

Publisher:

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
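A minimal sketch of the basic uniformization scheme (without the paper's inexact matrix-vector relaxation) might look as follows; the truncation rule, stopping once the accumulated Poisson mass is within `tol` of one, is an assumption of this sketch:

```python
import numpy as np

def uniformization(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 * expm(Q t) via uniformization.

    Q is a CTMC generator (rows sum to zero), p0 an initial
    distribution.  The Poisson-weighted series is truncated once the
    accumulated Poisson mass is within tol of one.
    """
    rate = 1.05 * max(-np.diag(Q))      # uniformization rate >= max exit rate
    P = np.eye(len(p0)) + Q / rate      # stochastic matrix of the uniformized DTMC
    v = np.array(p0, dtype=float)
    weight = np.exp(-rate * t)          # Poisson(rate * t) mass at k = 0
    result = weight * v
    acc, k = weight, 0
    while 1.0 - acc > tol:
        v = v @ P                       # the dominant cost: a matrix-vector product
        k += 1
        weight *= rate * t / k
        result = result + weight * v
        acc += weight
    return result
```

The repeated `v @ P` products dominate the running time for large chains, which is exactly the step the paper's relaxation strategy computes only approximately.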

Relevance:

30.00%

Publisher:

Abstract:

The stochastic simulation algorithm was introduced by Gillespie and, in a different form, by Kurtz. There have been many attempts at accelerating the algorithm without deviating from the behavior of the simulated system. The crux of the explicit τ-leaping procedure is the use of Poisson random variables to approximate the number of occurrences of each type of reaction event during a carefully selected time period, τ. This method is acceptable provided the leap condition, that no propensity function changes “significantly” during any time-step, is met. With this method there is a possibility that species numbers can artificially become negative. Several recent papers have demonstrated methods that avoid this situation. One such method classifies as critical those reactions in danger of sending species populations negative. At most one of these critical reactions is allowed to occur in the next time-step. We argue that the criticality of a reactant species and its dependent reaction channels should be related to the probability of the species number becoming negative. This way, only reactions that, if fired, produce a high probability of driving a reactant population negative are labeled critical. The number of firings of more reaction channels can then be approximated using Poisson random variables, thus speeding up the simulation while maintaining accuracy. In implementing this revised method of criticality selection we make use of the probability distribution from which the random variable describing the change in species number is drawn. We give several numerical examples to demonstrate the effectiveness of our new method.
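For concreteness, a toy explicit τ-leaping loop for a single decay channel can be sketched as below. The hard cap on firings is only a crude stand-in for the criticality machinery the abstract discusses, and the whole example is an illustrative assumption, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_decay(x0, c, t_end, tau):
    """Explicit tau-leaping for the single decay channel A -> 0.

    Each leap draws the number of firings from Poisson(a(x) * tau),
    where a(x) = c * x is the propensity.  The crude cap below stops
    the population going negative; it stands in for the
    criticality-based selection discussed in the abstract.
    """
    x, t = x0, 0.0
    while t < t_end:
        k = rng.poisson(c * x * tau)  # approximate number of firings in this leap
        x -= min(k, x)                # never remove more molecules than exist
        t += tau
    return x
```

Averaged over many runs, the population should track the analytic mean x0 * exp(-c * t), up to the leap-size bias.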

Relevance:

30.00%

Publisher:

Abstract:

The calibration process in micro-simulation is extremely complicated. The difficulties are more prevalent if the process encompasses fitting both aggregate and disaggregate parameters, e.g. travel time and headway. The current practice in calibration is more at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulated calibration, at the disaggregate level. This study will focus on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and the management of variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane will be used as a case study. The discussion will primarily incorporate the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour parameters) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to that for the whole network, will also be briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, in particular at point level, is still comparatively unexplored territory in motorway management.


Relevance:

30.00%

Publisher:

Abstract:

In this paper a practical method based on graph theory and an improved genetic algorithm is employed to solve the optimal sectionalizer switch placement problem. The proposed method determines the best locations of sectionalizer switching devices in distribution networks, considering the effects of the presence of distributed generation (DG) in the fitness functions and other optimization constraints, so that the maximum number of customers can be supplied by distributed generation sources in islanded distribution systems after possible faults. The proposed method is simulated and tested on several distribution test systems, both with and without DG. The results of the simulations validate the proposed method for switch placement in distribution networks in the presence of distributed generation.
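A generic binary genetic algorithm of the kind such a method builds on can be sketched as follows. The chromosome encoding (one bit per candidate switch site) and the abstract `fitness` argument are assumptions for illustration; the paper's actual fitness involves DG islanding and restored customers:

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_switch_placement(fitness, n_sites, pop=40, gens=60):
    """Toy binary GA: each chromosome is a 0/1 vector marking which
    candidate locations receive a sectionalizer switch.  `fitness`
    scores a placement (here left abstract)."""
    P = rng.integers(0, 2, size=(pop, n_sites))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        P = P[np.argsort(scores)[::-1]]            # best individuals first
        elite = P[: pop // 2]                      # truncation selection
        cut = rng.integers(1, n_sites, size=pop // 2)
        mates = elite[rng.permutation(pop // 2)]
        children = np.where(np.arange(n_sites) < cut[:, None],
                            elite, mates)          # one-point crossover
        children = children ^ (rng.random(children.shape) < 0.02)  # bit-flip mutation
        P = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in P])
    return P[np.argmax(scores)]
```

With a toy fitness such as "place exactly three switches", the loop converges in a few dozen generations; a realistic fitness would instead simulate post-fault islanding on the test feeder.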

Relevance:

30.00%

Publisher:

Abstract:

An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which can automatically detect road surfaces in rural areas based on hierarchical image analysis. The procedure is facilitated by road centrelines obtained from low-resolution images. The lane markings are then extracted on the generated road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland. Evaluation of the generated road surfaces and lane markings on four representative test fields has validated the proposed method.
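A 2D Gabor kernel of the sort used for lane-marking extraction can be constructed directly. The parameterisation below (wavelength, orientation, isotropic Gaussian envelope) is a common textbook form and only an assumption about the paper's filters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: a cosine plane wave at
    orientation theta under an isotropic Gaussian envelope.  It
    responds strongly to bar-like structures (such as painted lane
    markings) of matching width and direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)     # coordinates rotated by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength)
```

In practice one would convolve the road-surface image with kernels at several orientations and keep the maximum response at each pixel.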

Relevance:

30.00%

Publisher:

Abstract:

In 1990 the Dispute Resolution Centres Act, 1990 (Qld) (the Act) was passed by the Queensland Parliament. In the second reading speech for the Dispute Resolution Centres Bill in May 1990, the Hon Dean Wells stated that the proposed legislation would make mediation services available “in a non-coercive, voluntary forum where, with the help of trained mediators, the disputants will be assisted towards their own solutions to their disputes, thereby ensuring that the result is acceptable to the parties” (Hansard, 1990, 1718). It was recognised at that time that a method for resolving disputes was necessary for which “the conventional court system is not always equipped to provide lasting resolution” (Hansard, 1990, 1717). In particular, the lasting resolution of “disputes between people in continuing relationships” was seen as made possible through the new legislation; for example, “domestic disputes, disputes between employees, and neighbourhood disputes relating to such issues as overhanging tree branches, dividing fences, barking dogs, smoke, noise and other nuisances are occurring continually in the community” (Hansard, 1990, 1717). The key features of the proposed form of mediation in the Act were articulated as follows: “attendance of both parties at mediation sessions is voluntary; a party may withdraw at any time; mediation sessions will be conducted with as little formality and technicality as possible; the rules of evidence will not apply; any agreement reached is not enforceable in any court; although it could be made so if the parties chose to proceed that way; and the provisions of the Act do not affect any rights or remedies that a party to a dispute has apart from the Act” (Hansard, 1990, 1718).
Since the introduction of the Act, the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney General has offered mediation services through, first the Community Justice Program (CJP), and then the Dispute Resolution Centres (DRCs) for a range of family, neighbourhood, workplace and community disputes. These services have mirrored those available through similar government agencies in other states such as the Community Justice Centres of NSW and the Victorian Dispute Resolution Centres. Since 1990, mediation has become one of the fastest growing forms of alternative dispute resolution (ADR). Sourdin has commented that "In addition to the growth in court-based and community-based dispute resolution schemes, ADR has been institutionalised and has grown within Australia and overseas” (2005, 14). In Australia, in particular, the development of ADR service provision “has been assisted by the creation and growth of professional organisations such as the Leading Edge Alternative Dispute Resolvers (LEADR), the Australian Commercial Dispute Centres (ACDC), Australian Disputes Resolution Association (ADRA), Conflict Resolution Network, and the Institute of Arbitrators and Mediators Australia (IAMA)” (Sourdin, 2005, 14). The increased emphasis on the use of ADR within education contexts (particularly secondary and tertiary contexts) has “also led to an increasing acceptance and understanding of (ADR) processes” (Sourdin, 2005, 14). Proponents of the mediation process, in particular, argue that much of its success derives from the inherent flexibility and creativity of the agreements reached through the mediation process and that it is a relatively low cost option in many cases (Menkel-Meadow, 1997, 417). 
It is also accepted that one of the main reasons for the success of mediation can be attributed to the high level of participation by the parties involved, thus creating a sense of ownership of, and commitment to, the terms of the agreement (Boulle, 2005, 65). These characteristics are associated with some of the core values of mediation, particularly as practised in community-based models as found at the DRCs. These core values include voluntary participation, party self-determination and party empowerment (Boulle, 2005, 65). For this reason mediation is argued to be an effective approach to resolving disputes that creates a lasting resolution of the issues. Evaluation of the mediation process, particularly in the context of the growth of ADR, has been an important aspect of the development of the process (Sourdin, 2008). Writing in 2005, for example, Boulle states that “although there is a constant refrain for more research into mediation practice, there has been a not insignificant amount of mediation measurement, both in Australia and overseas” (Boulle, 2005, 575). The positive claims of mediation have been supported to a significant degree by evaluations of the efficiency and effectiveness of the process. A common indicator of the effectiveness of mediation is the settlement rate achieved. High settlement rates for mediated disputes have been found for Australia (Altobelli, 2003) and internationally (Alexander, 2003). Boulle notes that mediation agreement rates claimed by service providers range from 55% to 92% (Boulle, 2005, 590). The annual reports for the Alternative Dispute Resolution Branch of the Queensland Department of Justice and Attorney-General considered prior to the commencement of this study generally indicated achievement of an approximate settlement rate of 86% by the Queensland Dispute Resolution Centres. More recently, the 2008-2009 annual report states that of the 2291 civil disputes mediated in 2007-2008, 86% reached an agreement.
Further, of the 2693 civil disputes mediated in 2008-2009, 73% reached an agreement. These results are noted in the report as indicating “the effectiveness of mediation in resolving disputes” and as reflecting “the high level of agreement achieved for voluntary mediations” (Annual Report, 2008-2009, online). Whilst the settlement rates for the DRCs are strong, parties are rarely contacted for long-term follow-up to assess whether agreements reached during mediation lasted to the satisfaction of each party. It has certainly been the case that the Dispute Resolution Centres of Queensland have not been resourced to conduct long-term follow-up assessments of mediation agreements. As Wade notes, “it is very difficult to compare ‘success’ rates”, and whilst “politicians want the comparison studies (they) usually do not want the delay and expense of accurate studies” (1998, 114). To date, therefore, it is fair to say that the efficiency of the mediation process has been evaluated but not necessarily its effectiveness. Rather, the practice at the Queensland DRCs has been to evaluate the quality of mediation service provision and of the practice of the mediation process. This has occurred, for example, through follow-up surveys of parties' satisfaction rates with the mediation service. In most other respects it is fair to say that the Centres have relied on the high settlement rates of the mediation process as a sign of the effectiveness of mediation (Annual Reports 1991 - 2010). Research of the mediation literature conducted for the purpose of this thesis has also indicated that there is little evaluative literature that provides an in-depth analysis and assessment of the longevity of mediated agreements.
Instead evaluative studies of mediation tend to assess how mediation is conducted, or compare mediation with other conflict resolution options, or assess the agreement rate of mediations, including parties' levels of satisfaction with the service provision of the dispute resolution service provider (Boulle, 2005, Chapter 16).

Relevance:

30.00%

Publisher:

Abstract:

Food is a vital foundation of all human life. It is essential to a myriad of political, socio-cultural, economic and environmental practices throughout history. However, those practices of food production, consumption, and distribution have the potential to go through immensely transformative shifts as network technologies become increasingly embedded in every domain of contemporary life. Information and communication technologies (ICTs) are one of the key foundations of global functionality and sustenance today and undoubtedly will continue to present new challenges and opportunities for the future. As such, this Symposium will bring together leading scholars across disciplines to address challenges and opportunities at the intersection of food and ICTs in everyday urban environments. In particular, the discussion will revolve around the question: what are the key roles that network technologies play in re-shaping food systems at micro- to macroscopic levels? The symposium will contribute a unique perspective on urban food futures through the lens of the network society paradigm, in which ICTs enable innovations in production, organisation, and communication within society. Some of the topics addressed will include encouraging transparency in food commodity chains; the value of cultural understanding and communication in global food sustainability; and technologies for social inclusion; all of which evoke and examine the question surrounding networked individuals as catalysts of change for urban food futures. The event will provide an avenue for new discussions and speculations on key issues surrounding urban food futures in the network era, with a particular focus on bottom-up micro actions that challenge the existing food systems towards broader sociocultural, political, technological, and environmental transformations.
One central area of concern is that current systems of food production, distribution, and consumption do not ensure food security for the future, but rather seriously threaten it. With the recent unprecedented scale of urban growth and the rise of the middle class, the problem continues to intensify. This situation requires extensive distribution networks to feed urban residents, and therefore poses significant infrastructural challenges to both the public and private sectors. The symposium will also address the transferability of the citizen empowerment that network technologies enable, as demonstrated in various significant global political transformations from the bottom up, such as the recent Egyptian Youth Revolution. Another key theme of the discussion will be the role of ICTs (and the practices that they mediate) in fostering transparency in commodity chains. The symposium will ask what differences these technologies can make to the practices of food consumption and production. After the discussions, we will initiate an international network of food-thinkers and actors that will function as a platform for knowledge sharing and collaborations. The participants will be invited to engage in planning for the ongoing future development of the network.

Relevance:

30.00%

Publisher:

Abstract:

Online dating, a new type of social network, is gaining popularity. With a large member base, users of a dating network are overloaded with choices of potential partners. Recommendation methods can be utilized to overcome this problem. However, traditional recommendation methods do not work effectively for online dating networks, where the dataset is sparse and large and two-way matching is required. This paper applies social networking concepts to the problem of developing a recommendation method for online dating networks. We propose a method that uses clustering, SimRank and adapted SimRank algorithms to recommend matching candidates. Empirical results show that the proposed method can achieve nearly double the performance of the traditional collaborative filtering and common neighbor methods of recommendation.
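Plain SimRank, the measure the proposed method adapts, can be sketched on a toy graph. The in-neighbour adjacency format and the decay constant C = 0.8 below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def simrank(in_nbrs, C=0.8, iters=10):
    """Basic SimRank on a directed graph given as a dict mapping each
    node to its list of in-neighbours: s(a, a) = 1 and
    s(a, b) = C / (|I(a)| * |I(b)|) * sum of s over in-neighbour pairs."""
    nodes = list(in_nbrs)
    idx = {n: i for i, n in enumerate(nodes)}
    S = np.eye(len(nodes))
    for _ in range(iters):
        S_new = np.eye(len(nodes))
        for a in nodes:
            for b in nodes:
                if a == b or not in_nbrs[a] or not in_nbrs[b]:
                    continue  # similarity stays 0 when either side has no in-links
                total = sum(S[idx[u], idx[v]]
                            for u in in_nbrs[a] for v in in_nbrs[b])
                S_new[idx[a], idx[b]] = C * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
        S = S_new
    return nodes, S
```

Intuitively, two users are similar if they are pointed to by similar users; a dating-specific adaptation would further account for reciprocal (two-way) interest.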

Relevance:

30.00%

Publisher:

Abstract:

The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements is able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that ‘grounding’ of modeled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and those animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.

Relevance:

30.00%

Publisher:

Abstract:

Teacher professional development provided by education advisors as one-off, centrally offered sessions does not always result in change in teacher knowledge, beliefs, attitudes or practice in the classroom. As the mathematics education advisor in this study, I set out to investigate a particular method of professional development so as to influence change in a practising classroom teacher’s knowledge and practices. The particular method of professional development utilised in this study was based on several principles of effective teacher professional development and saw me working regularly in a classroom with the classroom teacher as well as providing ongoing support for her for a full school year. The intention was to document the effects of this particular method of professional development in terms of the classroom teacher’s and my professional growth to provide insights for others working as education advisors. The professional development for the classroom teacher consisted of two components. The first was the co-operative development and implementation of a mental computation instructional program for the Year 3 class. The second component was the provision of ongoing support for the classroom teacher by the education advisor. The design of the professional development and the mental computation instructional program were progressively refined throughout the year. The education advisor fulfilled multiple roles in the study as teacher in the classroom, teacher educator working with the classroom teacher and researcher. 
Examples of the professional growth of the classroom teacher and the education advisor which occurred as sequences of changes (growth networks, Hollingsworth, 1999) in the domains of the professional world of the classroom teacher and education advisor were drawn from the large body of data collected through regular face-to-face and email communications between the classroom teacher and the education advisor as well as from transcripts of a structured interview. The Interconnected Model of Professional Growth (Clarke & Hollingsworth, 2002; Hollingsworth, 1999) was used to summarise and represent each example of the classroom teacher’s professional growth. A modified version of this model was used to summarise and represent the professional growth of the education advisor. This study confirmed that the method of professional development utilised could lead to significant teacher professional growth related directly to her work in the classroom. Using the Interconnected Model of Professional Growth to summarise and represent the classroom teacher’s professional growth and the modified version for my professional growth assisted with the recognition of examples of how we both changed. This model has potential to be used more widely by education advisors when preparing, implementing, evaluating and following-up on planned teacher professional development activities. The mental computation instructional program developed and trialled in the study was shown to be a successful way of sequencing and managing the teaching of mental computation strategies and related number sense understandings to Year 3 students. This study was conducted in one classroom, with one teacher in one school. The strength of this study was the depth of teacher support provided made possible by the particular method of the professional development, and the depth of analysis of the process. In another school, or with another teacher, this might not have been as successful. 
While I set out to change my practice as an education advisor I did not expect the depth of learning I experienced in terms of my knowledge, beliefs, attitudes and practices as an educator of teachers. This study has changed the way in which I plan to work as an education advisor in the future.

Relevance:

30.00%

Publisher:

Abstract:

Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
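A radius-based random sequential box-covering pass, in the spirit of the algorithm the thesis uses to estimate fractal dimensions, might look like this (a simplified sketch under the stated assumptions, not the thesis implementation):

```python
import random
from collections import deque

def box_count(adj, l_b, rng=random.Random(0)):
    """One random sequential box-covering pass: repeatedly pick an
    uncovered node as a box centre and cover every node within graph
    distance < l_b of it.  Fitting N_B(l_b) ~ l_b ** (-d_B) over a
    range of box sizes estimates the box dimension d_B.
    adj maps node -> list of neighbours."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = rng.choice(sorted(uncovered))
        dist = {seed: 0}
        queue = deque([seed])
        while queue:                      # BFS out to distance l_b - 1
            u = queue.popleft()
            if dist[u] + 1 >= l_b:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        uncovered -= dist.keys()
        boxes += 1
    return boxes
```

Because the centres are chosen randomly, N_B is usually averaged over several passes before the log-log fit.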
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behavior of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of the network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: nodes are defined by vectors of a certain length taken from the time series, and the weight of the edge between any two nodes is the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently, and confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalised fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
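The two constructions described here can be sketched briefly. The first function follows the thesis's stated recipe (sliding length-m windows as nodes, Euclidean distance as edge weight); the second is the standard horizontal visibility graph criterion, under which two points are linked iff every value strictly between them lies below both. Both are minimal sketches, not the thesis's implementations.

```python
import math

def vector_distance_network(series, m):
    """Nodes are the length-m sliding windows of the series; the
    weight of the edge between two nodes is the Euclidean distance
    between the corresponding windows."""
    vectors = [series[i:i + m] for i in range(len(series) - m + 1)]
    return {(i, j): math.dist(vectors[i], vectors[j])
            for i in range(len(vectors))
            for j in range(i + 1, len(vectors))}

def horizontal_visibility_graph(series):
    """HVG: points i < j are linked iff x_k < min(x_i, x_j) for
    every intermediate index i < k < j."""
    edges = set()
    for i in range(len(series) - 1):
        edges.add((i, i + 1))   # consecutive points always see each other
        top = series[i + 1]     # running max of the intermediate values
        for j in range(i + 2, len(series)):
            if top < series[i] and top < series[j]:
                edges.add((i, j))
            top = max(top, series[j])
    return edges
```

For a fractional Brownian motion sample path, the HVG produced this way is the network whose fractal dimension relates linearly to the Hurst exponent.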
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts, while the degree distribution of HVG networks of fractional Brownian motions has exponential tails, implying that HVG networks would not survive the same kind of attack.
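The attack experiment behind this comparison can be mimicked with a short sketch: remove a fraction of the highest-degree nodes and measure how much of the network survives as one connected piece. This is an illustrative targeted-attack procedure, not the thesis's exact protocol; `adj` is an assumed adjacency mapping.

```python
def largest_component_fraction(adj):
    """Relative size of the largest connected component."""
    if not adj:
        return 0.0
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:                    # depth-first flood fill
            u = stack.pop()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best / len(adj)

def degree_attack(adj, fraction):
    """Delete the top `fraction` of nodes by degree and report the
    relative size of the largest surviving component."""
    k = int(len(adj) * fraction)
    doomed = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k])
    pruned = {u: [v for v in nbrs if v not in doomed]
              for u, nbrs in adj.items() if u not in doomed}
    return largest_component_fraction(pruned)
```

A scale-free network degrades gracefully under this attack only up to a point, whereas a network with exponential degree tails fragments much sooner, which is the contrast drawn above.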

Resumo:

This thesis presents the outcomes of a comprehensive research study undertaken to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The knowledge created is expected to contribute to a greater understanding of urban stormwater quality and thereby enhance the design of stormwater quality treatment systems. The research study was based on selected urban catchments in Gold Coast, Australia. The research methodology included field investigations, laboratory testing, computer modelling and data analysis. Both univariate and multivariate data analysis techniques were used to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The rainfall characteristics investigated were average rainfall intensity and rainfall duration, whilst the catchment characteristics were land use, impervious area percentage, urban form and pervious area location. The catchment-scale data for the analysis was obtained from four residential catchments and included rainfall-runoff records, drainage network data, stormwater quality data, and land use and land cover data. Pollutant build-up samples were collected from twelve road surfaces in residential, commercial and industrial land use areas. The relationships between rainfall characteristics, catchment characteristics and urban stormwater quality were first investigated for residential catchments and then extended to other land uses. Based on the influence that rainfall characteristics exert on urban stormwater quality, rainfall events can be classified into three types: high average intensity-short duration (Type 1), high average intensity-long duration (Type 2) and low average intensity-long duration (Type 3). This provides an innovative alternative to conventional modelling, which does not commonly relate stormwater quality to rainfall characteristics.
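The three-way event typing can be expressed as a simple rule on average intensity and duration. The numeric thresholds below are illustrative placeholders only; the thesis derives its classification boundaries from the measured catchment data, and the units assumed here (mm/h, hours) are for the sketch.

```python
def classify_rainfall_event(avg_intensity_mm_per_h, duration_h,
                            intensity_threshold=40.0, duration_threshold=1.0):
    """Assign a rainfall event to one of the three types identified
    in the study, using placeholder (non-thesis) thresholds."""
    high_intensity = avg_intensity_mm_per_h >= intensity_threshold
    long_duration = duration_h >= duration_threshold
    if high_intensity and not long_duration:
        return "Type 1"     # high average intensity, short duration
    if high_intensity and long_duration:
        return "Type 2"     # high average intensity, long duration
    if not high_intensity and long_duration:
        return "Type 3"     # low average intensity, long duration
    return "unclassified"   # low intensity, short duration
```

Under this scheme a brief intense storm is Type 1, the event class found below to dominate the annual pollutant load.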
Additionally, it was found that the threshold intensity for pollutant wash-off from urban catchments is much lower than for rural catchments. High average intensity-short duration rainfall events are cumulatively responsible for generating a major fraction of the annual pollutant load compared to the other rainfall event types. Additionally, rainfall events of less than 1-year ARI, such as 6-month ARI, should be considered in treatment design as they generate a significant fraction of the annual runoff volume and, by implication, a significant fraction of the pollutant load. This implies that stormwater treatment designs based on larger rainfall events would not be feasible in terms of cost-effectiveness, treatment performance and possible savings in the land area needed. It also suggests that simulating long-term continuous rainfall events for stormwater treatment design may not be necessary and that event-based simulations would be adequate. The investigations into the relationship between catchment characteristics and urban stormwater quality found that, beyond conventional catchment characteristics such as land use and impervious area percentage, other characteristics such as urban form and pervious area location also play important roles in influencing urban stormwater quality. These outcomes indicate that the conventional modelling approach to the design of stormwater quality treatment systems, which is commonly based on land use and impervious area percentage alone, would be inadequate. It was also noted that small, uniformly urbanised areas within a larger mixed catchment produce relatively lower variations in stormwater quality and, as expected, lower runoff volume, with the opposite being the case for large mixed-use urbanised catchments. Therefore, a decentralised approach to water quality treatment would be more effective than an "end-of-pipe" approach.
The investigation of pollutant build-up on different land uses showed that pollutant build-up characteristics vary even within the same land use. Therefore, the conventional approach to stormwater quality modelling, which is based solely on land use, may prove to be inappropriate. Industrial land use showed relatively higher variability in maximum pollutant build-up, build-up rate and particle size distribution than the other two land uses, whereas commercial and residential land uses showed relatively higher variability in nutrient and organic carbon build-up. Additionally, particle size distribution had relatively higher variability for all three land uses compared to the other build-up parameters. This high variability in particle size distribution illustrates the dissimilarities between the fine and coarse particle size fractions even within the same land use, and hence the variation in stormwater quality arising from pollutants adsorbing to particles of different sizes.