862 results for 080201 Analysis of Algorithms and Complexity
Abstract:
This article provides an overview of a research project investigating contemporary literary representations of Melbourne’s inner and outer suburban spaces. It argues that the city represented by local writers is often a more complex way of envisioning the city than the one presented in public policy and cultural discourses. In this view, the writer’s vision of a city does not necessarily override or provide a “truer” account; rather, it is in the fictional city that the complexity of the everyday life of a city is most accurately portrayed. The article also provides an overview of the theoretical framework for reading the fictional texts in this way, examining how Soja’s concept of Thirdspace (2006) provides a place to engage “critically with theoretical issues, while simultaneously being that space where the debate occurs” (Mole 2008: 3).
Abstract:
The literature abounds with descriptions of failures in high-profile projects, and a range of initiatives has been generated to enhance project management practice (e.g., Morris, 2006). Estimating from our own research, there are scores of other project failures that go unrecorded. Many of these failures can be explained using existing project management theory: poor risk management, inaccurate estimating, cultures of optimism dominating decision making, stakeholder mismanagement, inadequate timeframes, and so on. Nevertheless, despite extensive discussion and analysis of failures and attention to their presumed causes, projects continue to fail in unexpected ways. In the 1990s, three U.S. state departments of motor vehicles (DMVs) cancelled major projects due to time and cost overruns and an inability to meet project goals (IT-Cortex, 2010). The California DMV failed to revitalize its drivers’ license and registration application process after spending $45 million. The Oregon DMV cancelled its five-year, $50 million project to automate its manual, paper-based operation after three years, when the estimate grew to $123 million, the projected duration stretched to eight years or more, and the prototype proved a complete failure. In 1997, the Washington State DMV cancelled its license application mitigation project because it would have been too big and obsolete by the time it was estimated to be finished. There are countless similar examples of projects that have been abandoned or that have not delivered their requirements.
Abstract:
Barmah Forest virus (BFV) disease is one of the most widespread mosquito-borne diseases in Australia. The number of outbreaks and the incidence rate of BFV disease in Australia have attracted growing concern about the spatio-temporal complexity and underlying risk factors of the disease. A large number of notifications has been recorded continuously in Queensland since 1992, yet little is known about the spatial and temporal characteristics of the disease. I aim to use notification data to better understand the effects of climatic, demographic, socio-economic and ecological risk factors on the spatial epidemiology of BFV disease transmission, to develop predictive risk models and to forecast future disease risks under climate change scenarios. Computerised data files of daily notifications of BFV disease and climatic variables in Queensland during 1992-2008 were obtained from Queensland Health and the Australian Bureau of Meteorology, respectively. Projections of climate data for the years 2025, 2050 and 2100 were obtained from the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Data on socio-economic, demographic and ecological factors were also obtained from the relevant government departments as follows: 1) socio-economic and demographic data from the Australian Bureau of Statistics; 2) wetlands data from the Department of Environment and Resource Management; and 3) tidal readings from the Queensland Department of Transport and Main Roads. Disease notifications were geocoded, and spatial and temporal patterns of disease were investigated using geostatistics. Visualisation of BFV disease incidence rates through mapping reveals substantial spatio-temporal variation at the statistical local area (SLA) level over time. Results reveal high incidence rates of BFV disease along coastal areas compared with the whole of Queensland.
A Mantel-Haenszel chi-square analysis for trend reveals a statistically significant relationship between BFV disease incidence rates and age groups (χ2 = 7587, p < 0.01). Semi-variogram analysis and smoothed maps created from interpolation techniques indicate that the pattern of spatial autocorrelation was not homogeneous across the state. A cluster analysis was used to detect hot spots/clusters of BFV disease at the SLA level. The most likely spatial and space-time clusters are detected at the same locations across coastal Queensland (p < 0.05). The study demonstrates heterogeneity of disease risk at the SLA level and reveals the spatial and temporal clustering of BFV disease in Queensland. Because the importance of wetlands in the transmission of BFV disease remains unclear, discriminant analysis was employed to establish a link between wetland classes, climate zones and BFV disease. The multivariable discriminant modelling analyses demonstrate that the wetland types saline 1, riverine and saline tidal influence were the most significant risk factors for BFV disease in all climate and buffer zones, while lacustrine, palustrine, estuarine, saline 2 and saline 3 wetlands were less important. The model accuracies were 76%, 98% and 100% for BFV risk in subtropical, tropical and temperate climate zones, respectively. This study demonstrates that BFV disease risk varied with wetland class and climate zone, and suggests that wetlands may act as potential breeding habitats for BFV vectors. Multivariable spatial regression models were applied to assess the impact of spatial climatic, socio-economic and tidal factors on BFV disease in Queensland. Spatial regression models were developed to account for spatial effects, and they generated superior estimates over a traditional regression model.
In the spatial regression models, BFV disease incidence shows an inverse relationship with minimum temperature, low tide and distance to coast, and a positive relationship with rainfall in coastal areas, whereas across the whole of Queensland the disease shows an inverse relationship with minimum temperature and high tide and a positive relationship with rainfall. This study determines the most significant spatial risk factors for BFV disease across Queensland. Empirical models were developed to forecast the future risk of BFV disease outbreaks in coastal Queensland using existing climatic, socio-economic and tidal conditions under climate change scenarios. Logistic regression models were developed using BFV disease outbreak data for the period 2000-2008. The most parsimonious model had high sensitivity, specificity and accuracy, and it was used to estimate and forecast BFV disease outbreaks for the years 2025, 2050 and 2100 under climate change scenarios for Australia. Important contributions arising from this research are that: (i) it innovatively identifies high-risk coastal areas by creating buffers based on grid centroids and using fine-grained spatial units, i.e., mesh blocks; (ii) it uses a spatial regression method to account for spatial dependence and heterogeneity of the data in the study area; (iii) it determines a range of potential spatial risk factors for BFV disease; and (iv) it predicts the future risk of BFV disease outbreaks under climate change scenarios in Queensland, Australia. In conclusion, the thesis demonstrates that the distribution of BFV disease exhibits distinct spatial and temporal variation, influenced by a range of spatial risk factors including climatic, demographic, socio-economic, ecological and tidal variables. The thesis demonstrates that spatial regression methods can be applied to better understand the transmission dynamics of BFV disease and its risk factors.
The research findings show that disease notification data can be integrated with multi-factorial risk factor data to build predictive models and forecast potential future disease risks under climate change scenarios. This thesis may have implications for BFV disease control and prevention programs in Queensland.
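The outbreak-forecasting step described above rests on fitting a logistic regression to binary outbreak indicators and judging it by sensitivity and specificity. A minimal sketch of that idea follows, using synthetic data and hypothetical predictors (rainfall, minimum temperature, tide level) rather than the thesis's actual variables or datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for outbreak data: rows = areas, columns = illustrative
# predictors (rainfall, minimum temperature, tide level) -- not the real data.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -1.0, 0.5])            # hypothetical effect directions
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.uniform(size=200) < p).astype(float)  # 1 = outbreak, 0 = no outbreak

# Fit logistic regression by plain gradient descent (no external ML library).
w = np.zeros(3)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (pred - y)) / len(y)

# Classify at a 0.5 threshold and compute sensitivity and specificity.
pred_cls = 1.0 / (1.0 + np.exp(-(X @ w))) >= 0.5
tp = np.sum(pred_cls & (y == 1)); fn = np.sum(~pred_cls & (y == 1))
tn = np.sum(~pred_cls & (y == 0)); fp = np.sum(pred_cls & (y == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

Forecasting then amounts to evaluating the fitted model on predictor values projected for future scenario years.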
Abstract:
The aim of this paper is to provide a comparison of various algorithms and parameters used to build reduced semantic spaces. The effect of dimension reduction, the stability of the representation and the effect of word order are examined in the context of five algorithms for constructing semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations and holographic reduced representations (HRR). The quality of semantic representation was tested by means of a synonym-finding task using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of semantic representation, but it is hard to find the optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. Encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence-in-context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
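As a rough illustration of the dimension-reduction step the paper compares, the following sketch builds reduced semantic vectors by random projection from a toy co-occurrence matrix; the matrix, the reduced dimensionality of 300 and the synonym item are placeholders, not the paper's corpus or settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy term-by-context co-occurrence matrix (terms x contexts).
# In the study this would be built from the TASA corpus.
X = rng.poisson(1.0, size=(1000, 5000)).astype(float)

k = 300  # reduced dimensionality (illustrative; the optimum is corpus-dependent)

# Random projection: multiply by a random Gaussian matrix scaled by 1/sqrt(k),
# which approximately preserves pairwise distances (Johnson-Lindenstrauss).
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
X_rp = X @ R  # (terms x k) reduced semantic vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A TOEFL-style synonym item: pick the candidate term closest to the probe.
probe, candidates = 0, [1, 2, 3, 4]
best = max(candidates, key=lambda i: cosine(X_rp[probe], X_rp[i]))
```

The instability the paper reports corresponds to the fact that a different random matrix R yields different (though distance-preserving) vectors.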
Abstract:
Chatrooms, for example Internet Relay Chat, are generally multi-user, multi-channel and multi-server chat systems which run over the Internet and provide a protocol for real-time text-based conferencing between users all over the world. While a well-trained human observer is able to understand who is chatting with whom, there are no efficient and accurate automated tools to determine the groups of users conversing with each other. A precursor to analysing evolving cyber-social phenomena is to first determine what the conversations are and which groups of chatters are involved in each conversation. We consider this problem in this paper. We propose an algorithm to discover all groups of users that are engaged in conversation. Our algorithm is based on a statistical model of a chatroom that is founded on our experience with real chatrooms. Our approach does not require any semantic analysis of the conversations; rather, it is based purely on the statistical information contained in the sequence of posts. We improve accuracy by applying graph algorithms to clean the statistical information. We present experimental results which indicate that one can automatically determine the conversing groups in a chatroom purely on the basis of statistical analysis.
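One simple statistical heuristic in the spirit of this approach (a sketch under invented thresholds, not the authors' actual model) links users whose posts repeatedly fall within a short time window of each other, then takes connected components of the resulting graph as the conversing groups:

```python
from collections import defaultdict
from itertools import combinations

# Toy post log: (timestamp, user). Only the sequence of posts is used --
# no semantic analysis of message content.
posts = [
    (0, "alice"), (2, "bob"), (3, "alice"), (4, "bob"),
    (50, "carol"), (52, "dave"), (55, "carol"), (57, "dave"),
]

WINDOW = 10     # hypothetical threshold: posts this close in time count as interaction
MIN_COUNT = 2   # hypothetical threshold: minimum interactions to keep an edge

# Count pairwise "posted within WINDOW of each other" events between users.
weights = defaultdict(int)
for (t1, u1), (t2, u2) in combinations(posts, 2):
    if u1 != u2 and abs(t1 - t2) <= WINDOW:
        weights[frozenset((u1, u2))] += 1

# Keep sufficiently frequent pairs (the "cleaning" step) and build a graph.
adj = defaultdict(set)
for pair, count in weights.items():
    if count >= MIN_COUNT:
        a, b = tuple(pair)
        adj[a].add(b)
        adj[b].add(a)

def components(adj):
    """Connected components of the user graph = candidate conversing groups."""
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n])
        groups.append(comp)
    return groups

groups = components(adj)
```

On this toy log the two interleaved exchanges separate cleanly into {alice, bob} and {carol, dave}.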
Abstract:
This work investigates the accuracy and efficiency trade-offs between centralized and collective (distributed) algorithms for (i) sampling and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show, using the real data, that collective analysis of 3-way data arrays (users x keywords x time), known as high-order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high-order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
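To make the data layout concrete, the following sketch builds a small 3-way count tensor (users x keywords x time) from toy chat events; the dimensions and events are invented for illustration, and the per-server extension to a 4-way tensor simply adds a fourth index:

```python
import numpy as np

# Toy chat events: (user_id, keyword_id, time_bin). In the paper, such events
# collected from IRC servers populate a 3-way array (users x keywords x time).
events = [(0, 1, 0), (0, 2, 0), (1, 1, 1), (2, 0, 1), (1, 1, 2)]

n_users, n_keywords, n_bins = 3, 3, 3
T = np.zeros((n_users, n_keywords, n_bins))
for u, k, t in events:
    T[u, k, t] += 1  # count occurrences per (user, keyword, time-bin) cell

# Mode-1 unfolding (users x (keywords * time)): the matricization that
# standard tensor decompositions operate on.
T1 = T.reshape(n_users, -1)
```

In the collective setting, each server would build such a tensor from its own sample and the partial results would be combined, rather than shipping all raw posts to one site.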
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure-path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify trade-offs among them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. The performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, which requires that key agreement algorithms without authentication be executed over a secure-path. The length of the secure-path impacts the power consumption and the initialization delay for a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that both are NP-Hard and MAX-SNP-Hard.
Having established these inapproximability results, we focus on the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm in which each pair of nodes can establish an authenticated key by using their neighbors as witnesses.
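As one concrete instance of the combinatorial-design idea mentioned above (a standard projective-plane construction over a prime order, not necessarily the thesis's exact scheme), key-chains can be generated so that any two chains share exactly one key, guaranteeing direct secure communication between every pair of nodes:

```python
def projective_plane_key_chains(q):
    """Key-chains from a projective plane of prime order q.

    Points are key IDs and lines are key-chains: q*q + q + 1 chains of
    q + 1 keys each, drawn from a pool of q*q + q + 1 keys, such that any
    two chains intersect in exactly one key.
    """
    affine = {(x, y): x * q + y for x in range(q) for y in range(q)}
    slope = {m: q * q + m for m in range(q)}  # one "ideal" point per slope
    infinity = q * q + q                      # the point at infinity
    chains = []
    for m in range(q):                        # non-vertical lines y = m*x + b
        for b in range(q):
            chains.append({affine[(x, (m * x + b) % q)] for x in range(q)}
                          | {slope[m]})
    for a in range(q):                        # vertical lines x = a
        chains.append({affine[(a, y)] for y in range(q)} | {infinity})
    chains.append(set(slope.values()) | {infinity})  # the line at infinity
    return chains

# q = 3: 13 key-chains of 4 keys each from a pool of 13 keys.
chains = projective_plane_key_chains(3)
```

The deterministic guarantee (exactly one shared key per pair of chains) is what probabilistic schemes trade away for flexibility in pool and chain sizes.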