921 results for Dimension fractale
Abstract:
This special issue presents an excellent opportunity to study applied epistemology in public policy. This is an important task because the arena of public policy is the social domain in which macro conditions for ‘knowledge work’ and ‘knowledge industries’ are defined and created. We argue that knowledge-related public policy has become overly concerned with creating the politico-economic parameters for the commodification of knowledge. Our policy scope is broader than that of Fuller (1988), who emphasizes the need for a social epistemology of science policy. We extend our focus to a range of policy documents that include communications, science, education and innovation policy (collectively called knowledge-related public policy in acknowledgement of the fact that there is no defined policy silo called ‘knowledge policy’), all of which are central to policy concerned with the ‘knowledge economy’ (Rooney and Mandeville, 1998). However, what we will show here is that, as Fuller (1995) argues, ‘knowledge societies’ are not industrial societies permeated by knowledge; rather, knowledge societies are permeated by industrial values. Our analysis is informed by an autopoietic perspective. Methodologically, we approach the analysis from a sociolinguistic position that acknowledges the centrality of language to human societies (Graham, 2000). Here, what we call ‘knowledge’ is posited as a social and cognitive relationship between persons operating on and within multiple social and non-social (or, crudely, ‘physical’) environments. Moreover, knowing, we argue, is a sociolinguistically constituted process. Further, we emphasize that the evaluative dimension of language is most salient for analysing contemporary policy discourses about the commercialization of epistemology (Graham, in press). Finally, we provide a discourse analysis of a sample of exemplary texts drawn from a 1.3 million-word corpus of knowledge-related public policy documents that we compiled from local, state, national and supranational legislatures throughout the industrialized world. Our analysis exemplifies a propensity in policy for resorting to technocratic, instrumentalist and anti-intellectual views of knowledge. We argue that what underpins these patterns is a commodity-based conceptualization of knowledge, one that rests on an axiology of narrowly economic imperatives at odds with the very nature of knowledge. The commodity view of knowledge is therefore flawed in its ignorance of the social systemic properties of knowing.
Abstract:
The major purpose of Vehicular Ad Hoc Networks (VANETs) is to give motorists access to safety-related messages so that they can react to, and make life-critical decisions in, emergency situations, thereby enhancing road safety. Access to safety-related information over VANET communications must therefore be protected, as motorists may make critical decisions based on it. If introducing security services into VANETs causes considerable transmission latency or processing delays, this would defeat the purpose of using VANETs to improve road safety. Current research in secure messaging for VANETs appears to focus on employing a certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes an efficient public key management system for VANETs: the Public Key Registry (PKR) system. Not only does this paper demonstrate that the proposed PKR system can maintain security, but it also argues that it can improve overall performance and scalability at a lower cost, compared to the certificate-based PKC scheme. It is believed that the proposed PKR system will add a new dimension to key management and verification services for VANETs.
Designing for engagement towards healthier lifestyles through food image sharing: the case of I8DAT
Abstract:
This paper introduces the underlying design concepts of I8DAT, a food image sharing application that has been developed as part of a three-year research project – Eat, Cook, Grow: Ubiquitous Technology for Sustainable Food Culture in the City (http://www.urbaninformatics.net/projects/food) – exploring urban food practices to engage people in healthier, more environmentally and socially sustainable eating, cooking, and growing of food in their everyday lives. The key aim of the project is to produce actionable knowledge, which is then applied to create and test several accessible, user-centred interactive design solutions that motivate user engagement through playful and social means rather than authoritative information distribution. Through the design and implementation processes we envisage integrating these design interventions to create a sustainable food network that is both technical and socio-cultural in nature (technosocial). Our primary research locale is Brisbane, Australia, with additional work carried out in three reference cities with divergent geographic, socio-cultural, and technological backgrounds: Seoul, South Korea, for its global leadership in ubiquitous technology, broadband access, and high population density; Lincoln, UK, for the regional and peri-urban dimension it provides; and Portland, Oregon, US, for its international standing as a hub of the sustainable food movement.
Abstract:
In this chapter I look at some issues around the transfer of cultural industry policy between two very different national contexts, the UK and Russia. Specifically, I draw on a partnership project between Manchester and St. Petersburg financed by the European Union as part of a program to promote economic development through knowledge transfer between Europe and the countries of the former Soviet Union. This project attempted to place the cultural industries squarely within the dimension of economic development, and drew on the expertise of Manchester’s Creative Industries Development Service and other partners to effect this policy transfer.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
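For reference, the abstract's bound can be written out in display form (our transcription; the symbol \hat{\epsilon}_m for the training-set error estimate is our own, as the abstract leaves it unnamed):

    \Pr[\text{misclassification}] \;\le\; \hat{\epsilon}_m + A^{3}\sqrt{\frac{\log n}{m}}

up to log A and log m factors, where m is the number of training patterns, n the input dimension, and A the per-unit bound on the sum of weight magnitudes.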
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik–Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik–Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
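As a one-line refresher on the central quantity (a standard definition, in our wording): a class F of {0,1}-valued functions shatters a set S if it realizes every labelling of S, and the VC dimension is the size of the largest shattered set,

    \mathrm{VC}(F) \;=\; \max\bigl\{\, |S| \;:\; \{\, f|_S : f \in F \,\} = \{0,1\}^{S} \,\bigr\}.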
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
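To make the last sentence concrete, here is a minimal Python sketch of that reduction, assuming a hypothetical erm_fit(X, y) routine that performs (approximate) empirical risk minimization and returns a classifier with a predict method. For {0,1} labels, flipping the labels on the first half gives err1(f) − err2(f) = 1 − 2·err_flipped(f), so one ERM call on the flipped sample computes the maximal discrepancy:

import numpy as np

def maximal_discrepancy(erm_fit, X, y):
    # erm_fit and its .predict interface are assumed for illustration.
    # X: feature array of shape (2n, d); y: labels in {0, 1} of length 2n.
    n = len(y) // 2
    X1, y1 = X[:n], y[:n]            # first half of the training data
    X2, y2 = X[n:2 * n], y[n:2 * n]  # second half

    # Flip the first-half labels: maximizing err1(f) - err2(f) over f
    # becomes ordinary empirical risk minimization on the combined sample.
    X_flip = np.vstack([X1, X2])
    y_flip = np.concatenate([1 - y1, y2])

    f = erm_fit(X_flip, y_flip)
    err_flipped = np.mean(f.predict(X_flip) != y_flip)
    return 1.0 - 2.0 * err_flipped  # = max over f of [err1(f) - err2(f)]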
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·choose(n−1, ≤d−1)/choose(n, ≤d) < d, which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth—the second part to a conjectured proof of correctness for Peeling—that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
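Written out (our transcription, with \binom{n}{\le d} abbreviating the partial binomial sum \sum_{i=0}^{d} \binom{n}{i}), the density bound reads:

    \frac{n\,\binom{n-1}{\le d-1}}{\binom{n}{\le d}} \;<\; d.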
Abstract:
We present a modification of the algorithm of Dani et al. for the online linear optimization problem in the bandit setting, which allows us to achieve an O(√(T ln T)) regret bound in high probability against an adaptive adversary, as opposed to the in-expectation result of Dani et al. against an oblivious adversary. We obtain the same dependence on the dimension (n^{3/2}) as that exhibited by Dani et al. The results of this paper rest firmly on those of Dani et al. and the remarkable technique of Auer et al. for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information and bandit settings.
Abstract:
After state-wide flooding and a category-5 tropical cyclone, three-quarters of the state of Queensland was declared a disaster zone in early 2011. This deluge of adversity had a significant impact on university students a few weeks prior to the start of the academic semester. The purpose of this paper is to examine the role that design plays in helping students to understand, and respond to, adversity. The participants of this study were second- and fourth-year architectural design students at a large Australian university in Queensland. As part of their core architectural design studies, students were required to provide architectural responses to the recent catastrophic events in Queensland. Qualitative data were obtained through student surveys, design work submitted by students, and a survey of guests who attended an exhibition of the student work. The results of this research showed that the students produced not only the required set of architectural drawings, process journals and models, but also a recognition of the important role that the affective dimension of the flooding event and the design process played in helping them to both understand and respond to adversity. They held the ‘real world’ experience and practical aspect of the assessment in higher regard than their typical focus on aesthetics and the making of iconic designs. Perhaps most importantly, the students recognised that this process allowed them to have a voice, and a means to respond to adversity through the powerful language of design.
Abstract:
It was reported that the manuscript of Crash was returned to the publisher with a note reading ‘The author is beyond psychiatric help’. Ballard took the lay diagnosis as proof of complete artistic success. Crash conflates the Freudian tropes of libido and thanatos, overlaying these onto the twentieth-century erotic icon, the car. Beyond mere incompetent adolescent copulatory fumblings in the back seat of the parental sedan or the clichéd phallic locomotor of the mid-life Ferrari, Ballard engages the full potentialities of the automobile as the locus and sine qua non of a perverse, though functional, erotic. ‘Autoeroticism’ is transformed into automotive, traumatic or surgical paraphilia, driving Helmut Newton’s insipid photo-essays of BDSM and orthopædics into an entirely new dimension, dancing precisely where (but more crucially, because) the ‘body is bruised to pleasure soul’. The serendipity of quotidian accidental collisions is supplanted, in pursuit of the fetishised object, by contrived (though not simulated) recreations of iconographic celebrity deaths. Penetration remains a guiding trope of sexuality, but it is confounded by a perversity of focus. Such an obsessive pursuit of this autoerotic-as-reality necessitates the rejection of the law of human sexual regulation, requiring the re-interpretation of what constitutes sex itself by looking beyond or through conventional sexuality into Ballard’s paraphiliac and nightmarish consensual Other. This Other allows for (if not demands) the tangled wreckage of a sportscar to function as a transformative sexual agent, creating, of woman, a being of ‘free and perverse sexuality, releasing within its dying chromium and leaking engine-parts, all the deviant possibilities of her sex’.
Abstract:
This paper reports a longitudinal analysis of 20 necessity-driven micro-entrepreneurs operating in Beira, Central Mozambique, who received funding and training from the same NGO to establish or grow their business activities, and traces the development of these entrepreneurs in terms of their acquired entrepreneurial potential for long-term success. The results indicate that there is a process of entrepreneurial becoming that is not just about access to finance but especially about learning, and that, when successful, this process supports the transformation of survival micro-enterprises into entrepreneurial micro-businesses. The concept of ‘becoming’ contains an implicit temporal dimension. Becoming suggests a transformation over time: a change from what one is already. In this study, we witness a significant change in understanding how a business needs to operate, in recognizing opportunities, thinking more creatively, and building self-confidence.
Abstract:
The primary goal of the Vehicular Ad Hoc Network (VANET) is to provide real-time safety-related messages to motorists to enhance road safety. Accessing and disseminating safety-related information through the use of wireless communications technology in VANETs must be secured, as motorists may make critical decisions in dealing with an emergency situation based on the received information. If security concerns are not addressed in developing VANET systems, an adversary can tamper with, or suppress, an unprotected message to mislead motorists and cause traffic accidents and hazards. Current research on secure messaging in VANETs focuses on employing the certificate-based Public Key Infrastructure (PKI) scheme to support message encryption and digital signing. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This thesis proposes a novel public key verification and management approach for VANETs: the Public Key Registry (PKR) regime. Compared to the VANET PKI scheme, this new approach can satisfy the necessary security requirements with improved performance and scalability, and at a lower cost, by reducing the security overheads of message transmission and eliminating digital certificate deployment and maintenance issues. The proposed PKR regime consists of the required infrastructure components, rules for public key management and verification, and a set of interactions and associated behaviours to meet these rule requirements. This is achieved through a system design expressed as a logic process model with functional specifications, which can serve as development guidelines for conforming implementations. The analysis and evaluation of the proposed PKR regime covers its security features, the security overhead of message transmission, transmission latency, processing latency, and scalability. Compared to certificate-based PKI approaches, the proposed PKR regime maintains the necessary security requirements while reducing the security overhead by approximately 70% and improving performance by 98%. Meanwhile, the scalability evaluation shows that the latency of the proposed PKR regime remains low, at approximately 15 milliseconds, whether it operates in a large or small environment. It is therefore believed that this research will add a new dimension to the provision of secure messaging services in VANETs.
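To illustrate the core idea only — this is a minimal sketch under our own assumptions, not the regime specified in the thesis; every name and interface below (PublicKeyRegistry, verify_fn, the vehicle_id keying) is hypothetical — a registry lookup replacing per-message certificate validation might look like this in Python:

# Minimal, illustrative sketch of a Public Key Registry (PKR) lookup.
# All names and interfaces are assumptions for illustration; the thesis
# defines the actual infrastructure, rules, and interactions.

class PublicKeyRegistry:
    """A trusted registry mapping a vehicle identifier to its public key."""

    def __init__(self):
        self._keys = {}  # vehicle_id -> public key object/bytes

    def register(self, vehicle_id, public_key):
        # In a deployed regime, registration would be performed by an
        # authority over an authenticated channel, not by arbitrary callers.
        self._keys[vehicle_id] = public_key

    def lookup(self, vehicle_id):
        # A single lookup replaces certificate-chain validation, which is
        # where a scheme like this would hope to save verification time.
        return self._keys.get(vehicle_id)

def verify_safety_message(registry, vehicle_id, message, signature, verify_fn):
    # verify_fn(public_key, message, signature) -> bool is a stand-in for
    # whatever signature scheme a deployment chooses.
    public_key = registry.lookup(vehicle_id)
    if public_key is None:
        return False  # unknown sender: reject the safety message
    return verify_fn(public_key, message, signature)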