493 results for Smale’s Conjecture
Abstract:
In Australia, the extent of a mortgagee’s duty when exercising a power of sale has long been the subject of conjecture. With the advent of the global financial crisis in the latter part of 2008, there has been some concern to ensure that the interests of mortgagors are adequately protected. In Queensland, concern of this type resulted in the enactment of the Property Law (Mortgagor Protection) Amendment Act 2008 (Qld). This amending legislation operates to both extend and strengthen the operation of s 85 of the Property Law Act 1974 (Qld), which regulates the mortgagee’s power of sale in Queensland. This article examines the impact of this amending legislation, which was hastily introduced and passed by the Queensland Parliament without consultation, and which introduces a level of prescription, in relation to a sale under a prescribed mortgage, that is without precedent elsewhere in Australia.
Abstract:
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of the concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of n·C(n−1, ≤d−1)/C(n, ≤d) < d, where C(m, ≤k) denotes the sum of binomial coefficients C(m, 0) + … + C(m, k), which positively resolves a conjecture of Kuzmin and Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved one-inclusion mistake bound. The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is an algebraic topological property of maximum classes of VC-dimension d as being d-contractible simplicial complexes, extending the well-known characterization that d = 1 maximum classes are trees. We negatively resolve a minimum degree conjecture of Kuzmin and Warmuth (the second part of a conjectured proof of correctness for Peeling): that every class has one-inclusion minimum degree at most its VC-dimension. Our final main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This result improves on known PAC-based expected risk bounds by a factor of O(log n) and is shown to be optimal up to an O(log k) factor. The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout.
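The density bound n·C(n−1, ≤d−1)/C(n, ≤d) < d can be spot-checked numerically. The sketch below is a minimal check, assuming the reading C(m, ≤k) = C(m, 0) + … + C(m, k); the helper names `binom_leq` and `density_bound` are illustrative and not from the paper.

```python
from math import comb

def binom_leq(m, k):
    """C(m, <=k): the sum C(m, 0) + C(m, 1) + ... + C(m, k)."""
    return sum(comb(m, i) for i in range(k + 1))

def density_bound(n, d):
    """The quantity n * C(n-1, <=d-1) / C(n, <=d) from the density bound."""
    return n * binom_leq(n - 1, d - 1) / binom_leq(n, d)

# Spot-check that the bound stays strictly below d for small n and d.
for n in range(1, 40):
    for d in range(1, n + 1):
        assert density_bound(n, d) < d

# Example: n = 5, d = 2 gives 5 * C(4, <=1) / C(5, <=2) = 25/16 = 1.5625 < 2.
print(density_bound(5, 2))
```

For d = 1 the expression reduces to n/(n + 1) < 1, which illustrates why the strict inequality holds at the boundary.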
Abstract:
The Early Years Generalising Project involves Australian students in Years 1-4 (ages 5-9) and explores how the students grasp and express generalisations. This paper focuses on the data collected from clinical interviews with the Year 3 and 4 cohorts in an investigative study centred on the identification, prediction and justification of function rules. It reports on students' attempts to generalise from function machine contexts, describing the various ways students express generalisation and highlighting the different levels of justification given by students. Finally, we conjecture that there is a set of stages in the expression and justification of generalisations that assists students to reach generality within tasks.
Abstract:
The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments. In this study we propose that synaptic learning mechanisms, in conjunction with appropriate movement strategies based on warm-up movements, can calibrate the HD system so that it functions effectively even in darkness. To examine the link between physical embodiment and neural control, and to determine that the system is robust to real-world phenomena, we implemented the synaptic mechanisms in a spiking neural network and tested it on a mobile robot platform. Results show that the combination of the synaptic learning mechanisms and warm-up movements is able to reliably calibrate the HD system so that it accurately tracks real-world head direction, and that calibration breaks down in systematic ways if certain movements are omitted. This work confirms that targeted, embodied behaviour can be used to calibrate neural systems, demonstrates that ‘grounding’ of modeled biological processes in the real world can reveal underlying functional principles (supporting the importance of robotics to biology), and proposes a functional role for stereotypical behaviours seen in infant mammals and in animals with certain motor deficits. We conjecture that these calibration principles may extend to the calibration of other neural systems involved in motion tracking and the representation of space, such as grid cells in entorhinal cortex.
Abstract:
Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour. Here we demonstrate that key neural correlates of spatial navigation in darkness cannot be explained if the path integration system acts independently of boundary (landmark) information. In vivo recordings demonstrate that the rodent head direction (HD) system becomes unstable within three minutes without vision. In contrast, rodents maintain stable place fields and grid fields for over half an hour without vision. Using a simple HD error model, we show analytically that idiothetic path integration (iPI) alone cannot be used to maintain any stable place representation beyond two to three minutes. We then use a measure of place stability based on information-theoretic principles to prove that featureless boundaries alone cannot be used to improve localization above chance level. Having shown that neither iPI nor boundaries alone are sufficient, we then address the question of whether their combination is sufficient and, we conjecture, necessary to maintain place stability for prolonged periods without vision. We addressed this question in simulations and robot experiments using a navigation model comprising a particle filter and a boundary map. The model replicates published experimental results on place field and grid field stability without vision, and makes testable predictions including place field splitting and grid field rescaling if the true arena geometry differs from the acquired boundary map.
We discuss our findings in light of current theories of animal navigation and neuronal computation, and elaborate on their implications and significance for the design, analysis and interpretation of experiments.
Abstract:
In this paper we present substantial evidence for the existence of a bias in the distribution of births of leading US politicians in favor of those who were the oldest in their cohort at school. This “relative age effect” has been proven to influence performance at school and in sports, but evidence on its impact on people’s vocational success has been rare. We find a marked break in the density of birth dates of politicians using a maximum likelihood test and McCrary’s (2008) nonparametric test. We conjecture that being relatively old in a peer group may create long-term advantages which can play a significant role in the ability to succeed in a highly competitive environment like the race for top political offices in the USA. The magnitude of the effect we estimate is larger than what most other studies on the relative age effect for a broader (adult) population find, but is broadly in line with studies that look at populations in high-competition environments.
Abstract:
Communication processes are vital in the lifecycle of BPM projects. With this in mind, much research has been performed into facilitating this key component between stakeholders. Amongst the methods used to support this process are personalised process visualisations. In this paper, we review the development of this visualisation trend and then propose a theoretical analysis framework based upon communication theory. We use this framework to provide theoretical support to the conjecture that 3D virtual worlds are powerful tools for communicating personalised visualisations of processes within a workplace. Meta-requirements are then derived and applied, via 3D virtual world functionalities, to generate example visualisations containing personalised aspects, which we believe enhance the process of communication between analysts and stakeholders in BPM process (re)design activities.
Abstract:
While social enterprises have gained increasing policy attention as vehicles for generating innovative responses to complex social and environmental problems, surprisingly little is known about them. In particular, the social innovation produced by social enterprises (Mulgan, Tucker, Ali, & Sander, 2007) has been presumed rather than demonstrated, and remains under-investigated in the literature. While social enterprises are held to be inherently innovative as they seek to respond to social needs (Nicholls, 2010), there has been conjecture that the collaborative governance arrangements typical in social enterprises may be conducive to innovation (Lumpkin, Moss, Gras, Kato, & Amezcua, in press), as members and volunteers provide a source of creative ideas and are unfettered in such thinking by responsibility to deliver organisational outcomes (Hendry, 2004). However, this is complicated by the sheer array of governance arrangements which exist in social enterprises, which range from flat participatory democratic structures through to hierarchical arrangements. In continental Europe, there has been a stronger focus on democratic participation as a characteristic of social enterprises than, for example, in the USA. In response to this gap in knowledge, a research project was undertaken to identify the population of social enterprises in Australia. The size, composition and social innovations initiated by these enterprises have been reported elsewhere (see Barraket, 2010). The purpose of this paper is to undertake a closer examination of innovation in social enterprises, particularly how the collaborative governance of social enterprises might influence innovation.
Given the pre-paradigmatic state of social entrepreneurship research (Nicholls, 2010), and the importance of drawing on established theories in order to advance theory (Short, Moss, & Lumpkin, 2009), a number of conceptual steps are needed in order to examine how collaborative governance might influence innovation in social enterprises. In this paper, we commence by advancing a definition of what a social enterprise is. In light of our focus on the potential role of collaborative governance in social innovation amongst social enterprises, we go on to consider the collaborative forms of governance prevalent in the Third Sector. Then, collaborative innovation is explored. Drawing on this information and our research data, we finally consider how collaborative governance might affect innovation amongst social enterprises.
Abstract:
Recent advances in the area of ‘Transformational Government’ position the citizen at the centre of focus. This paradigm shift from a department-centric to a citizen-centric focus requires governments to re-think their approach to service delivery, thereby decreasing costs and increasing citizen satisfaction. The introduction of franchises as a virtual business layer between the departments and their citizens is intended to provide a solution. Franchises are structured to address the needs of citizens independent of internal departmental structures. For delivering services online, governments pursue the development of a One-Stop Portal, which structures information and services through those franchises. Thus, each franchise can be mapped to a specific service bundle, which groups together services that are deemed to be of relevance to a specific citizen need. This study focuses on the development and evaluation of these service bundles. In particular, two research questions guide the line of investigation of this study: Research Question 1): What methods can be used by governments to identify service bundles as part of governmental One-Stop Portals? Research Question 2): How can the quality of service bundles in governmental One-Stop Portals be evaluated? The first research question concerns the identification of suitable service bundle identification methods. A literature review was first conducted to conceptualise the service bundling task in general. As a consequence, a 4-layer model of service bundling and a morphological box were created, detailing characteristics that are of relevance when identifying service bundles. Furthermore, a literature review of Decision-Support Systems was conducted to identify approaches of relevance in different bundling scenarios. These initial findings were complemented by targeted studies of multiple leading governments in the e-government domain, as well as with a local expert in the field.
Here, the aim was to identify the current status of online service delivery and service bundling in practice. These findings led to the conceptualisation of two service bundle identification methods applicable in the context of the Queensland Government: on the one hand, a provider-driven approach based on service description languages, attributes, and relationships between services; on the other hand, a citizen-driven approach based on analysing the outcomes of content identification and grouping workshops with citizens. Both methods were then applied and evaluated in practice. The conceptualisation of the provider-driven method for service bundling required the initial specification of relevant attributes that could be used to identify similarities between services, termed relationships; these relationships then formed the basis for the identification of service bundles. This study conceptualised and defined seven relationships, namely ‘Co-location’, ‘Resource’, ‘Co-occurrence’, ‘Event’, ‘Consumer’, ‘Provider’, and ‘Type’. The relationships, and the bundling method itself, were applied and refined as part of six Action Research cycles in collaboration with the Queensland Government. The findings show that attributes and relationships can be used effectively as a means for bundle identification, if distinct decision rules are in place to prescribe how services are to be identified. For the conceptualisation of the citizen-driven method, insights from the case studies led to the decision to involve citizens through card sorting activities. Based on an initial list of services relevant for a certain franchise, participating citizens grouped services according to their liking. The card sorting activity, as well as the required analysis and aggregation of the individual card sorting results, was analysed in depth as part of this study.
A framework was developed that can be used as a decision-support tool to assist with the decision of which card sorting analysis method should be utilised in a given scenario. The characteristic features associated with card sorting in a government context led to the decision to utilise statistical analysis approaches, such as cluster analysis and factor analysis, to aggregate card sorting results. The second research question asks how the quality of service bundles can be assessed. An extensive literature review was conducted, focussing on bundle, portal, and e-service quality. It was found that different studies use different constructs, terminology, and units of analysis, which makes comparing these models a difficult task. As a direct result, a framework was conceptualised that can be used to position past and future studies in this research domain. Complementing the literature review, interviews conducted as part of the case studies with leaders in e-government indicated that, typically, satisfaction is evaluated for the overall portal once the portal is online, but quality tests are not conducted during the development phase. Consequently, a research model which appropriately defines perceived service bundle quality needed to be developed from scratch. Based on existing theory, such as the Theory of Reasoned Action, Expectation Confirmation Theory, and the Theory of Affordances, perceived service bundle quality was defined as an inferential belief and positioned within the nomological net of services. Based on the literature analysis on quality, and on the subsequent work of a focus group, the hypothesised antecedents (descriptive beliefs) of the construct and the associated question items were defined and the research model conceptualised. The model was then tested, refined, and finally validated during six Action Research cycles.
Results show no significant difference in quality or satisfaction among users between the provider-driven and the citizen-driven methods. The decision on which method to choose, it was found, should be based on contextual factors, such as objectives, resources, and the need for visibility. The constructs of the bundle quality model were examined. While the quality of bundles identified through the citizen-centric approach could be explained through the constructs ‘Navigation’, ‘Ease of Understanding’, and ‘Organisation’, bundles identified through the provider-driven approach could be explained solely through the constructs ‘Navigation’ and ‘Ease of Understanding’. An active labelling style for bundles, as part of the provider-driven Information Architecture, had a larger impact on ‘Quality’ than the topical labelling style used in the citizen-centric Information Architecture. However, ‘Organisation’, reflecting the internal, logical structure of the Information Architecture, was a significant factor impacting on ‘Quality’ only in the citizen-driven Information Architecture. Hence, it was concluded that active labelling can compensate for a lack of logical structure. Further studies are needed to test this conjecture; such studies may involve building alternative models and conducting additional empirical research (e.g. use of an active labelling style for the citizen-driven Information Architecture). This thesis contributes to the body of knowledge in several ways. Firstly, it presents an empirically validated model of the factors explaining and predicting a citizen’s perception of service bundle quality. Secondly, it provides two alternative methods that can be used by governments to identify service bundles in structuring the content of a One-Stop Portal.
Thirdly, this thesis provides a detailed narrative to suggest how the recent paradigm shift in the public domain, towards a citizen-centric focus, can be pursued by governments; the research methodology followed by this study can serve as an exemplar for governments seeking to achieve a citizen-centric approach to service delivery.
Abstract:
NaAlH4 and LiBH4 are potential candidate materials for mobile hydrogen storage applications, yet they have the drawback of being highly stable and desorbing hydrogen only at elevated temperatures. In this letter, ab initio density functional theory calculations reveal how the stabilities of the AlH4 and BH4 complex anions are affected by reducing the net anionic charge. Tetrahedral AlH4 and BH4 complexes are found to be distorted as the negative charge decreases. One H-H distance becomes smaller, and at small anionic charge the charge density overlaps between the two hydrogen atoms. The activation energies for the release of H2 from the AlH4 and BH4 complexes are thus greatly decreased. We demonstrate that point defects such as neutral Na vacancies or substitution of a Na atom with Ti on the NaAlH4(001) surface can potentially cause strong distortion of neighboring AlH4 complexes and even induce spontaneous dehydrogenation. Our results help to rationalize the conjecture that the suppression of charge transfer to the AlH4 and BH4 anions as a consequence of surface defects should be very effective for improving the recycling performance of H2 in NaAlH4 and LiBH4. The understanding gained here will aid in the rational design and development of hydrogen storage materials based on these two systems.
Abstract:
Carbonatites are known to contain the highest concentrations of rare-earth elements (REE) among all igneous rocks. The REE distribution of carbonatites is commonly believed to be controlled by that of the rock-forming Ca minerals (i.e., calcite, dolomite, and ankerite) and apatite because of their high modal content and tolerance for the substitution of Ca by light REE (LREE). Contrary to this conjecture, calcite from the Miaoya carbonatite (China), analyzed in situ by laser-ablation inductively-coupled-plasma mass-spectrometry, is characterized by low REE contents (100–260 ppm) and relatively flat chondrite-normalized REE distribution patterns [average (La/Yb)CN = 1.6]. The carbonatite contains abundant REE-rich minerals, including monazite and fluorapatite, both precipitated earlier than the REE-poor calcite, and REE-fluorocarbonates that postdated the calcite. Hydrothermal REE-bearing fluorite and barite veins are not observed at Miaoya. The textural and analytical evidence indicates that the initially high concentrations of REE and P in the carbonatitic magma facilitated early precipitation of REE-rich phosphates. Subsequent crystallization of REE-poor calcite led to enrichment of the residual liquid in REE, particularly LREE. This implies that REE are generally incompatible with respect to calcite and the calcite/melt partition coefficients for heavy REE (HREE) are significantly greater than those for LREE. Precipitation of REE-fluorocarbonates late in the evolutionary history resulted in depletion of the residual liquid in LREE, as manifested by the development of HREE-enriched late-stage calcite [(La/Yb)CN = 0.7] in syenites associated with the carbonatite. The observed variations of REE distribution between calcite and whole rocks are interpreted to arise from multistage fractional crystallization (phosphates → calcite → REE-fluorocarbonates) from an initially REE-rich carbonatitic liquid.
Abstract:
This chapter presents an historical narrative on the recent evolution of information and communications technology (ICT) that has been, and is, utilized for purposes of learning. In other words, it presents an account of the development of e-learning supported through the Web and other similar virtual environments. It does not attempt to present a definitive account, as such an exercise is fraught with assumptions, contextual bias, and probable conjecture. The concern here is more with contextualizing the role of inquiry in learning and the evolving digital tools that enable interfaces that promote and support it. In tracking this evolution, both multi-disciplinary and trans-disciplinary research has been pursued. Key historical developments are identified, as well as interpretations of the key drivers of e-learning over time and into what might be better described as digital learning. Innovations in the development of digital tools are described as dynamic and emergent, evolving as a consequence of multiple, sometimes hidden drivers of change. Yet conflating advancements in learning technologies with e-learning seems to be pervasive, as does the push for the “open” agenda: a growing number of initiatives and movements dominated by themes associated with access, intellectual property, public benefit, sharing and technical interoperability. Openness is also explored in this chapter, though more in terms of what it means when associated with inquiry. By investigating opportunities for the stimulation and support of questioning online, in particular why-questioning, this chapter focuses on “opening” content, not just for access but for inquiry and deeper learning.
Abstract:
There is little conjecture that quality teaching is essential to student achievement and well-being. Whilst much has been written about the importance of quality teaching, including the link to pre-service teacher education, to date there has been little investigation into specific pedagogical practices that can enhance quality teaching dimensions within a pre-service teacher education programme. This paper reports on a small-scale qualitative research study, undertaken in an Australian university, which linked the fields of quality teaching, pre-service teacher education and values education. The study followed the journey of five pre-service teacher education students as they undertook their second field experience unit, where the focus was centred on the values-based pedagogy of Philosophy in the Classroom. The research findings, collected via interviews, demonstrated that an explicit values-based pedagogy can have a positive impact on the development of quality teaching dimensions. This new knowledge has potential for further research into the ways quality teaching dimensions are gained and practised by pre-service teacher education students; these findings and recommendations are discussed in this paper.
Abstract:
In a standard overlapping generations growth model with a fixed amount of land and endogenous fertility, the competitive economy converges to a steady state with a zero population growth rate and positive consumption per capita. The Malthusian hypothesis is interpreted as a positive statement about the relationship between population growth and consumption per capita, when production exhibits diminishing returns to labor and there is a fixed amount of land essential for production. Even when individuals care only about the number of their children and not about their children's welfare, the equilibrium is such that they eventually would choose to have only one child for each adult. Hence, if Malthus's "positive check" on population is the result of the response of optimizing agents to competitively determined prices, Malthus's pessimistic conjecture is not necessarily true, even though his other assumptions hold. [from Authors]