362 results for problem complexity
in Queensland University of Technology - ePrints Archive
Abstract:
This thesis aims at developing a better understanding of unstructured strategic decision-making processes and the conditions for achieving successful decision outcomes. Specifically, it focuses on the processes used to make CRE (Corporate Real Estate) decisions. The starting point for this thesis is that our knowledge of such processes is incomplete. A comprehensive study of the most recent CRE literature, together with Behavioural Organization Theory, has provided a research framework for the exploration of CRE recommended ‘best practice’, and of how organizational variables impact on and shape these practices. To reveal the fundamental differences between CRE decision-making in practice and the prescriptive ‘best practice’ advocated in the CRE literature, a study of seven Italian management consulting firms was undertaken, addressing the content and process of decisions. This thesis makes its primary contribution by identifying the importance and difficulty of finding the right balance between problem complexity, process richness and cohesion to ensure a decision-making process that is sufficiently rich and yet quick enough to deliver a prompt outcome. In doing so, this research also provides further empirical evidence for some of the most established theories of decision-making, while reinterpreting their mono-dimensional arguments into a multi-dimensional model of successful decision-making.
Abstract:
The author aims at developing a better understanding of unstructured strategic decision-making processes and the conditions for achieving successful decision outcomes. Specifically, he investigates the processes used to make CRE (Corporate Real Estate) decisions. To reveal the fundamental differences between CRE decision-making in practice and the prescriptive ‘best practice’ advocated in the CRE literature, a study of seven leading Italian management consulting firms is undertaken, addressing the content and process of decisions. This research makes its primary contribution by identifying the importance and difficulty of finding the right balance between problem complexity, process richness and cohesion to ensure a decision-making process that is sufficiently rich and yet quick enough to deliver a prompt outcome. In doing so, the study also provides further empirical evidence for some of the most established theories of decision-making, while reinterpreting their mono-dimensional arguments into a multi-dimensional model of successful decision-making.
Abstract:
Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles to realizing lattice-based attribute-based encryption (ABE).
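The Learning With Errors problem underlying this construction can be illustrated with a toy sketch. All parameters here are chosen for readability and are nowhere near secure; this is not the paper's construction, only the hardness assumption it rests on.

```python
import random

# Toy LWE instance: samples (a, b = <a, s> + e mod q) should hide the secret s.
n, q = 8, 97
random.seed(1)
s = [random.randrange(q) for _ in range(n)]   # the secret vector

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])             # small "error" term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(4)]
# Without the errors e, s is recoverable from n samples by Gaussian elimination;
# with them, distinguishing (a, b) pairs from uniform is conjectured hard.
```

The small error term is the entire source of hardness: the search problem degenerates to linear algebra the moment e is removed.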
Abstract:
We introduce Kamouflage: a new architecture for building theft-resistant password managers. An attacker who steals a laptop or cell phone with a Kamouflage-based password manager is forced to carry out a considerable amount of online work before obtaining any user credentials. We implemented our proposal as a replacement for the built-in Firefox password manager, and provide performance measurements and the results from experiments with large real-world password sets to evaluate the feasibility and effectiveness of our approach. Kamouflage is well suited to become a standard architecture for password managers on mobile devices.
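The decoy-set idea behind this architecture can be sketched as follows. The function names and parameters are illustrative, not taken from the Kamouflage implementation: the real credential list is hidden among many plausible decoy lists, so a thief must attempt online logins to tell them apart.

```python
import secrets

def build_vault(real_passwords, make_decoy_list, n_sets=1000):
    """Hide the real password list among n_sets - 1 decoy lists.

    make_decoy_list() must return plausible-looking password lists;
    otherwise the real set is trivially distinguishable offline.
    """
    vault = [make_decoy_list() for _ in range(n_sets - 1)]
    index = secrets.randbelow(n_sets)       # position of the real set
    vault.insert(index, list(real_passwords))
    return vault, index                     # index known only to the user's device

# Toy decoy generator (a real system would mimic human password patterns).
decoys = lambda: [secrets.token_hex(4) for _ in range(3)]
vault, idx = build_vault(["hunter2", "s3cret", "letmein"], decoys, n_sets=10)
```

An attacker who steals the vault sees n_sets equally plausible candidates and must spend roughly n_sets/2 online login attempts on average, which rate limiting makes expensive.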
Abstract:
We introduce the notion of distributed password-based public-key cryptography, where a virtual high-entropy private key is implicitly defined as a concatenation of low-entropy passwords held in separate locations. The users can jointly perform private-key operations by exchanging messages over an arbitrary channel, based on their respective passwords, without ever sharing their passwords or reconstituting the key. Focusing on the case of ElGamal encryption as an example, we start by formally defining ideal functionalities for distributed public-key generation and virtual private-key computation in the UC model. We then construct efficient protocols that securely realize them in either the RO model (for efficiency) or the CRS model (for elegance). We conclude by showing that our distributed protocols generalize to a broad class of “discrete-log”-based public-key cryptosystems, which notably includes identity-based encryption. This opens the door to a powerful extension of IBE with a virtual PKG made of a group of people, each one memorizing a small portion of the master key.
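A minimal sketch of the underlying idea, assuming additive sharing of the secret exponent over a toy group: each party derives a share from its password locally, and private-key operations combine per-party contributions without ever reconstituting the key. The paper's actual UC-secure protocols are far more involved; this only shows the algebra that makes distributed ElGamal decryption possible.

```python
import hashlib

# Toy multiplicative group: safe prime p = 2q + 1, g generates the order-q subgroup.
p = 2039
q, g = 1019, 4

def share_from_password(pw, salt):
    # Each party derives its additive key share locally; passwords never leave.
    return int.from_bytes(hashlib.sha256((salt + pw).encode()).digest(), "big") % q

passwords = ["correct horse", "battery", "staple"]
shares = [share_from_password(pw, f"user{i}:") for i, pw in enumerate(passwords)]

# Joint public key: product of partial keys g^{x_i} = g^{sum x_i}.
y = 1
for x_i in shares:
    y = (y * pow(g, x_i, p)) % p

# ElGamal encryption under the virtual key x = sum x_i (mod q).
k, m = 77, 42
c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p

# Distributed decryption: each party contributes c1^{x_i}; no party learns x.
s = 1
for x_i in shares:
    s = (s * pow(c1, x_i, p)) % p
recovered = (c2 * pow(s, -1, p)) % p        # modular inverse needs Python 3.8+
```

Because exponents add in the group, partial results multiply into the same value the full key would have produced, which is what lets "discrete-log"-based schemes generalize this way.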
Abstract:
We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attack uses differences in the chaining values and the block counter and finds collisions with complexity 2^33. The second attack utilizes differences in the chaining values and salt and yields collisions with complexity 2^42. The final attack uses differences only in the chaining values to yield near-collisions with complexity 2^99. All our attacks are independent of the number of rounds in the compression function. We illustrate the first two attacks by showing examples of collisions and near-collisions.
Abstract:
Design Science is the process of solving ‘wicked problems’ through designing, developing, instantiating, and evaluating novel solutions (Hevner, March, Park and Ram, 2004). Wicked problems are described as agent finitude in combination with problem complexity and normative constraint (Farrell and Hooker, 2013). In Information Systems Design Science, determining that problems are ‘wicked’ differentiates Design Science research from Solutions Engineering (Winter, 2008) and is a necessary part of establishing relevance in Information Systems Design Science research (Hevner, 2007; Iivari, 2007). Problem complexity is characterised as many problem components with nested, dependent and co-dependent relationships interacting through multiple feedback and feed-forward loops. Farrell and Hooker (2013) state specifically that for wicked problems “it will often be impossible to disentangle the consequences of specific actions from those of other co-occurring interactions”. This paper discusses the application of an Enterprise Information Architecture modelling technique to disentangle wicked problem complexity for one case. It proposes that such a modelling technique can be applied to other wicked problems and can lay the foundations for establishing relevance to DSR, provide solution pathways for artefact development, and help substantiate those elements required to produce Design Theory.
Abstract:
Preface The 9th Australasian Conference on Information Security and Privacy (ACISP 2004) was held in Sydney, 13–15 July, 2004. The conference was sponsored by the Centre for Advanced Computing – Algorithms and Cryptography (ACAC), Information and Networked Security Systems Research (INSS), Macquarie University and the Australian Computer Society. The aims of the conference are to bring together researchers and practitioners working in areas of information security and privacy from universities, industry and government sectors. The conference program covered a range of aspects including cryptography, cryptanalysis, systems and network security. The program committee accepted 41 papers from 195 submissions. The reviewing process took six weeks and each paper was carefully evaluated by at least three members of the program committee. We appreciate the hard work of the members of the program committee and external referees who gave many hours of their valuable time. Of the accepted papers, there were nine from Korea, six from Australia, five each from Japan and the USA, three each from China and Singapore, two each from Canada and Switzerland, and one each from Belgium, France, Germany, Taiwan, The Netherlands and the UK. All the authors, whether or not their papers were accepted, made valued contributions to the conference. In addition to the contributed papers, Dr Arjen Lenstra gave an invited talk, entitled Likely and Unlikely Progress in Factoring. This year the program committee introduced the Best Student Paper Award. The winner of the prize for the Best Student Paper was Yan-Cheng Chang from Harvard University for his paper Single Database Private Information Retrieval with Logarithmic Communication. We would like to thank all the people involved in organizing this conference. 
In particular we would like to thank members of the organizing committee for their time and efforts, Andrina Brennan, Vijayakrishnan Pasupathinathan, Hartono Kurnio, Cecily Lenton, and members from ACAC and INSS.
Abstract:
We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, and a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ varies over D, generate a natural topology on W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers a perfect characterization for the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference, when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity—a variation on predictive complexity—and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role to establish connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
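As a finite, non-ordinal analogue of the prediction-with-mistake-bounds setting studied here, the classical halving algorithm shows how the structure of the hypothesis class controls the number of mistakes. This example is illustrative and is not taken from the paper.

```python
def halving_predict(hypotheses, xs, target):
    """Predict each label by majority vote over the still-consistent hypotheses.

    Every mistake eliminates at least half of the version space, so the
    number of mistakes is at most log2(len(hypotheses)) when the target
    is a member of the class.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x in xs:
        votes = sum(h(x) for h in version_space)
        guess = votes * 2 >= len(version_space)   # majority vote (ties -> True)
        truth = target(x)
        if guess != truth:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == truth]
    return mistakes

# Threshold classifiers on {0..7}: at most log2(9) < 4 mistakes, in any order.
H = [lambda x, t=t: x >= t for t in range(9)]
errs = halving_predict(H, [3, 6, 1, 4, 0, 7, 2, 5], target=lambda x: x >= 5)
```

The ordinal VC-dimension of the abstract generalizes exactly this kind of mistake-bound guarantee from finite counts to ordinal bounds.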
Abstract:
Since the 1960s, numerous studies on problem solving have revealed the complexity of the domain and the difficulty in translating research findings into practice. The literature suggests that the impact of problem solving research on the mathematics curriculum has been limited. Furthermore, our accumulation of knowledge on the teaching of problem solving is lagging. In this first discussion paper we initially present a sketch of 50 years of research on mathematical problem solving. We then consider some factors that have held back problem solving research over the past decades and offer some directions for how we might advance the field. We stress the urgent need to take into account the nature of problem solving in various arenas of today’s world and to accordingly modernize our perspectives on the teaching and learning of problem solving and of mathematical content through problem solving. Substantive theory development is also long overdue—we show how new perspectives on the development of problem solving expertise can contribute to theory development in guiding the design of worthwhile learning activities. In particular, we explore a models and modeling perspective as an alternative to existing views on problem solving.
Abstract:
Objective Research is beginning to provide an indication of the co-occurring substance abuse and mental health needs for the driving under the influence (DUI) population. This study aimed to examine the extent of such psychiatric problems among a large sample of DUI offenders entering treatment in Texas. Methods This is a study of 36,373 past-year DUI clients and 308,714 non-past-year DUI clients admitted to Texas treatment programs between 2005 and 2008. Data were obtained from the State's administrative dataset. Results Analysis indicated that non-past-year DUI clients were more likely to present with more severe illicit substance use problems, while past-year DUI clients were more likely to have a primary problem with alcohol. Nevertheless, a cannabis use problem was also found to be significantly associated with DUI recidivism in the last year. With regard to mental health status, a major finding was that depression was the most common psychiatric condition reported by DUI clients, including those with more than one DUI offence in the past year. This cohort also reported elevated levels of Bipolar Disorder compared to the general population, and such a diagnosis was also associated with an increased likelihood of not completing treatment. Additionally, female clients were more likely than males to be diagnosed with mental health problems, to be placed on medications at admission, and to have problems with methamphetamine, cocaine, and opiates. Conclusions DUI offenders are at an increased risk of experiencing comorbid psychiatric disorders, and thus corresponding treatment programs need to cater for a range of mental health concerns that are likely to affect recidivism rates.
Abstract:
Early childhood teacher education programs have a responsibility, amongst many, to prepare teachers for decision-making on real world issues, such as child abuse and neglect. Their repertoire of skills can be enhanced by engaging with others, either face-to-face or online, in authentic problem-based learning. This paper draws on a study of early childhood student teachers who engaged in an authentic learning experience, which was to consider and to suggest how they would act upon a real-life case of child abuse encountered in an early childhood classroom in Queensland. This was the case of Toby (a pseudonym), who was suspected of being physically abused at home. Students drew upon relevant legislation, policy and resource materials to tackle Toby’s case. The paper provides evidence of students grappling with the complexity of a child abuse case and establishing, through collaboration with others, a proactive course of action. The paper has a dual focus. First, it discusses the pedagogical context in which early childhood student teachers deal with issues of child abuse and neglect in the course of their teacher education program. Second, it examines evidence of students engaging in collaborative problem-solving around issues of child abuse and neglect and teachers’ responsibilities, both legal and professional, to the children and families they work with. Early childhood policy-makers, practitioners and teacher educators are challenged to consider how early childhood teachers are best equipped to deal with child protection and early intervention.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³ √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
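The behaviour of the weight-dependent term in this bound is easy to check numerically. The sketch below drops constants and the ignored log A and log m factors, as the statement itself does, and only shows how the term scales.

```python
import math

def weight_bound_term(A, n, m):
    """The A^3 * sqrt(log(n)/m) term from the bound above
    (log A and log m factors ignored, as in the statement)."""
    return A**3 * math.sqrt(math.log(n) / m)

# Growing the training set shrinks the term; growing the weight bound A
# inflates it cubically, independently of how many weights the network has.
print(weight_bound_term(A=2, n=100, m=1_000))
print(weight_bound_term(A=2, n=100, m=100_000))   # 100x more data: 10x smaller
print(weight_bound_term(A=4, n=100, m=1_000))     # doubled weights: 8x larger
```

Note that the input dimension n enters only logarithmically, which is why the bound can be meaningful even when m is much smaller than the parameter count.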
Abstract:
In fault detection and diagnostics, limitations arising from the sensor network architecture are one of the main challenges in evaluating a system’s health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors like controls, financial constraints, and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components. This can significantly extend the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian network based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
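A minimal illustration of why a shared sensor complicates diagnosis, using inference by enumeration on a hypothetical two-component example. The numbers are purely illustrative; the paper's Bayesian-network and latent-variable treatment is far more general.

```python
from itertools import product

# Hypothetical example: one sensor alarms when either component A or B is
# faulty (a noisy-OR). Prior fault probabilities are made up for illustration.
prior = {"A": 0.05, "B": 0.10}
p_alarm = {(0, 0): 0.02, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}

def posterior_given_alarm():
    # Enumerate joint fault states (a, b), weighting by prior * likelihood.
    joint = {}
    for a, b in product((0, 1), repeat=2):
        pa = prior["A"] if a else 1 - prior["A"]
        pb = prior["B"] if b else 1 - prior["B"]
        joint[(a, b)] = pa * pb * p_alarm[(a, b)]
    z = sum(joint.values())
    # Marginal posterior that each component is faulty, given the alarm fired.
    return {"A": sum(v for (a, _), v in joint.items() if a) / z,
            "B": sum(v for (_, b), v in joint.items() if b) / z}

post = posterior_given_alarm()
# The shared sensor cannot separate A from B: both posteriors rise together,
# and only priors (or extra sensors) break the tie between the two components.
```

This is the ambiguity the paper's latent-variable formulation is designed to handle systematically.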
Abstract:
Resolving a noted open problem, we show that the Undirected Feedback Vertex Set problem, parameterized by the size of the solution set of vertices, is in the parameterized complexity class Poly(k), that is, polynomial-time pre-processing is sufficient to reduce an initial problem instance (G, k) to a decision-equivalent simplified instance (G', k') where k' ≤ k, and the number of vertices of G' is bounded by a polynomial function of k. Our main result shows an O(k^11) kernelization bound.
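Two of the simplest safe preprocessing rules for Feedback Vertex Set give a feel for what such a kernelization does. This sketch is not the paper's rule set, which needs many more, subtler rules to reach its polynomial kernel bound.

```python
# Two classic reduction rules for Feedback Vertex Set:
#  - a vertex of degree <= 1 lies on no cycle, so it can be deleted;
#  - a vertex with a self-loop lies on a cycle by itself, so it must be taken.
# (A full kernelization would also bypass degree-2 vertices, among other rules.)

def simplify(adj, k):
    """adj: {v: set of neighbours}; a self-loop is encoded as v in adj[v]."""
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj[v]:                 # self-loop: v is in every solution
                forced.add(v)
                k -= 1
                _delete(adj, v)
                changed = True
            elif len(adj[v]) <= 1:          # degree <= 1: v lies on no cycle
                _delete(adj, v)
                changed = True
    return adj, k, forced

def _delete(adj, v):
    for u in adj.pop(v):
        adj.get(u, set()).discard(v)

# A triangle, plus a pendant vertex 4 and a self-loop vertex 5.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}, 5: {5}}
g2, k2, forced = simplify(g, k=2)           # leaves only the triangle, k reduced
```

Rules like these shrink the instance without changing the yes/no answer; the hard part, which the paper resolves, is proving that enough such rules bound the remaining graph by a polynomial in k.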