994 results for problem complexity
Abstract:
Technological development brings more and more complex systems to consumer markets. The time required to bring a new product to market is crucial for a company's competitive edge. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power, and distributed simulation can be used to meet these demands. Distributed simulation, however, has its own problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, the communication protocols, the partitioning and distribution of the problem, the capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused computing capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation, which means that all or part of the simulation work must be removed from a workstation when its owner wishes to use it again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied and shown to perform better than no load balancing, and different approaches to load balancing are discussed.
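To make the load-balancing idea concrete, the following minimal sketch (not Diworse's actual algorithm; the threshold, the Workstation structure and the migration routine are illustrative assumptions) moves simulation partitions off a workstation once its external load indicates that the owner is active again:

```python
from dataclasses import dataclass, field

@dataclass
class Workstation:
    name: str
    external_load: float  # fraction of CPU taken by the owner's own work
    partitions: list = field(default_factory=list)  # simulation partitions hosted here

LOAD_THRESHOLD = 0.25  # assumed: above this, the owner is considered active

def rebalance(stations):
    """Move simulation partitions away from busy workstations to the idlest one."""
    busy = [w for w in stations if w.external_load > LOAD_THRESHOLD and w.partitions]
    for w in busy:
        target = min(stations, key=lambda s: s.external_load)
        if target is w:
            continue  # nowhere better to go
        target.partitions.extend(w.partitions)
        print(f"migrating {len(w.partitions)} partition(s): {w.name} -> {target.name}")
        w.partitions.clear()

stations = [
    Workstation("ws1", 0.70, ["lp0", "lp1"]),  # owner came back to this machine
    Workstation("ws2", 0.05, ["lp2"]),
    Workstation("ws3", 0.10, []),
]
rebalance(stations)
```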
Abstract:
Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined as irregularities in this thesis. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly. The Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods presented are shown to differ in generality and computational complexity. The need to detect fine and sparse details is common in quality inspection tasks, e.g. locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by defining sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection. Based on the definition, the use of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup: missing dots in a repeating raster pattern are detected from Heliotest strips and small surface defects from IGT picking papers.
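As an illustration of the Fourier-based separation the thesis builds on (a minimal sketch, not one of the three proposed methods; the peak-selection rule and the keep_fraction parameter are assumptions), the dominant spectral components can be taken as the regular pattern and subtracted, leaving a residual in which irregularities stand out:

```python
import numpy as np

def irregularity_map(image, keep_fraction=0.002):
    """Suppress the strongest spectral components (the regular pattern) and
    return the residual, in which irregularities dominate."""
    F = np.fft.fft2(image)
    mag = np.abs(F)
    # assumed rule: the top keep_fraction of coefficients encode the regular pattern
    cutoff = np.quantile(mag, 1.0 - keep_fraction)
    F_regular = np.where(mag >= cutoff, F, 0)
    regular = np.real(np.fft.ifft2(F_regular))
    return image - regular  # residual: the irregular part

# toy example: a periodic raster with one flattened ("missing") cell
x, y = np.meshgrid(np.arange(128), np.arange(128))
raster = 0.5 + 0.5 * np.cos(2 * np.pi * x / 8) * np.cos(2 * np.pi * y / 8)
raster[60:68, 60:68] = raster[60:68, 60:68].mean()
residual = irregularity_map(raster)
print("strongest residual at:", np.unravel_index(np.abs(residual).argmax(), residual.shape))
```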
Abstract:
There is a broad consensus among economists that technological change has been a major contributor to productivity growth and, hence, to the growth of material welfare in western industrialized countries, at least over the last century. Paradoxically, this issue has not been the focal point of theoretical economics. At the same time, we have witnessed the rise of the importance of technological issues at the strategic management level of business firms. Interestingly, research has not adequately responded to this challenge either. The tension between the overwhelming empirical evidence of the importance of technology and its relative omission in research offers a challenging target for a methodological endeavor. This study deals with the question of how different theories cope with technology and explain technological change. The focus is at the firm level and the analysis concentrates on metatheoretical issues, except for the last two chapters, which examine the problems of the strategic management of technology. Here the aim is to build a new evolutionary-based theoretical framework to analyze innovation processes at the firm level. The study consists of ten chapters. Chapter 1 poses the research problem and contrasts the two basic approaches to be analyzed, neoclassical and evolutionary. Chapter 2 introduces the methodological framework, which is based on the methodology of isolation. The methodological and ontological commitments of the rival approaches are revealed and basic questions concerning their ways of theorizing are elaborated. Chapters 3-6 deal with the so-called substantive isolative criteria. The aim is to examine how the different approaches cope with such critical issues as the inherent uncertainty and complexity of innovative activities (cognitive isolations, chapter 3), the boundedness of rationality of innovating agents (behavioral isolations, chapter 4), the multidimensional nature of technology (chapter 5), and governance costs related to technology (chapter 6). Chapters 7 and 8 put all these things together and look at the explanatory structures used by the neoclassical and evolutionary approaches in the light of substantive isolations. The last two chapters of the study use the methodological framework and tools to appraise different economics-based candidates in the context of the strategic management of technology. The aim is to analyze how the different approaches answer the fundamental question: how can firms gain competitive advantages through innovations, and how can the rents appropriated from successful innovations be sustained? The last chapter introduces a new evolutionary-based technology management framework. The largely omitted issues of entrepreneurship are also examined.
Abstract:
In this paper we consider a sequential allocation problem with n individuals. The first individual can consume any amount of some endowment, leaving the remainder for the second individual, and so on. Motivated by the limitations associated with the cooperative and non-cooperative solutions, we propose a new approach. We establish some axioms that should be satisfied: representativeness, impartiality, etc. The result is a unique asymptotic allocation rule. It is shown for n = 2, 3, 4, and a claim is made for general n. We show that it satisfies a set of desirable properties. Keywords: Sequential allocation rule, River sharing problem, Cooperative and non-cooperative games, Dictator and ultimatum games. JEL classification: C79, D63, D74.
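For readers unfamiliar with the setting, the sequential structure can be written out schematically (the notation below is assumed for illustration and is not taken from the paper):

```latex
% Sequential allocation of an endowment E among n individuals (assumed notation).
% Individual 1 moves first, individual 2 sees what is left, and so on.
\begin{align*}
  x_1 &\in [0, E], \\
  x_i &\in \Big[0,\; E - \textstyle\sum_{j<i} x_j\Big], \qquad i = 2, \dots, n, \\
  \sum_{i=1}^{n} x_i &\le E .
\end{align*}
% An allocation rule assigns to each (n, E) a vector (x_1, ..., x_n);
% the paper characterises one such rule axiomatically (representativeness,
% impartiality, etc.) for n = 2, 3, 4.
```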
Abstract:
Although global environmental governance has traditionally couched global warming in terms of annual CO2 emissions (a flow), global mean temperature is actually determined by cumulative CO2 emissions in the atmosphere (a stock). Thanks to advances in the scientific community, it is now possible to quantify the "global carbon budget", that is, the amount of cumulative CO2 emissions available before crossing the 2°C threshold (Meinshausen et al., 2009). The present approach proposes to analyze the allocation of this global carbon budget among countries as a classical conflicting claims problem (O'Neill, 1982). Based on some appealing principles, an efficient and sustainable allocation of the available carbon budget from 2000 to 2050 is proposed, taking into account different environmental risk scenarios. Keywords: Carbon budget, Conflicting claims problem, Distribution, Climate change. JEL classification: C79, D71, D74, H41, H87, Q50, Q54, Q58.
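As an illustration of how a conflicting claims problem works (a minimal sketch: the proportional rule below is only one classical rule, and the figures are hypothetical, not the allocation proposed in the paper), a budget smaller than the sum of the claims is divided among the claimants:

```python
def proportional_rule(budget, claims):
    """Classical proportional rule for a conflicting claims problem:
    each claimant receives a share of the budget proportional to its claim."""
    total = sum(claims.values())
    if total <= budget:
        return dict(claims)  # no conflict: everyone gets their claim
    return {k: budget * c / total for k, c in claims.items()}

# hypothetical numbers, only to show the mechanics (GtCO2)
claims = {"A": 400.0, "B": 250.0, "C": 150.0}
budget = 600.0  # available cumulative emissions before the threshold
allocation = proportional_rule(budget, claims)
print(allocation)                # {'A': 300.0, 'B': 187.5, 'C': 112.5}
print(sum(allocation.values()))  # 600.0 -- the whole budget is allocated
```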
Abstract:
We prove that there are one-parameter families of planar differential equations for which the center problem has a trivial solution while, at the same time, the cyclicity of the weak focus is arbitrarily high. We illustrate this phenomenon with several examples for which this cyclicity is computed.
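Schematically, the objects involved can be written as follows (generic notation for a weak focus at the origin; the concrete families constructed in the paper are not reproduced here):

```latex
% Generic one-parameter family of planar systems with a weak focus at the origin
% (schematic notation only).
\begin{align*}
  \dot{x} &= -y + P(x, y; \lambda), \\
  \dot{y} &= \phantom{-}x + Q(x, y; \lambda),
\end{align*}
% where P and Q contain the higher-order terms. The center problem asks for which
% parameter values the origin is a center rather than a focus; the cyclicity is
% the number of limit cycles that can bifurcate from the weak focus.
```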
Abstract:
A Fundamentals of Computing Theory course involves different topics that are core to the Computer Science curricula and whose level of abstraction makes them difficult both to teach and to learn. This difficulty stems from the complexity of the abstract notions involved and the required mathematical background. Surveys conducted among our students showed that many of them were applying some theoretical concepts mechanically rather than developing significant learning. This paper presents a number of didactic strategies that we introduced in the Fundamentals of Computing Theory curricula to cope with this problem. The proposed strategies were based on a stronger use of technology and a constructivist approach. The final goal was to promote more significant learning of the course topics.
Abstract:
The production, distribution and use of false identity documents constitute a threat to both public and private security. Fraudulent documents are a catalyst for a multitude of crimes, from the most trivial to the most serious and organised forms. The dimension, complexity, low visibility, and the repetitive and evolving character of the production and use of false identity documents call for new solutions that go beyond the traditional case-by-case approach or the technology-focused strategy whose failure is revealed by the historical perspective. These new solutions require strengthening the ability to understand the crime phenomena and crime problems posed by false identity documents. Such an understanding is pivotal in order to imagine, evaluate and decide on the most appropriate measures and responses. Therefore, the analysis capacities and crime intelligence functions that underpin the most recent policing models, such as intelligence-led policing or problem-oriented policing, have to be developed. In this context, the doctoral research work adopts an original position by postulating that false identity documents can usefully be perceived as the material remnant resulting from the criminal activity undertaken by forgers, namely the manufacture or modification of identity documents. Based on this fundamental postulate, it is proposed that a scientific, methodical and systematic processing of these traces through a forensic intelligence approach can generate phenomenological knowledge on the forms of crime that produce, distribute and use false identity documents, knowledge that integrates with and advantageously serves crime intelligence efforts. In support of this thesis and of a more general study of forensic intelligence, the doctoral work proposes definitions and models, describes new profiling methods, and initiates the construction of a catalogue of forms of analysis. It also leverages experimentations and case studies. The results demonstrate that the systematic processing of forensic data contributes usefully and relevantly to strategic, tactical and operational crime intelligence, as well as to criminology. Combined with other available information, the forensic intelligence produced may support policing in its repressive, proactive, preventive and control activities. In particular, the proposed profiling methods make it possible to reveal trends across extended datasets, to analyse modus operandi, and to infer that false identity documents have a common or a different source. These methods support the detection and follow-up of crime series, crime problems and phenomena, and therefore contribute to crime monitoring efforts.
They make it possible to link and regroup by problem cases that were previously viewed as isolated, to highlight organised forms of crime that deserve the greatest attention, and to elicit robust and novel knowledge offering a deeper perception of crime. The doctoral research work also discusses the difficulties associated with managing data and information at different levels of generality, and the difficulties associated with implementing the forensic intelligence process in practice. The doctoral work focuses primarily on false identity documents and their treatment by policing stakeholders. Through an inductive process, however, it makes a generalisation which underlines that these observations do not apply only to false identity documents but to any kind of trace from which a profile is extracted. A more transversal definition and understanding of the concept and function of forensic intelligence therefore derives from the doctoral work.
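As a purely illustrative sketch of profile-based case linkage (the features, the similarity measure and the threshold are assumptions, not the profiling methods developed in the thesis), documents whose extracted feature profiles are sufficiently similar can be grouped as possibly sharing a source:

```python
def jaccard(a, b):
    """Similarity between two sets of extracted document features."""
    return len(a & b) / len(a | b)

def link_cases(profiles, threshold=0.6):
    """Group documents whose profiles are similar enough to suggest a common source."""
    names = list(profiles)
    links = []
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if jaccard(profiles[u], profiles[v]) >= threshold:
                links.append((u, v))
    return links

# hypothetical feature profiles (printing technique, font, security-feature imitation, ...)
profiles = {
    "doc_01": {"inkjet", "font_A", "hologram_copy_X"},
    "doc_02": {"inkjet", "font_A", "hologram_copy_X", "laminate_Y"},
    "doc_03": {"laser", "font_B", "no_hologram"},
}
print(link_cases(profiles))  # [('doc_01', 'doc_02')]
```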
Abstract:
This empirical study investigates the effects of long-term, embedded, structured and supported instruction in Secondary Education on the development of Information Problem Solving (IPS) skills. Forty secondary school students in the 7th and 8th grades (13–15 years old) took part in the 2-year study: twenty of them received the IPS instruction designed in this study, and the remaining twenty formed the control group. All the students were pre- and post-tested in their regular classrooms, and their IPS process and performance were logged by means of screen capture software to guarantee ecological validity. The IPS constituent skills, the web search sub-skills and the answers given by each participant were analyzed. The main findings suggest that the experimental students showed a more expert pattern than the control students in the constituent skill 'defining the problem' and in the following two web search sub-skills: the 'search terms' typed into a search engine and the 'selected results' from a SERP. In addition, task performance scores were statistically better for the experimental students than for the control group. The paper contributes to the discussion of how well-designed and well-embedded scaffolds can be built into instructional programs to guarantee the development and efficiency of students' IPS skills, helping them use online information better and participate fully in the global knowledge society.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never diminishing popularity, but over the last few years these problems have gone from mere entertainment to an interesting research area, an area interesting in two respects, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the degree of balance they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP instance. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), which GSP generalizes. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
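As an illustration of the CSP view of the problem (a minimal backtracking model under assumed conventions: rectangular m-row by n-column regions and 0 for holes; this is not the thesis' CSP/SAT encodings or instance generators):

```python
def solve_gsp(grid, m, n):
    """Backtracking CSP solver for a Generalized Sudoku of order m*n with
    m-row by n-column rectangular regions; 0 marks a hole (empty cell)."""
    order = m * n

    def consistent(r, c, v):
        if any(grid[r][j] == v for j in range(order)):  # row constraint
            return False
        if any(grid[i][c] == v for i in range(order)):  # column constraint
            return False
        r0, c0 = (r // m) * m, (c // n) * n             # top-left of the region
        return all(grid[i][j] != v
                   for i in range(r0, r0 + m)
                   for j in range(c0, c0 + n))          # region constraint

    for r in range(order):
        for c in range(order):
            if grid[r][c] == 0:
                for v in range(1, order + 1):
                    if consistent(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False  # no value fits here: backtrack
    return True               # no holes left: solved

# 6x6 instance with 2x3 regions (in a GSP, solution existence is not guaranteed;
# this particular instance is solvable)
puzzle = [[1, 0, 3, 0, 5, 6],
          [0, 5, 6, 1, 0, 3],
          [2, 3, 0, 5, 6, 0],
          [5, 0, 4, 0, 3, 1],
          [0, 1, 2, 6, 0, 5],
          [6, 4, 0, 3, 1, 0]]
if solve_gsp(puzzle, 2, 3):
    for row in puzzle:
        print(row)
```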
Abstract:
In this paper we describe a taxonomy of task demands that distinguishes between Task Complexity, Task Condition and Task Difficulty. We then describe three theoretical claims and predictions of the Cognition Hypothesis (Robinson 2001, 2003b, 2005a) concerning the effects of task complexity on: (a) language production; (b) interaction and uptake of information available in the input to tasks; and (c) individual differences-task interactions. Finally, we summarize the findings of the empirical studies in this special issue, all of which address one or more of these predictions, and point to some directions for future research into the effects of task complexity on learning and performance.