961 results for Complexity science
Abstract:
This thesis was a step forward in extracting valuable features of human movement behaviour, in terms of space utilisation, from Media Access Control (MAC) data. This research offered an approach of lower cost and lower computational complexity than existing human movement tracking methods. The approach was successfully applied at QUT's Gardens Point campus and can be scaled to larger environments and communities. The information extractable from human movement by this approach can add significant value to the study of movement behaviour, enhance future urban and interior design, and improve crowd safety and evacuation planning.
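As an illustration only (not the thesis' actual pipeline), here is a minimal Python sketch of the kind of MAC-based data collection such an approach rests on, counting distinct Wi-Fi devices seen in a zone. scapy is a real packet library, but the interface name and zone label are hypothetical, and a monitor-mode wireless interface is assumed:

```python
from collections import defaultdict

from scapy.all import sniff
from scapy.layers.dot11 import Dot11

seen = defaultdict(set)  # zone label -> set of anonymised device identifiers

def handle(pkt):
    # addr2 is the transmitter MAC on most 802.11 frames.
    if pkt.haslayer(Dot11) and pkt.addr2:
        # Store a hash rather than the raw MAC to limit privacy exposure.
        seen["foyer"].add(hash(pkt.addr2))

# "wlan0mon" is a hypothetical monitor-mode interface name.
sniff(iface="wlan0mon", prn=handle, timeout=60)
print({zone: len(devices) for zone, devices in seen.items()})
```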
Abstract:
Is there a crisis in Australian science and mathematics education? Declining enrolments in upper secondary Science and Mathematics courses have gained much attention from the media, politicians and high-profile scientists over the last few years, yet there is no consensus amongst stakeholders about either the nature or the magnitude of the changes. We have collected raw enrolment data from the education departments of each of the Australian states and territories from 1992 to 2012 and analysed the trends for Biology, Chemistry, Physics, two composite subject groups (Earth Sciences and Multidisciplinary Sciences), and entry, intermediate and advanced Mathematics. The results of these analyses are discussed in terms of participation rates, raw enrolments and gender balance. We have found that the total number of students in Year 12 increased by around 16% from 1992 to 2012, while the participation rates for most Science and Mathematics subjects, as a proportion of the total Year 12 cohort, fell over the same period: Biology (-10%), Chemistry (-5%), Physics (-7%), Multidisciplinary Science (-5%), intermediate Mathematics (-11%) and advanced Mathematics (-7%). Participation rates increased in Earth Sciences (+0.3%) and entry Mathematics (+11%). In each case the greatest rates of change occurred prior to 2001 and have been slower and steadier since. We propose that the broadening of curriculum offerings, further driven by students' self-perception of ability and perceptions of subject difficulty and usefulness, is the most likely cause of the changes in participation. While these continuing declines may not amount to a crisis, there is undoubtedly serious cause for concern.
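A worked sketch of the participation-rate arithmetic used above, with hypothetical enrolment figures: rates are taken as a proportion of the total Year 12 cohort, so a subject's rate can fall even while its raw enrolment rises, because the cohort grew by around 16% over the period:

```python
def participation_rate(enrolment: int, cohort: int) -> float:
    """A subject's enrolment as a proportion of the total Year 12 cohort."""
    return enrolment / cohort

# Hypothetical figures for one subject: raw enrolment grows slightly,
# but the cohort grows faster (~16%), so the participation rate falls.
rate_1992 = participation_rate(35_000, 180_000)  # ~19.4%
rate_2012 = participation_rate(36_000, 209_000)  # ~17.2%

change = (rate_2012 - rate_1992) * 100
print(f"change: {change:+.1f} percentage points")
```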
Abstract:
This paper presents algebraic attacks on SOBER-t32 and SOBER-t16 without stuttering. For unstuttered SOBER-t32, two different attacks are implemented. In the first attack, we obtain multivariate equations of degree 10. An algebraic attack is then developed using a collection of output bits whose relation to the initial state of the LFSR can be described by low-degree equations. The resulting system contains 2^69 equations and monomials, which can be solved using Gaussian elimination with a complexity of 2^196.5. For the second attack, we build a multivariate equation of degree 14 and exploit the property that the monomials combined with the output bit are linear. By applying the Berlekamp-Massey algorithm, we obtain a system of linear equations from which the initial state of the LFSR can be recovered. The complexity of this attack is around O(2^100) with 2^92 keystream observations. The second algebraic attack is also applicable to SOBER-t16 without stuttering, taking around O(2^85) CPU clocks with 2^78 keystream observations.
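For context, the Berlekamp-Massey step used in the second attack is the standard algorithm for finding the shortest LFSR that generates a GF(2) sequence; a minimal sketch follows (the SOBER-specific construction of the degree-14 equation is not reproduced here):

```python
def berlekamp_massey(bits):
    """Shortest LFSR (length L, connection polynomial C) generating a
    GF(2) sequence; C[j] is the coefficient of the tap at delay j."""
    n = len(bits)
    C, B = [1] + [0] * n, [1] + [0] * n
    L, m = 0, 1
    for i in range(n):
        # Discrepancy between the sequence and the current LFSR's prediction.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]

# Example: 16 bits from the LFSR s[i] = s[i-1] XOR s[i-4] (x^4 + x + 1).
seq = [1, 0, 0, 1]
for i in range(4, 16):
    seq.append(seq[i - 1] ^ seq[i - 4])
print(berlekamp_massey(seq))  # -> (4, [1, 1, 0, 0, 1])
```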
Abstract:
In this paper we analyse the role of some of the building blocks of SHA-256. We show that the disturbance-correction strategy is applicable to the SHA-256 architecture, and we prove that the functions Σ and σ are vital for the security of SHA-256 by showing that for a variant without them it is possible to find collisions with a complexity of 2^64 hash operations. As a step towards an analysis of the full function, we present the results of our experiments on the Hamming weights of expanded messages for different variants of the message expansion, and show that low-weight expanded messages exist for XOR-linearised variants.
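For reference, the Σ and σ functions in question and the message expansion are defined in the SHA-256 specification (FIPS 180-2); a short sketch showing the standard expansion alongside the XOR-linearised variant (modular addition replaced by XOR) studied in the experiments:

```python
MASK = 0xFFFFFFFF

def rotr(x, n):
    """Right-rotate a 32-bit word by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK

# The functions shown to be vital for SHA-256's security.
def Sigma0(x): return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
def Sigma1(x): return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)
def sigma0(x): return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)
def sigma1(x): return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def expand(block16, xor_linearised=False):
    """Expand 16 message words to 64, standard or XOR-linearised."""
    add = (lambda a, b: a ^ b) if xor_linearised else (lambda a, b: (a + b) & MASK)
    w = list(block16)
    for t in range(16, 64):
        w.append(add(add(sigma1(w[t - 2]), w[t - 7]),
                     add(sigma0(w[t - 15]), w[t - 16])))
    return w

def hamming_weight(words):
    """Total number of set bits, the quantity studied for expanded messages."""
    return sum(bin(x).count("1") for x in words)
```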
Abstract:
Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds Nr. In this paper we study the security of such ciphers under an additional hypothesis: the S-box can be described by an overdefined system of algebraic equations (true with probability 1). We show that this is true for both Serpent (due to the small size of its S-boxes) and Rijndael (due to unexpected algebraic properties). We study general methods known for solving overdefined systems of equations, such as XL from Eurocrypt '00, and show their inefficiency. We then introduce a new method called XSL that exploits the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly with the number of rounds. The XSL attack would then be polynomial (or subexponential) in Nr, with a huge constant that is double-exponential in the size of the S-box. The exact complexity of such attacks is not known due to the redundant equations. Though the presented version of the XSL attack always costs more than exhaustive search for Rijndael, it seems to (marginally) break 256-bit Serpent. We suggest a new criterion for the design of S-boxes in block ciphers: they should not be describable by a system of polynomial equations that is too small or too overdefined.
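A minimal sketch of the counting argument behind "overdefined": evaluate every monomial of degree at most two in the S-box's input and output bits at all inputs, and take the GF(2) nullspace dimension of the resulting matrix. With 16 rows and 37 monomials, any 4-bit S-box satisfies at least 21 independent quadratic equations, which is why small S-boxes such as Serpent's necessarily yield overdefined systems. The permutation below is arbitrary, not taken from either cipher:

```python
from itertools import combinations

def rank_gf2(rows, width):
    """Rank over GF(2) of row vectors packed as Python ints."""
    pivots = {}  # leading-bit position -> reduced basis vector
    for v in rows:
        for bit in reversed(range(width)):
            if not (v >> bit) & 1:
                continue
            if bit in pivots:
                v ^= pivots[bit]
            else:
                pivots[bit] = v
                break
    return len(pivots)

def quadratic_equation_count(sbox, n):
    """Independent quadratic GF(2) equations an n-bit S-box satisfies with
    probability 1: monomial count minus the rank of the evaluation matrix."""
    width = 1 + 2 * n + (2 * n) * (2 * n - 1) // 2  # monomials of degree <= 2
    rows = []
    for x in range(2 ** n):
        bits = [(x >> i) & 1 for i in range(n)] + [(sbox[x] >> i) & 1 for i in range(n)]
        monomials = [1] + bits + [a & b for a, b in combinations(bits, 2)]
        rows.append(int("".join(map(str, monomials)), 2))
    return width - rank_gf2(rows, width)

# Arbitrary 4-bit permutation, for illustration only.
toy_sbox = [7, 12, 1, 9, 0, 14, 5, 11, 3, 8, 13, 4, 10, 2, 15, 6]
print(quadratic_equation_count(toy_sbox, 4))  # at least 37 - 16 = 21
```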
Abstract:
In this paper we present a cryptanalysis of a new 256-bit hash function, FORK-256, proposed by Hong et al. at FSE 2006. This cryptanalysis is based on some unexpected differentials existing for the step transformation. We show their possible uses in different attack scenarios by giving a 1-bit (resp. 2-bit) near-collision attack against the full compression function of FORK-256 with a complexity of 2^125 (resp. 2^120) and negligible memory, and by exhibiting a 22-bit near pseudo-collision. We also show that we can find collisions for the full compression function with a small amount of memory, with a complexity not exceeding 2^126.6 hash evaluations. We further show how to reduce this complexity to 2^109.6 hash computations by using 2^73 memory words. Finally, we show that this attack can be extended at no additional cost to find collisions for the full hash function, i.e. with the predefined IV.
Abstract:
Social networking sites (SNSs), with their large numbers of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization on SNSs, particularly Facebook. Using the grounded theory method, we develop a model that explains which source characteristics influence Facebook users to judge the attacker as credible, and how.
Abstract:
Contextual factors for sustainable development, such as population growth, energy and resource availability and consumption levels, food production yield, and growth in pollution, create numerous complex and rapidly changing education and training requirements for a variety of professions, including engineering. Furthermore, these requirements may not be clearly understood or expressed by designers, governments, professional bodies or industry. Within this context, this paper focuses on one priority area for greening the economy through sustainable development (improving energy efficiency) and discusses the complexity of capacity-building needs for professionals. The paper begins by acknowledging the historical evolution of sustainability considerations and the complexity embedded in built environment solutions. The authors propose a dual-track approach to capacity building, with a short-term focus on improvement (i.e., making peaking challenges a priority for postgraduate education) and a long-term focus on transformational innovation (i.e., making tailing challenges a priority for undergraduate education). A case study is provided of Australian experiences with energy efficiency over the last decade. The authors conclude with reflections on the implications for the approach.
Abstract:
This review paper presents historical perspectives, recent advances and future directions in the multidisciplinary research field of plasma nanoscience. The current status and future challenges are presented using a three-dimensional framework. The first and largest dimension covers the most important classes of nanoscale objects (nanostructures, nanofeatures and nanoassemblies/nanoarchitectures) and materials systems, namely carbon nanotubes, nanofibres, graphene, graphene nanoribbons, graphene nanoflakes, nanodiamond and related carbon-based nanostructures; metal, silicon and other inorganic nanoparticles and nanostructures; soft organic nanomaterials; nano-biomaterials; biological objects and nanoscale plasma etching. In the second dimension, we discuss the most common types of plasmas and plasma reactors used in nanoscale plasma synthesis and processing. These include low-temperature non-equilibrium plasmas at low and high pressures, thermal plasmas, high-pressure microplasmas, plasmas in liquids and plasma-liquid interactions, high-energy-density plasmas, and ionized physical vapour deposition as well as some other plasma-enhanced nanofabrication techniques. In the third dimension, we outline some of the 'Grand Science Challenges' and 'Grand Socio-economic Challenges' to which significant contributions from plasma nanoscience-related research can be expected in the near future. The urgent need for a stronger focus on practical, outcome-oriented research to tackle the grand challenges is emphasized and concisely formulated as 'from controlled complexity to practical simplicity' in solving grand challenges.
Abstract:
The results of the combined experimental and numerical study suggest that nonequilibrium plasma-driven self-organization leads to better size and positional uniformity of nickel nanodot arrays on a Si(100) surface compared with neutral gas-based processes under similar conditions. This phenomenon is explained by introducing the absorption zone patterns, whose areas relative to the small nanodot sizes become larger when the surface is charged. Our results suggest that strongly nonequilibrium and higher-complexity plasma systems can be used to improve ordering and size uniformity in nanodot arrays of various materials, a common and seemingly irresolvable problem in self-organized systems of small nanoparticles.
Abstract:
This article introduces a deterministic approach to using low-temperature, thermally non-equilibrium plasmas to synthesize delicate low-dimensional nanostructures of a small number of atoms on plasma-exposed surfaces. This approach is based on a set of plasma-related strategies to control elementary surface processes, an area traditionally covered by surface science. Major issues related to balanced delivery and consumption of building units, appropriate choice of process conditions, and accounting for plasma-related electric fields, electric charges and polarization effects are identified and discussed in the quantum dot nanoarray context. Examples of a suitable plasma-aided nanofabrication facility and specific effects of a plasma-based environment on self-organized growth of size- and position-uniform nanodot arrays are shown. These results suggest a very positive outlook for using low-temperature plasma-based nanotools in high-precision nanofabrication of self-assembled nanostructures and elements of nanodevices, one of the areas of continuously rising demand from academia and industry.
Abstract:
The role of emotion during learning encounters in science teacher education is under-researched and under-theorized. In this case study we explore the emotional climates, that is, the collective states of emotional arousal, of a preservice secondary science education class to illuminate practice for producing and reproducing high quality learning experiences for preservice science teachers. Theories related to the sociology of emotions informed our analyses of data sources such as preservice teachers' perceptions of the emotional climate of their class, emotional facial expressions, classroom conversations, and cogenerative dialogue. The major outcome of our analyses was that even though preservice teachers reported a highly positive emotional climate during the professor's science demonstrations, they also valued the professor's in-the-moment reflections on her teaching, which were associated with low emotional climate ratings. We co-relate emotional climate data and preservice teachers' comments during cogenerative dialogue to expand our understanding of high quality experiences and emotional climate in science teacher education. Our study also contributes refinements to research perspectives on emotional climate.
Abstract:
Multiscale hybrid simulations that bridge the nine-order-of-magnitude spatial gap between the macroscopic plasma nanotools and microscopic surface processes on nanostructured solids are described. Two specific examples of carbon nanotip-like and semiconductor quantum dot nanopatterns are considered. These simulations are instrumental in developing physical principles of nanoscale assembly processes on solid surfaces exposed to low-temperature plasmas.
Abstract:
This article presents a study of how humans perceive and judge the relevance of documents. Humans are adept at making reasonably robust and quick decisions about what information is relevant to them, despite the ever-increasing complexity and volume of their surrounding information environment. The literature on document relevance has identified various dimensions of relevance (e.g., topicality, novelty, etc.); however, little is understood about how these dimensions may interact. We performed a crowdsourced study of how human subjects judge two relevance dimensions in relation to document snippets retrieved from an internet search engine. The order of the judgments was controlled. For those judgments exhibiting an order effect, a q-test was performed to determine whether the order effects can be explained by a quantum decision model based on incompatible decision perspectives. Some evidence of incompatibility was found, which suggests that incompatible decision perspectives are appropriate for explaining interacting dimensions of relevance in such instances.
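A minimal sketch of the order-effect statistic such a q-test is built on, assuming the quantum-question equality from question-order models (all counts below are hypothetical): the model predicts that the mixed yes/no proportions agree across the two question orders, so q should be near zero:

```python
def q_statistic(counts_ab, counts_ba):
    """q = [p_AB(y,n) + p_AB(n,y)] - [p_BA(y,n) + p_BA(n,y)].
    Keys are (answer to first question, answer to second question)."""
    def mixed(counts):
        total = sum(counts.values())
        return (counts[("y", "n")] + counts[("n", "y")]) / total
    return mixed(counts_ab) - mixed(counts_ba)

# Hypothetical judgment counts for relevance dimensions A and B in both orders.
counts_ab = {("y", "y"): 42, ("y", "n"): 18, ("n", "y"): 25, ("n", "n"): 15}
counts_ba = {("y", "y"): 40, ("y", "n"): 26, ("n", "y"): 20, ("n", "n"): 14}
print(f"q = {q_statistic(counts_ab, counts_ba):+.3f}")  # near 0 fits the model
```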
Abstract:
The usual practice when studying a large power system is digital computer simulation. However, the impact of large-scale use of small distributed generators on a power network cannot be evaluated strictly by simulation, since many of these components cannot be accurately modelled. Moreover, the network complexity makes practical testing on a physical network nearly impossible. This study discusses the paradigm of interfacing a real-time simulation of a power system to real-life hardware devices. Splitting a network into two parts and running a real-time simulation in parallel with a physical system in this way is usually termed power-hardware-in-the-loop (PHIL) simulation. The hardware part is driven by a voltage source converter that amplifies the signals of the simulator. In this paper, the effects of a suitable control strategy on the performance of PHIL and the associated stability aspects are analysed in detail. The analyses are validated through several experimental tests using a real-time digital simulator.
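A minimal discrete-time sketch (not the paper's converter setup) of the common voltage-type ideal-transformer PHIL interface, illustrating why the control strategy and the software/hardware impedance ratio govern stability: with a one-step feedback delay the loop converges only when the simulated-side resistance is smaller than the hardware resistance:

```python
# Hypothetical values; the interface is stable because R_SOFT / R_HARD < 1.
V_SRC, R_SOFT, R_HARD = 10.0, 0.5, 1.0

i_meas = 0.0  # current measured on the hardware side, fed back each step
for _ in range(20):
    v_amp = V_SRC - R_SOFT * i_meas  # simulated Thevenin voltage -> amplifier
    i_meas = v_amp / R_HARD          # hardware responds; sensor closes the loop
print(f"settled at {i_meas:.4f} A "
      f"(exact coupled value: {V_SRC / (R_SOFT + R_HARD):.4f} A)")
```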