Abstract:
Unsteady natural convection inside a triangular cavity is studied. The cavity is filled with a saturated porous medium; the left inclined wall is non-isothermal, the bottom surface is isothermally heated, and the right inclined surface is isothermally cooled. Internal heat generation that depends on the fluid temperature is also considered. The governing equations are solved numerically by the finite element method. The Prandtl number of the fluid is taken as 0.7 (air), while the aspect ratio and the Rayleigh number are taken as 0.5 and 10^5, respectively. The effects of the porosity of the medium and of the heat generation on the fluid flow and heat transfer are presented in the form of streamlines and isotherms. The rate of heat transfer through the three surfaces of the enclosure is also presented.
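For reference, a minimal sketch of the standard dimensionless groups governing such a problem, assuming the usual non-dimensionalization with characteristic length L, hot and cold temperatures T_h and T_c, kinematic viscosity nu and thermal diffusivity alpha (the abstract does not state the exact scales the paper uses):

```latex
% Standard definitions; the paper's exact non-dimensionalization is assumed.
\mathrm{Ra} = \frac{g\,\beta\,(T_h - T_c)\,L^3}{\nu\,\alpha}, \qquad
\mathrm{Pr} = \frac{\nu}{\alpha}
```

The study's stated values are Pr = 0.7, Ra = 10^5 and aspect ratio 0.5.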
Abstract:
Natural convection in a triangular enclosure subject to non-uniform cooling at the inclined surfaces and uniform heating at the base is investigated numerically. Numerical simulations of the unsteady flows over a range of Rayleigh numbers and aspect ratios are carried out using the finite volume method. Since the upper surface is cooled and the bottom surface is heated, the air flow in the enclosure is potentially unstable to the Rayleigh-Bénard instability. It is revealed that the transient flow development in the enclosure can be classified into three distinct stages: an early stage, a transitional stage and a steady stage. It is also found that the flow inside the enclosure depends strongly on the governing parameters, the Rayleigh number and the aspect ratio. The asymmetric behaviour of the flow about the geometric centre line is discussed in detail. The heat transfer through the roof and the ceiling, in the form of the Nusselt number, is also reported in this study.
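As a point of reference, one common definition of the Nusselt number used to report wall heat transfer in such studies (the abstract does not give the paper's reference scales, so this is an assumed standard form):

```latex
% A standard surface Nusselt number; the reference length L and temperature
% difference \Delta T are assumptions, not taken from the paper.
\mathrm{Nu} = \frac{hL}{k}
            = -\frac{L}{\Delta T}\,\frac{\partial T}{\partial n}\bigg|_{\mathrm{wall}}
```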
Abstract:
In the current economy, knowledge has been recognised as a valuable organisational asset, a crucial factor that helps organisations succeed in highly competitive environments. Many organisations have begun projects and special initiatives aimed at fostering better knowledge sharing amongst their employees. Not surprisingly, information technology (IT) has been a central element of many of these projects and initiatives, as the potential of emerging information technologies such as Web 2.0 for enabling the process of managing organisational knowledge is recognised. This technology can be used as a collaborative system for knowledge management (KM) within enterprises. Enterprise 2.0 is the application of Web 2.0 in an organisational context. Enterprise 2.0 technologies are web-based social software that facilitate collaboration, communication and information flow in a bidirectional manner: an essential aspect of organisational knowledge management. This chapter explains how Enterprise 2.0 technologies (Web 2.0 technologies within organisations) can support knowledge management. The chapter also explores how such technologies support the codification (technology-centred) and social network (people-centred) approaches to KM, towards bridging the current gap between these two approaches.
Abstract:
This edited book brings together empirical studies of young people in paid employment from a variety of disciplinary perspectives and in different national settings. In the context of increasing youth labour market participation rates and debates about the value of early employment, it draws on multi-level analyses to reflect the complexity of the field. Each of the three sections of the book explores a key aspect of young people's employment: their experience of work, intersections between work and education, and the impact of other actors and institutions. The book contributes to broadening and strengthening knowledge about the opportunities and constraints that young people face during their formative experiences in the labour market.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many approaches to this dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the problem are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system, and hence fail to take the time-varying and uncertain character of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of this research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. The framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the impact of co-runner cache usage on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, an adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to the cache-aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed-loop cache-aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is used to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute the final framework: the closed-loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes are analysed. According to these outcomes, it is concluded that our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-cache-dependent thread instruction count to the co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
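To make the estimation step concrete, here is a minimal sketch of a conventional recursive least squares update applied to a cache miss-count series. The thesis uses a numerically more robust QR-based RLS variant; this plain covariance form, and all names and the toy trace below, are illustrative assumptions only:

```python
import numpy as np

# Minimal recursive least squares (RLS) estimator, illustrating the kind of
# on-line parameter estimation the thesis applies to cache miss-count series.
# The thesis uses a QR-based variant; this covariance form is a sketch only.
class RLSEstimator:
    def __init__(self, n_params, forgetting=0.98, delta=100.0):
        self.theta = np.zeros(n_params)     # estimated model parameters
        self.P = delta * np.eye(n_params)   # inverse correlation matrix
        self.lam = forgetting               # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        """phi: regressor vector (e.g. recent miss counts); y: new sample."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        err = y - phi @ self.theta                           # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: track an AR(2)-like miss pattern from a hypothetical counter trace.
rls = RLSEstimator(n_params=2)
misses = [120, 130, 125, 140, 138, 150]
for t in range(2, len(misses)):
    rls.update([misses[t - 1], misses[t - 2]], misses[t])
print(rls.theta)  # estimated AR coefficients
```

A forgetting factor below 1 lets the estimator track the time-varying cache behaviour the thesis emphasises.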
Abstract:
This is a practice-led project consisting of a historical novel, Abduction, and a related exegesis. The novel is a third-person intimate narrative set in the mid-nineteenth century and is based on actual events and persons caught up in, or furthering, the mass dispossession of small farmers in Scotland known as the 'Clearances'. The narrative focuses on the situation in the Outer Hebrides and northern Scotland. It is based on documented facts leading up to a controversial trial in 1850 that arose because a twenty-year-old woman of the period (the central protagonist, Jess Mackenzie) eloped with a young farmer to escape her parents' pressure to marry a rival suitor, himself a powerful lawyer and 'factor' at the centre of many of the Clearances. The young woman's independent ideas were ahead of her time, and the decisions she made under great pressure were crucial in some dramatic events that unfolded in Scotland and later in the colony of Victoria, to which she and her new husband emigrated soon after the trial. The exegesis is composed of two unequal parts. It briefly considers the development of the literary historical fiction genre in the nineteenth century, with Walter Scott in particular, a genre that Victorian and contemporary authors alike have found useful for representing women's issues of the Victorian era. The exegesis also briefly considers the appropriateness of the fiction genre (as opposed to creative nonfiction) in creating the lived experience in a fact-based work. The major part of the exegesis is a detailed, reflective analysis of the problem-solving process involved in writing the novel, structured by reference to Kate Grenville's Searching for the Secret River, a work of metawriting that explains her creative process in researching and writing historical fiction based on fact.
Abstract:
Train delay is one of the most important indicators used to evaluate the service quality of a railway. Because of movement interactions among trains, a delayed train may conflict with trains scheduled on other lines in a junction area. A train that loses a conflict may be forced to stop or slow down by restrictive signals, which leads to a loss of run-time and may enlarge delays further. This paper proposes a time-saving train control method to recover delays as soon as possible. In the proposed method, golden section search is adopted to identify the optimal train speed at the expected time of a restrictive signal aspect upgrade, which enables the train to depart from the conflict area as soon as possible. A heuristic method is then developed to obtain the advisory train speed profile that assists drivers in train control. A simulation study indicates that, compared with the traditional maximum traction strategy and the green wave strategy, the proposed method enables the train to recover delays as soon as possible in the case of disturbances at railway junctions.
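As an illustration of the optimisation step, here is a minimal sketch of a generic golden section search over candidate speeds. The objective function, bounds and numbers are hypothetical stand-ins, not the paper's train dynamics model:

```python
import math

# Generic golden section search for a unimodal objective, sketching the
# optimisation step the paper uses to pick a train speed; the cost function
# below (time lost around a restrictive signal vs. speed) is illustrative.
def golden_section_search(f, lo, hi, tol=1e-4):
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Hypothetical cost: time lost as a function of the speed (m/s) held while
# approaching the signal.
cost = lambda v: (500.0 / v) + 0.05 * (v - 18.0) ** 2
v_opt = golden_section_search(cost, 5.0, 30.0)
print(f"advisory speed ~ {v_opt:.2f} m/s")
```

Golden section search suits this setting because it requires only unimodality of the cost in speed, not derivatives.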
Abstract:
Anybody who has attempted to publish some aspect of their work in an academic journal will know that it isn't as easy as it may seem. The amount of preparation required of a manuscript can be quite daunting. Besides actually writing the manuscript, the authors are faced with a number of technical requirements. Each journal has its own formatting requirements, relating not only to section headings and text layout, but also to very small details such as the placement of commas in reference lists. Then, if data are presented in the form of figures, they must be formatted so that they can be understood by the readership, and most journals still require that the data be in a format which can be read when printed in black-and-white. Most daunting (and important) of all, for the article to be scientifically valid it must be absolutely true in its representation of the work reported (i.e. all data must be shown unless a strong justification exists for removing data points), and this might cause angst in the minds of the authors when the results aren't clear or possibly contradict the expected or desired result.
Abstract:
The experience of sexual desire in older age remains an aspect of the ageing experience about which little is known and even less understood. To address this gap in knowledge, the purpose of this hermeneutic interpretive study was to describe and understand how sexual desire is experienced in a sample of 11 purposively selected men and 6 women aged between 62 and 92 years. The study was based on audio-taped interviews with participants who were willing to discuss their experiences of sexual desire. The study was guided by the philosophy of Paul Ricoeur, from the process of interview transcription through to the interpretation of the experience of sexual desire in older age. Participants' narratives were analysed for emergent themes using a twofold methodology inspired by Ricoeur. The narratives provided first-hand accounts of the experience of sexual desire in an ageing context. Findings revealed that participants identified as sexual beings regardless of age and the availability of a sexual partner. Findings also revealed that sexual selfhood was acknowledged through physiological response, and that sexual desire could be influenced by socio-cultural factors and experienced within an ethical relational domain. Major themes explicated during the study included the experience of health and wellbeing, the experience of sexual response, the experience of sexual inadequacy, being socialised, and re-entering the social scene.
Abstract:
Two-party key exchange (2PKE) protocols have been rigorously analysed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. In particular, an important security attribute called key compromise impersonation (KCI) resilience has been completely ignored for the case of GKE protocols. Informally, a protocol is said to provide KCI resilience if the compromise of the long-term secret key of a protocol participant A does not allow the adversary to impersonate an honest participant B to A. In this paper, we argue that KCI resilience for GKE protocols is at least as important as it is for 2PKE protocols. Our first contribution is a set of revised security definitions for GKE protocols considering KCI attacks by both outsider and insider adversaries. We also give a new proof of security for an existing two-round GKE protocol under the revised security definitions, assuming random oracles. We then show how to achieve insider KCI resilience in a generic way using a known compiler from the literature. As one may expect, this additional security assurance comes at the cost of an extra round of communication. Finally, we show that a few existing protocols are not secure against outsider KCI attacks. The attacks on these protocols illustrate the necessity of considering KCI resilience for GKE protocols.
Abstract:
Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties, and for the relationships among these properties, is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of the properties of complex networks can be expanded to a wider field that includes more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since each consists of a geometrical figure that repeats on an ever-reduced scale; fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous: there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
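To make the HVG construction concrete, here is a minimal sketch under the standard definition (two time points are linked when every sample between them lies strictly below both endpoints); the toy series and all names are illustrative:

```python
import numpy as np

# Sketch of the horizontal visibility graph (HVG) construction: each time
# point becomes a node, and nodes i < j are linked iff every intermediate
# sample lies strictly below both series[i] and series[j].
def horizontal_visibility_graph(series):
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))          # consecutive points always see each other
        top = series[i + 1]            # running max of intermediate samples
        for j in range(i + 2, n):
            if top < series[i] and top < series[j]:
                edges.add((i, j))
            top = max(top, series[j])  # series[j] is intermediate for j+1 onward
    return edges

# Example on a short, illustrative trace.
x = np.array([0.5, 1.2, 0.3, 0.9, 1.5, 0.2, 0.8])
g = horizontal_visibility_graph(x)
degrees = np.bincount([v for e in g for v in e], minlength=len(x))
print(sorted(g), degrees)  # edge list and node degrees
```

A quadratic scan like this is adequate for short traces; the long fractional Brownian motion series analysed in the thesis would call for a more efficient construction.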
Abstract:
Travel in passenger cars is a ubiquitous aspect of the daily activities of many people. During the 2009 influenza A (H1N1) pandemic, a case of probable transmission during car travel was reported in Australia, to which spread via the airborne route may have contributed. However, there are no data to indicate the likely risks of such events, and how they may vary and be mitigated. To address this knowledge gap, we estimated the risk of airborne influenza transmission in two cars (a 1989 model and a 2005 model) by employing ventilation measurements and a variation of the Wells-Riley model. The results suggested that infection risk can be reduced by not recirculating air; however, the estimated risk ranged from 59% to 99.9% for a 90 min trip when air was recirculated in the newer vehicle. These results have implications for interrupting in-car transmission of other illnesses spread by the airborne route.
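For context, the classical Wells-Riley relation on which such estimates are built (the paper uses a variation of it; the symbols below follow the standard textbook form, not necessarily the paper's notation):

```latex
% Classical Wells-Riley infection risk; the paper applies a variation.
P = 1 - \exp\!\left(-\frac{I\,q\,p\,t}{Q}\right)
```

where P is the probability of infection, I the number of infectors, q the quanta generation rate, p the susceptible's pulmonary ventilation rate, t the exposure time, and Q the cabin ventilation rate. A lower air exchange rate Q, as with recirculated air, drives the exponent up and hence the risk toward 1.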
Abstract:
Partially grouted reinforced masonry walls with widely spaced reinforcement, built predominantly from face-shell-bedded hollow concrete blocks, are adopted extensively in cyclonic areas because of their economy. Their out-of-plane response to lateral pressure loading is well defined; however, their in-plane shear behaviour is less well understood, and in particular it is unclear how the grouted reinforced cores affect the load paths within the wall. For the rational design of these walls, clarification is sought as to whether the wall acts as a composite of unreinforced panels surrounded by reinforced cores or simply as a continuum embedded with reinforcement at wide spacing. This paper reports on four full-scale walls tested under in-plane cyclic shear loading to provide some insight into the effect of the grouted cores in altering the load paths within the wall. The global lateral load versus lateral deflection hysteretic curves, as well as the local responses of some critical zones of the shear walls, are presented. It is shown that the aspect ratio of the unreinforced masonry panels surrounded by the reinforced grouted cores within the shear walls has a profound effect on the behaviour of the shear walls.
Abstract:
The lack of a satisfactory consensus for characterizing system intelligence, and of structured analytical decision models, has inhibited developers and practitioners from understanding and configuring optimum intelligent building systems in a fully informed manner. So far, little research has been conducted on this aspect. This research is designed to identify the key intelligence indicators and to develop analytical models for computing the system intelligence score of a smart building system in an intelligent building. The integrated building management system (IBMS) was used as an illustrative example to present the framework. The models presented in this study apply system intelligence theory and the conceptual analytical framework. A total of 16 key intelligence indicators were first identified from a general survey. Then, two multi-criteria decision making (MCDM) approaches, the analytic hierarchy process (AHP) and the analytic network process (ANP), were employed to develop the system intelligence analytical models. The top intelligence indicators of the IBMS include self-diagnosis of operation deviations, an adaptive limiting control algorithm, and year-round time schedule performance. The conceptual framework was then transformed into a practical model, whose effectiveness was evaluated by means of expert validation. The main contribution of this research is to promote understanding of the intelligence indicators and to lay the foundation for a systematic framework that provides developers and building stakeholders with a consolidated, inclusive tool for evaluating the system intelligence of proposed component design configurations.
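To illustrate the weighting step behind such MCDM models, here is a minimal sketch of the standard AHP priority computation: weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The 3x3 matrix is a hypothetical example, not the study's survey data:

```python
import numpy as np

# Sketch of the AHP weighting step used to rank intelligence indicators:
# priority weights come from the principal eigenvector of a pairwise
# comparison matrix; the comparison values below are illustrative only.
def ahp_weights(M):
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)               # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                           # normalised priority vector
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
    return w, ci / ri                      # weights and consistency ratio

# Hypothetical comparisons among three indicators (Saaty 1-9 scale).
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(M)
print(w, cr)  # a consistency ratio below 0.1 is conventionally acceptable
```

ANP generalises this by allowing dependence and feedback among criteria, which is why the study pairs it with AHP.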
Abstract:
Buildings are among the most significant infrastructure in modern societies. The construction and operation of modern buildings consume a considerable amount of energy and materials, and therefore contribute significantly to climate change. In order to reduce the environmental impact of buildings, various green building rating tools have been developed. In this paper, energy use by the building sector in Australia and around the world is first reviewed. This is followed by a discussion of the development and scope of various green building rating tools, with a particular focus on the Green Star rating scheme developed in Australia. It is shown that Green Star has significant implications for almost every aspect of the design of HVAC systems, including the selection of air handling and distribution systems, fluid handling systems, refrigeration systems, heat rejection systems and building control systems.