Abstract:
Symmetric multi-processor (SMP) systems, or multiple-CPU servers, are well suited to implementing parallel algorithms because they employ dedicated communication devices to enhance inter-processor communication bandwidth, so that better performance can be obtained. However, the cost of a multiple-CPU server is high, and the server is therefore usually shared among many users. The workload due to other users will certainly affect the performance of parallel programs, so it is desirable to derive a method to optimize parallel programs under different loading conditions. In this paper, we present a simple method, applicable to SPMD-type parallel programs, that improves speedup by controlling the number of threads within the program.
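The idea of adapting the thread count to other users' load can be sketched with a hypothetical heuristic (this is illustrative only, not the method presented in the paper; the function name and policy are invented for the sketch):

```python
import os

def suggested_thread_count(load_avg, cpus=None):
    """Suggest a thread count for an SPMD program on a shared server.

    Hypothetical heuristic: leave roughly one worker per CPU that is
    already busy with other users' load, but never drop below one
    thread or exceed the number of CPUs.
    """
    cpus = cpus or os.cpu_count() or 1
    free = cpus - int(round(load_avg))
    return max(1, min(cpus, free))
```

On an otherwise idle 8-CPU server this suggests 8 threads; with a load average of 3 it suggests 5, shrinking the program's footprint as competing load grows.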
Abstract:
Around the world, particularly in North America and Australia, urban sprawl combined with low-density suburban development has caused serious accessibility and mobility problems, especially for those who do not own a motor vehicle or have access to public transportation services. Sustainable urban and transportation development is seen as crucial to solving transportation disadvantage problems in urban settlements. However, current urban and transportation models have not adequately addressed the unsustainable urban transportation problems that transportation disadvantaged groups overwhelmingly encounter, and the negative impacts on the disadvantaged have not been effectively considered. Transportation disadvantage is a multi-dimensional problem that combines demographic, spatial and transportation service dimensions. Nevertheless, most transportation models focusing on transportation disadvantage employ only the demographic and transportation service dimensions and do not take the spatial dimension into account. This paper aims to investigate the link between sustainable urban and transportation development and the spatial dimension of the transportation disadvantage problem. For that purpose, the paper provides a thorough review of the literature and identifies a set of urban, development and policy characteristics to define the spatial dimension of the transportation disadvantage problem. It presents an overview of these urban, development and policy characteristics, which have significant relationships with sustainable urban and transportation development and travel inability, and which are also useful in determining transportation disadvantaged populations.
Abstract:
Adversity has the effect of eliciting talents, which, in prosperous circumstances, would have lain dormant. Horace, Roman lyric poet and satirist, 65 BC – 8 BC. This quotation from Horace could well be the chorus to a medley of songs sung by people who face extraordinary adversity and have gained emotional resilience through music making. In this chapter we present three composition ventures that are stories or verses in a new song and whose chorus summarises the nature of the resilience factors present in the narratives. We are aware that words on a page like this can have the effect of filtering out the engaging nature of musical experience and reducing music to a critique or an evaluation of its aesthetic value. This disjuncture between language and the ephemeral, embodied experience is a problem for those who use these creative processes in therapeutic and salutogenic ways (Antonovsky, 1996) for public health. The notion of salutogenic health, put simply, is delineated from therapy in that its processes focus upon wellness rather than treatment. Whilst we include evidence from the fields of community music therapy (Pavlicevic, 2004; Leitschuh et al., 1991), neuroscience (Bittman et al., 2001) and community music (Bartleet et al., 2009), the framework for a salutogenic health outcome in community music is one which seeks to employ music practices and the qualities of music making that provide positive health benefit to communities – to enhance health and well-being rather than the “treatment” of disorders. It is essentially a holistic and interdisciplinary study. Therapy and salutogenic health are not mutually exclusive, as both depend upon the qualities of music experience to effect change. Collecting, analysing and presenting evidence of change in human behaviour that can be directly attributed to creative music making is a problem of evaluation.
Abstract:
Before making a security or privacy decision, Internet users should evaluate several security indicators in their browser, such as the use of HTTPS (indicated via the lock icon), the domain name of the site, and information from extended validation certificates. However, studies have shown that human subjects infrequently employ these indicators, relying on other indicators that can be spoofed and convey no cryptographic assurances. We identify four simple security indicators that accurately represent security properties of the connection and then examine 125 popular websites to determine if the sites' designs result in correctly displayed security indicators during login. In the vast majority of cases, at least some security indicators are absent or suboptimal. This suggests users are becoming habituated to ignoring recommended security indicators.
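Two of the kinds of indicator discussed above, HTTPS use and the site's domain name, can be sketched as a hypothetical pre-login check (the helper and its criteria are illustrative; they are not the four indicators identified in the paper):

```python
from urllib.parse import urlparse

def check_login_indicators(url, expected_domain):
    """Evaluate two simple, non-spoofable-by-layout indicators before login.

    Hypothetical sketch: flags a page that is not served over HTTPS, or
    whose host is not the expected domain or one of its subdomains
    (e.g. a look-alike phishing host such as examp1e.com).
    """
    parts = urlparse(url)
    host = parts.hostname or ""
    return {
        "https": parts.scheme == "https",
        "domain_ok": host == expected_domain
                     or host.endswith("." + expected_domain),
    }
```

A legitimate login page passes both checks, while a look-alike host over plain HTTP fails both; extended validation certificate details would require inspecting the TLS session, which this URL-level sketch deliberately omits.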
Abstract:
In this work, we investigate and compare the Maxwell–Stefan and Nernst–Planck equations for modeling multicomponent charge transport in liquid electrolytes. Specifically, we consider charge transport in the Li+/I−/I3−/ACN ternary electrolyte originally found in dye-sensitized solar cells. We employ molecular dynamics simulations to obtain the Maxwell–Stefan diffusivities for this electrolyte. These simulated diffusion coefficients are used in a multicomponent charge transport model based on the Maxwell–Stefan equations, and this is compared to a Nernst–Planck based model which employs binary diffusion coefficients sourced from the literature. We show that significant differences between the electrolyte concentrations at electrode interfaces, as predicted by the Maxwell–Stefan and Nernst–Planck models, can occur. We find that these differences are driven by a pressure term that appears in the Maxwell–Stefan equations. We also investigate what effects the Maxwell–Stefan diffusivities have on the simulated charge transport. By incorporating binary diffusivities found in the literature into the Maxwell–Stefan framework, we show that the simulated transient concentration profiles depend on the diffusivities; however, the simulated equilibrium profiles remain unaffected.
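For reference, the standard textbook forms of the two flux models compared above can be written as follows (symbols per the usual convention: $c_i$ concentration, $z_i$ charge number, $\phi$ electric potential, $\mu_i$ chemical potential, $x_i$ mole fraction, $c_t$ total concentration, $\mathcal{D}_{ij}$ Maxwell–Stefan diffusivity, $v$ mixture velocity); the paper's exact formulation, including its pressure term, may differ from these baseline forms:

```latex
% Nernst–Planck: each species i diffuses and migrates independently
N_i = -D_i \nabla c_i - \frac{z_i F}{R T} D_i c_i \nabla \phi + c_i v

% Maxwell–Stefan: pairwise friction couples the fluxes of all species
-\frac{c_i}{R T} \nabla \mu_i
  = \sum_{j \neq i} \frac{x_j N_i - x_i N_j}{c_t\, \mathcal{D}_{ij}}
```

The key structural difference is visible here: the Nernst–Planck flux of species $i$ involves only its own binary diffusivity $D_i$, while the Maxwell–Stefan relation ties $N_i$ to every other flux $N_j$ through the pairwise diffusivities $\mathcal{D}_{ij}$.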
Abstract:
For quite some time, debate has raged about what the human race can and should do with its knowledge of genetics. We are now nearly 60 years removed from the work of Watson and Crick, who determined the structure of deoxyribonucleic acid (DNA), yet our opinions as to how best to employ scientific knowledge of the human genome remain as diverse and polarised as ever. Human judgment is often shaped and coloured by popular media and culture, so it should come as no surprise that box office movies such as Gattaca (1997) continue to play a role in informing public opinion on genetics. In order to perform well at the box office, movies such as Gattaca take great liberty in sensationalising (and even distorting) the implications that may result from genetic screening and testing. If the public’s opinion on human genetics is strongly derived from the box office and popular media, then it is no wonder that the discourse on human genetics is couched in the polar parlances of future utopias or future dystopias. When legislating in an area like genetic discrimination in the workforce, we must be mindful not to overplay the causal link between a genetic predisposition towards a disability and an employee’s ability to perform the inherent requirements of their job. Genetic information is ultimately about people, not about genes. Genetic discrimination is ultimately about actions, not about the intrinsic value of genetic information.
Abstract:
Statistics presented in Australia Council reports such as Don’t Give Up Your Day Job (2003) and Artswork: A Report On Australians Working in the Arts 1 and 2 (1997, 2005), and in other studies on destinations for Performing Arts graduates, demonstrate the diversity of post-graduation pathways for our students, the prevalence of protean careers, and the challenges in developing a sense of professional identity in a context where a portfolio of work across performance making, producing, administration and teaching can make it difficult for young artists to establish career status and capital in conventional terms (cf. Dawn Bennett, “Academy and the Real World: Developing Realistic Notions of Career in the Performing Arts”, Arts & Humanities in Higher Education, 8.3, 2009). In this panel, academics from around Australia will consider the ways in which Drama, Theatre and Performance Studies as a discipline is deploying a variety of practical, professional and work-integrated teaching and learning activities – including performance-making projects, industry projects, industry placements and student-initiated projects – to connect students with the networks, industries and professional pathways that will support their progression into their career. The panellists include Bree Hadley (Queensland University of Technology), Meredith Rogers (La Trobe University), Janys Hayes (University of Wollongong) and Teresa Izzard (Curtin University).
The panellists will present insights into the activities they have found successful, and address a range of questions, including: How do we introduce students to performance-making and/or producing models they will be able to employ in their future practice, particularly in light of the increasingly limited funds, time and resources available to support students’ participation in full-scale productions under the stewardship of professional artists? How and when do we introduce students to industry networks? How do we cater for graduates who will work as performers, writers, directors or administrators in the non-subsidised sector, the subsidised sector, community arts and education? How do we cater for graduates who will go on to pursue their work in a practice-as-research context in a Higher Degree? How do we assist graduates in developing a professional identity? How do we assist graduates in developing physical, professional and personal resilience? How do we retain our connections with graduates as part of their life-long learning? Do practices and processes need to differ for city or regionally based / theoretically or practically based degree programs? How do our teaching and learning activities align with emergent policy and industrial frameworks such as the shift to the “Producer Model” in Performing Arts funding, or the new mentorship, project, production and enterprise development opportunities under the Australia Council for the Arts’ new Opportunities for Young and Emerging Artists policy framework?
Abstract:
Inspection aircraft equipped with cameras and other sensors are routinely used for asset location, inspection, monitoring and hazard identification of oil-gas pipelines, roads, bridges and power transmission grids. This paper is concerned with automated flight of fixed-wing inspection aircraft to track approximately linear infrastructure. We propose a guidance law approach that seeks to maintain aircraft trajectories with desirable position and orientation properties relative to the infrastructure under inspection. Furthermore, this paper also proposes the use of an adaptive maneuver selection approach, in which maneuver primitives are adaptively selected to improve the aircraft’s attitude behaviour. We employ an integrated design methodology particularly suited for an automated inspection aircraft. Simulation studies using full nonlinear semi-coupled six degree-of-freedom equations of motion are used to illustrate the effectiveness of the proposed guidance and adaptive maneuver selection approaches in realistic flight conditions. Experimental flight test results are given to demonstrate the performance of the design.
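The flavour of guidance law involved, steering toward a line representing the linear infrastructure, can be sketched with a saturated cross-track correction (a common textbook form; this is illustrative, and is not the guidance law proposed in the paper, whose names and gains here are hypothetical):

```python
import math

def track_heading_command(cross_track_error, line_heading,
                          k=0.05, max_correction=math.radians(45)):
    """Commanded heading (rad) for tracking an infinite line.

    Illustrative sketch: blend the line's heading with a correction that
    grows with the cross-track error (m) and saturates at
    `max_correction`, so the aircraft converges onto the line without
    commanding extreme intercept angles.
    """
    correction = math.atan(k * cross_track_error)
    correction = max(-max_correction, min(max_correction, correction))
    return line_heading - correction
```

With zero cross-track error the command equals the line heading; very large errors saturate at a 45-degree intercept toward the line.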
Abstract:
Autonomous underwater gliders are robust and widely-used ocean sampling platforms that are characterized by their endurance, and are one of the best approaches to gather subsurface data at the appropriate spatial resolution to advance our knowledge of the ocean environment. Gliders generally do not employ sophisticated sensors for underwater localization, but instead dead-reckon between set waypoints. Thus, these vehicles are subject to large positional errors between prescribed and actual surfacing locations. Here, we investigate the implementation of a large-scale, regional ocean model into the trajectory design for autonomous gliders to improve their navigational accuracy. We compute the dead-reckoning error for our Slocum gliders, and compare this to the average positional error recorded from multiple deployments conducted over the past year. We then compare trajectory plans computed on-board the vehicle during recent deployments to our prediction-based trajectory plans for 140 surfacing occurrences.
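As a toy sketch of why positional error accumulates, a dead-reckoned surfacing prediction under constant speed and heading can be compared with the actual surfacing fix (all names are hypothetical; real gliders integrate attitude, depth and, as the paper proposes, modeled currents):

```python
import math

def dead_reckon(start, heading, speed, duration):
    """Advance a start position (x, y in metres) along a constant
    heading (rad) at constant speed (m/s) for `duration` seconds.
    Illustrative only: ignoring ocean currents is exactly what
    produces the positional error discussed above."""
    x, y = start
    return (x + speed * duration * math.cos(heading),
            y + speed * duration * math.sin(heading))

def positional_error(predicted, actual):
    """Straight-line distance between predicted and actual surfacing points."""
    return math.hypot(predicted[0] - actual[0], predicted[1] - actual[1])
```

A glider dead-reckoning east at 0.5 m/s for an hour predicts a surfacing 1800 m away; if a 300 m cross-current displacement goes unmodeled, the positional error is 300 m, which an ocean-model-informed plan would aim to reduce.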
Abstract:
In recent years, ocean scientists have started to employ many new forms of technology as integral pieces in oceanographic data collection for the study and prediction of complex and dynamic ocean phenomena. One area of technological advancement in ocean sampling if the use of Autonomous Underwater Vehicles (AUVs) as mobile sensor plat- forms. Currently, most AUV deployments execute a lawnmower- type pattern or repeated transects for surveys and sampling missions. An advantage of these missions is that the regularity of the trajectory design generally makes it easier to extract the exact path of the vehicle via post-processing. However, if the deployment region for the pattern is poorly selected, the AUV can entirely miss collecting data during an event of specific interest. Here, we consider an innovative technology toolchain to assist in determining the deployment location and executed paths for AUVs to maximize scientific information gain about dynamically evolving ocean phenomena. In particular, we provide an assessment of computed paths based on ocean model predictions designed to put AUVs in the right place at the right time to gather data related to the understanding of algal and phytoplankton blooms.
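The lawnmower pattern described above can be sketched as a simple waypoint generator (a minimal 2-D version with hypothetical names; real deployments also plan depth, timing and currents):

```python
def lawnmower_waypoints(x0, y0, width, height, spacing):
    """Waypoints for a lawnmower (boustrophedon) survey over a rectangle
    anchored at (x0, y0): alternate left-to-right and right-to-left
    transects separated by `spacing` metres."""
    wps = []
    y = y0
    left_to_right = True
    while y <= y0 + height:
        if left_to_right:
            wps += [(x0, y), (x0 + width, y)]
        else:
            wps += [(x0 + width, y), (x0, y)]
        left_to_right = not left_to_right
        y += spacing
    return wps
```

The point made in the abstract is that the pattern itself is easy; choosing where to anchor (x0, y0) so the survey intersects an evolving bloom is where the model-prediction toolchain earns its keep.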
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area-based, transform-based, feature-based, phase-based, hybrid, relaxation-based, dynamic programming and object-space methods. A number of area-based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications.
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
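The area-based matching with the Sum of Absolute Differences metric discussed above can be sketched in a toy 1-D scanline form (real implementations match 2-D windows over calibrated, rectified images; the function names and window convention here are illustrative):

```python
def sad(window_a, window_b):
    """Sum of Absolute Differences between two equal-length pixel windows."""
    return sum(abs(a - b) for a, b in zip(window_a, window_b))

def best_disparity(left, right, x, window=3, max_disp=5):
    """Disparity minimising the SAD cost along one scanline.

    Compares a window centred on pixel x in the left image with
    windows shifted by each candidate disparity d in the right image
    (convention: left[x] corresponds to right[x - d]) and returns the
    shift with the lowest cost.
    """
    half = window // 2
    ref = left[x - half:x + half + 1]
    costs = {}
    for d in range(max_disp + 1):
        if x - d - half < 0:
            break
        costs[d] = sad(ref, right[x - d - half:x - d + half + 1])
    return min(costs, key=costs.get)
```

Replacing the raw intensities with their rank or census transforms before computing the cost is what buys the robustness to radiometric distortion noted in the report, at little extra computational cost.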
Abstract:
Software transactional memory has the potential to greatly simplify development of concurrent software, by supporting safe composition of concurrent shared-state abstractions. However, STM semantics are defined in terms of low-level reads and writes on individual memory locations, so implementations are unable to take advantage of the properties of user-defined abstractions. Consequently, the performance of transactions over some structures can be disappointing.

We present Modular Transactional Memory, our framework which allows programmers to extend STM with concurrency control algorithms tailored to the data structures they use in concurrent programs. We describe our implementation in Concurrent Haskell, and two example structures: a finite map which allows concurrent transactions to operate on disjoint sets of keys, and a non-deterministic channel which supports concurrent sources and sinks.

Our approach is based on previous work by others on boosted and open-nested transactions, with one significant development: transactions are given types which denote the concurrency control algorithms they employ. Typed transactions offer a higher level of assurance for programmers reusing transactional code, and allow more flexible abstract concurrency control.
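The disjoint-keys idea behind the finite map can be illustrated, by loose analogy only, with lock striping in Python (the paper's framework is typed transactions in Concurrent Haskell, not locks; this sketch shows just the underlying insight that operations on disjoint keys need not conflict):

```python
import threading

class StripedMap:
    """Map with per-stripe locks, so operations on keys that hash to
    different stripes can proceed concurrently instead of serialising
    on one global lock. Analogous in spirit to a transactional map
    whose operations on disjoint key sets commute."""

    def __init__(self, stripes=16):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._data = {}

    def _lock_for(self, key):
        return self._locks[hash(key) % len(self._locks)]

    def put(self, key, value):
        with self._lock_for(key):
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock_for(key):
            return self._data.get(key, default)
```

A location-granularity STM would treat any two writes to the map's internals as conflicting; exploiting the abstraction's semantics, as the paper's framework does, admits the finer-grained concurrency this sketch hints at.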
A modified inverse integer Cholesky decorrelation method and its performance on ambiguity resolution
Abstract:
One of the research focuses in the integer least squares problem is the decorrelation technique used to reduce the number of integer parameter search candidates and improve the efficiency of the integer parameter search method. It remains a challenging issue for determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: the integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proven and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be directly utilized as a performance measure, as it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always represent a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ the isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting system constraints.
The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint respectively. The real data scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly improve the efficiency of reducing the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single-constellation and dual-constellation cases.
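The effect a decorrelating integer transformation has on the condition number can be illustrated with a single integer Gaussian decorrelation step on a 2x2 covariance matrix (this shows the first of the three techniques named above, not the MIICD method itself; the example matrix is made up for illustration):

```python
import math

def cond2(q11, q12, q22):
    """Condition number of a symmetric positive-definite 2x2 matrix
    [[q11, q12], [q12, q22]]: ratio of its largest to smallest eigenvalue,
    from the quadratic formula on the characteristic polynomial."""
    tr, det = q11 + q22, q11 * q22 - q12 * q12
    root = math.sqrt(tr * tr - 4 * det)
    return (tr + root) / (tr - root)

def gauss_decorrelate(q11, q12, q22):
    """One integer Gaussian decorrelation step: with mu = round(q12/q22),
    apply the unimodular Z = [[1, -mu], [0, 1]] as Z Q Z^T, which shrinks
    the off-diagonal correlation while preserving the integer grid."""
    mu = round(q12 / q22)
    return (q11 - 2 * mu * q12 + mu * mu * q22, q12 - mu * q22, q22)
```

For the (hypothetical) ambiguity covariance Q = [[4.25, 2.0], [2.0, 1.0]], the step takes mu = 2 and maps Q to diag(0.25, 1), dropping the condition number from roughly 108 to 4; a smaller condition number is exactly the criterion the abstract uses to compare decorrelation methods.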
Abstract:
This exhibition engages with one of the key issues facing the fashion textiles industry in terms of future sustainability: the well-being of fashion industry workers in Australia and New Zealand (people). This collection formed the basis of my honours dissertation (completed in New Zealand in 2008), which examines the contribution that design can make to sustainable manufacturing, particularly design for local production and consumption. An important aspect of this work is the discussion of source; the work suggests that the ‘made in China’ syndrome (in reference to the current state of over-consumerism in Australia and New Zealand) could be brought to a close through design to minimize waste and maximize opportunity for ‘people’: in this case both garment workers and the SMEs that employ them. The garments reflect the possibilities of focusing on a local approach that could be put into practice by a framework of SMEs that already exist. In addition, the design process is highly transferable and could be put into practice almost anywhere with minimal set-up costs and a design ethos that progresses at the same pace as the skills of workers. This collection is a physical and conceptual embodiment of a source local/make local/sell local approach. The collection is an example of design demonstrating that this is not an unrealistic ideal and is in fact possible through the development of a sustainable industry, in the sense of people, profit and planet, through the adoption of a design process model that stops waste at the source by making better use of the raw materials and labour involved in making fashion garments. Although the focus of this research appears to centre on people and profit, this kind of source local/make local/sell local approach also has great benefits in terms of environmental sustainability.
Abstract:
With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal.
I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system. I have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs of the correctness of the proposed sufficient conditions for parallelization.