988 results for component environments
Abstract:
In this study, we explore motivation in collocated and virtual project teams. The literature on motivation in a project setting reveals that motivation is closely linked to team performance. Based on this literature, we propose a set of variables related to the three dimensions of ‘Nature of work’, ‘Rewards’, and ‘Communication’. Thirteen original variables in a sample of 66 collocated and 66 virtual respondents are investigated using a one-tailed t-test and principal component analysis. We find that there are minimal differences between the two groups with respect to the above-mentioned three dimensions (p = .06; t = 1.71). Further, a principal component analysis of the combined sample of collocated and virtual project environments reveals two factors: an ‘Internal Motivating Factor’ related to work and the work environment, and an ‘External Motivating Factor’ related to financial and non-financial rewards. Together these explain 59.8% of the variance and comprehensively characterize motivation in collocated and virtual project environments. A ‘sense check’ of our interpretation of the results shows conformity with the theory and existing practice of project organization.
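The variance-explained figure quoted above comes from standard principal component analysis. A minimal sketch of that computation, on synthetic data (not the study's survey responses; the two-latent-factor structure and sample sizes below are illustrative assumptions only):

```python
import numpy as np

def explained_variance(X, n_components=2):
    """Share of total variance captured by the leading principal components.

    X: (n_samples, n_features) matrix of survey responses.
    """
    # Standardize so every variable contributes on the same scale, as is
    # usual when survey items use different response ranges.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Covariance of the standardized variables (~ the correlation matrix).
    corr = np.cov(Z, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals[:n_components].sum() / eigvals.sum()

# Illustrative data only: two latent factors driving six observed items,
# with 66 + 66 = 132 respondents as in the abstract's sample sizes.
rng = np.random.default_rng(0)
latent = rng.normal(size=(132, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + 0.5 * rng.normal(size=(132, 6))
share = explained_variance(X, n_components=2)
```

On real survey data the same ratio is what a statement like "two factors explain 59.8% of the variance" reports.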
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic as it prevents knowledge sharing and generally drives development costs up. In the past, we have developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. For this, we demonstrate its application to constructing a programmable router platform and a middleware for parallel environments.
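Component models of this kind typically expose provided interfaces and required receptacles, with a third party wiring them together. A minimal sketch of that idea in Python; the names below are illustrative and not the OpenCom API:

```python
class Component:
    """Minimal component: named provided interfaces and required receptacles."""
    def __init__(self, name):
        self.name = name
        self.interfaces = {}    # interface name -> implementation object
        self.receptacles = {}   # receptacle name -> bound interface (or None)

    def provide(self, iface_name, impl):
        self.interfaces[iface_name] = impl

    def require(self, recep_name):
        self.receptacles[recep_name] = None

def bind(client, recep_name, provider, iface_name):
    """Third-party binding: connect a client receptacle to a provider interface."""
    client.receptacles[recep_name] = provider.interfaces[iface_name]

# Illustrative use, echoing the router example: a forwarder component
# depending on a routing-table component.
table = Component("routing_table")
table.provide("lookup", {"10.0.0.0/8": "eth1"}.get)

forwarder = Component("forwarder")
forwarder.require("route_lookup")
bind(forwarder, "route_lookup", table, "lookup")

next_hop = forwarder.receptacles["route_lookup"]("10.0.0.0/8")
```

The value of the pattern is that `bind` is external to both components, so configurations can be rewired per platform without changing component code.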
Abstract:
This paper presents Multi-Step A* (MSA*), a search algorithm based on A* for multi-objective 4D vehicle motion planning (three spatial and one time dimension). The research is principally motivated by the need for offline and online motion planning for autonomous Unmanned Aerial Vehicles (UAVs). For UAVs operating in large, dynamic and uncertain 4D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles and the rules of the air. It is shown that MSA* finds a cost optimal solution using variable length, angle and velocity trajectory segments. These segments are approximated with a grid based cell sequence that provides an inherent tolerance to uncertainty. Computational efficiency is achieved by using variable successor operators to create a multi-resolution, memory efficient lattice sampling structure. Simulation studies on the UAV flight planning problem show that MSA* meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of vector neighbourhood based A*.
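The variable-length successor idea can be illustrated with a much-simplified 2D sketch. This is not the authors' MSA*, which plans in 4D with velocity and track-angle constraints; it is an A* whose successor operator emits straight segments of several lengths in each of the eight grid directions, rejecting segments that sweep a blocked cell:

```python
import heapq, math

def astar_variable_steps(start, goal, blocked, step_lengths=(1, 2), size=10):
    """A* on a 2D grid whose successor operator emits straight segments of
    several lengths per direction: a toy version of a variable-length
    lattice sampling structure."""
    dirs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    h = lambda p: math.dist(p, goal)          # admissible Euclidean heuristic
    open_heap = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while open_heap:
        f, g, p = heapq.heappop(open_heap)
        if p == goal:
            return g
        if g > best.get(p, math.inf):
            continue  # stale queue entry
        for dx, dy in dirs:
            for length in step_lengths:
                # Cells swept by the segment, endpoint included.
                cells = [(p[0] + dx * i, p[1] + dy * i) for i in range(1, length + 1)]
                if any(not (0 <= c < size) for cell in cells for c in cell) \
                        or any(cell in blocked for cell in cells):
                    continue
                q = cells[-1]
                ng = g + math.hypot(dx * length, dy * length)
                if ng < best.get(q, math.inf):
                    best[q] = ng
                    heapq.heappush(open_heap, (ng + h(q), ng, q))
    return math.inf

# The straight route is blocked at (3, 0); the planner mixes unit and
# double-length segments to detour around it.
cost = astar_variable_steps((0, 0), (6, 0), blocked={(3, 0)})
```

Longer step lengths give the coarse, memory-efficient sampling; short lengths recover fine resolution near obstacles.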
Abstract:
This paper presents a deterministic modelling approach to predict diffraction loss for an innovative Multi-User-Single-Antenna (MUSA) MIMO technology, proposed for rural Australian environments. In order to calculate diffraction loss, six receivers have been considered around an access point in a selected rural environment. Generated terrain profiles for the six receivers are presented in this paper. Simulation results using classical diffraction models and diffraction theory are also presented, accounting for the rural Australian terrain data. Results show that in an area of 900 m by 900 m surrounding the receivers, path loss due to diffraction can range between 5 dB and 35 dB. Diffraction loss maps can help determine the optimal location for receivers of MUSA-MIMO systems in rural areas.
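Among the classical diffraction models such simulations draw on is the single knife-edge construction (ITU-R P.526 approximation). A minimal sketch; the 2.4 GHz frequency and obstacle geometry below are illustrative assumptions, not values from the study:

```python
import math

def knife_edge_loss_db(h, d1, d2, f_hz):
    """Single knife-edge diffraction loss in dB (ITU-R P.526 approximation).

    h  : obstacle height above the direct Tx-Rx line (m, negative if below)
    d1 : Tx-to-obstacle distance (m)
    d2 : obstacle-to-Rx distance (m)
    """
    lam = 3e8 / f_hz
    # Fresnel-Kirchhoff diffraction parameter.
    v = h * math.sqrt(2.0 * (d1 + d2) / (lam * d1 * d2))
    if v <= -0.78:
        return 0.0  # obstacle well below the path: negligible loss
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Illustrative geometry: a 10 m obstruction midway along a 900 m link.
loss = knife_edge_loss_db(h=10.0, d1=450.0, d2=450.0, f_hz=2.4e9)
```

Evaluating this over a terrain profile grid is how per-receiver diffraction loss maps of the kind described above can be produced.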
Abstract:
Probabilistic topic models have recently been used for activity analysis in video processing, due to their strong capacity to model both local activities and interactions in crowded scenes. In those applications, a video sequence is divided into a collection of uniform non-overlapping video clips, and the high-dimensional continuous inputs are quantized into a bag of discrete visual words. The hard division of video clips, and the hard assignment of visual words, lead to problems when an activity is split over multiple clips, or when the most appropriate visual word for quantization is unclear. In this paper, we propose a novel algorithm which makes use of a soft histogram technique to compensate for the loss of information in the quantization process, and a soft cut technique in the temporal domain to overcome problems caused by separating an activity into two video clips. In the detection process, we also apply a soft decision strategy to detect unusual events. We show that the proposed soft decision approach outperforms its hard decision counterpart in both local and global activity modelling.
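The soft histogram idea can be sketched as a distance-weighted assignment over the codebook. This is a generic Gaussian-kernel sketch of soft quantization, not the authors' exact formulation; the one-dimensional codebook is illustrative only:

```python
import numpy as np

def soft_histogram(features, codebook, sigma=1.0):
    """Accumulate features into a visual-word histogram with soft assignment.

    Instead of giving all weight to the single nearest codeword (hard
    quantization), each feature spreads weight over all words with a
    Gaussian kernel on squared distance, so ambiguous features are not
    forced into one bin."""
    hist = np.zeros(len(codebook))
    for f in features:
        d2 = np.sum((codebook - f) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        hist += w / w.sum()          # each feature contributes total mass 1
    return hist

codebook = np.array([[0.0], [1.0], [2.0]])
# A feature exactly between words 0 and 1: hard assignment must pick one
# arbitrarily; soft assignment splits its mass between them.
h = soft_histogram(np.array([[0.5]]), codebook, sigma=0.5)
```

The same "spread rather than snap" principle underlies the soft temporal cut: a frame near a clip boundary contributes to both adjacent clips.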
Abstract:
Medical industries have brought Information Technology (IT) into their systems for both patients and medical staff, due to the numerous benefits of IT we experience at present. Moreover, the Mobile healthcare (M-health) system has been developed as the first step towards a Ubiquitous Health Environment (UHE). With its mobility and multiple functions, an M-health system will be able to provide more efficient and varied services for both doctors and patients. Because mobile signals are invisible, hackers can gain easier access to hospital networks than to wired network systems. This may result in security incidents unless security protocols are well implemented. In this paper, user authentication and authorization procedures are applied as a featured component at each level of M-health systems in the hospital environment. Accordingly, the M-health system in the hospital will meet the optimal requirements as a countermeasure to its vulnerabilities.
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
Long-term exposure to vehicle emissions has been associated with harmful health effects. Children are amongst the most susceptible group, and schools represent an environment where they can experience significant exposure to vehicle emissions. However, there are limited studies on children's exposure to vehicle emissions in schools. The aim of this study was to quantify the concentration of organic aerosol and, in particular, vehicle emissions that children are exposed to during school hours. Therefore an Aerodyne compact time-of-flight aerosol mass spectrometer (TOF-AMS) was deployed at five urban schools in Brisbane, Australia. The TOF-AMS enabled the chemical composition of the non-refractory PM1 (NR-PM1) to be analysed with high temporal resolution to assess the concentration of vehicle emissions and other organic aerosols during school hours. At each school the organic fraction comprised the majority of NR-PM1, with secondary organic aerosol as the main constituent. At two of the schools, a significant source of the organic aerosol (OA) was slightly aged vehicle emissions from nearby highways. More aged and oxidised OA was observed at the other three schools, which also recorded strong biomass burning influences. Primary emissions were found to dominate the OA at only one school, which had an O:C ratio of 0.17, due to fuel-powered gardening equipment used near the TOF-AMS. The diurnal cycle of OA concentration varied between schools and was found to be at a minimum during school hours. The major organic component that school children were exposed to during school hours was secondary OA. Peak exposure of school children to hydrocarbon-like organic aerosol (HOA) occurred during school drop-off and pick-up times. Unless a school is located near major roads, children are exposed predominantly to regional secondary OA, as opposed to local emissions, during school hours in urban environments.
Abstract:
Government action is essential to increase the healthiness of food environments and reduce obesity, diet-related non-communicable diseases (NCDs), and their related inequalities. This paper proposes a monitoring framework to assess government policies and actions for creating healthy food environments. Recommendations from relevant authoritative organizations and expert advisory groups for reducing obesity and NCDs were examined, and pertinent components were incorporated into a comprehensive framework for monitoring government policies and actions. A Government Healthy Food Environment Policy Index (Food-EPI) was developed, which comprises a ‘policy’ component with seven domains on specific aspects of food environments, and an ‘infrastructure support’ component with seven domains to strengthen systems to prevent obesity and NCDs. These were revised through a week-long consultation process with international experts. Examples of good practice statements are proposed within each domain, and these will evolve into benchmarks established by governments at the forefront of creating and implementing food policies for good health. A rating process is proposed to assess a government's level of policy implementation towards good practice. The Food-EPI will be pre-tested and piloted in countries of varying size and income levels. The benchmarking of government policy implementation has the potential to catalyse greater action to reduce obesity and NCDs.
Abstract:
For users of germplasm collections, the purpose of measuring characterization and evaluation descriptors, and subsequently using statistical methodology to summarize the data, is not only to interpret the relationships between the descriptors, but also to characterize the differences and similarities between accessions in relation to their phenotypic variability for each of the measured descriptors. The set of descriptors for the accessions of most germplasm collections consists of both numerical and categorical descriptors. This poses problems for a combined analysis of all descriptors because few statistical techniques deal with mixtures of measurement types. In this article, nonlinear principal component analysis was used to analyze the descriptors of the accessions in the Australian groundnut collection. It was demonstrated that the nonlinear variant of ordinary principal component analysis is an appropriate analytical tool because subspecies and botanical varieties could be identified on the basis of the analysis and characterized in terms of all descriptors. Moreover, outlying accessions could be easily spotted and their characteristics established. The statistical results and their interpretations provide users with a more efficient way to identify accessions of potential relevance for their plant improvement programs and encourage and improve the usefulness and utilization of germplasm collections.
Empirical vehicle-to-vehicle pathloss modeling in highway, suburban and urban environments at 5.8GHz
Abstract:
In this paper, we present a pathloss characterization for vehicle-to-vehicle (V2V) communications based on empirical data collected from an extensive measurement campaign performed under line-of-sight (LOS), non-line-of-sight (NLOS) and varying traffic densities. The experiment was conducted at 5.8 GHz in three different V2V propagation environments: highway, suburban and urban. We developed pathloss models for each of the three V2V environments considered. Based on a log-distance power law model, the values for the pathloss exponent and the standard deviation of shadowing were reported. The average pathloss exponent ranges from 1.77 for the highway, through 1.68 for the urban, to 1.53 for the suburban environment. The reported results can contribute to vehicular ad hoc network (VANET) simulators and can be used by system designers to develop, evaluate and validate new protocols and system designs under realistic propagation conditions.
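The log-distance power law model named above, PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, and the fit of its exponent can be sketched in a few lines. The measurements below are synthetic, generated with the reported highway exponent of 1.77 purely for illustration (the intercept and shadowing values are assumptions, not the study's):

```python
import numpy as np

def fit_pathloss(d, pl_db, d0=1.0):
    """Least-squares fit of the log-distance power law
        PL(d) = PL(d0) + 10 * n * log10(d / d0) + X_sigma,
    returning (PL(d0), pathloss exponent n, shadowing std sigma in dB)."""
    x = 10.0 * np.log10(d / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, pl_db, rcond=None)
    sigma = np.std(pl_db - (pl0 + n * x))   # residuals estimate shadowing
    return pl0, n, sigma

# Synthetic V2V-style measurements: exponent 1.77, 3 dB lognormal shadowing.
rng = np.random.default_rng(1)
d = np.linspace(10, 500, 200)               # Tx-Rx separations in metres
pl = 47.0 + 10 * 1.77 * np.log10(d) + rng.normal(0, 3.0, d.size)
pl0, n, sigma = fit_pathloss(d, pl)
```

Fitted this way per environment, (n, sigma) pairs are exactly what a VANET simulator needs to reproduce the reported channels.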
Abstract:
This report presents the final deliverable from the project titled ‘Conceptual and statistical framework for a water quality component of an integrated report card’ funded by the Marine and Tropical Sciences Research Facility (MTSRF; Project 3.7.7). The key management driver of this, and a number of other MTSRF projects concerned with indicator development, is the requirement for state and federal government authorities and other stakeholders to provide robust assessments of the present ‘state’ or ‘health’ of regional ecosystems in the Great Barrier Reef (GBR) catchments and adjacent marine waters. An integrated report card format, one that encompasses both biophysical and socioeconomic factors, is an appropriate framework through which to deliver these assessments and meet a variety of reporting requirements. It is now well recognised that a ‘report card’ format for environmental reporting is very effective for community and stakeholder communication and engagement, and can be a key driver in galvanising community and political commitment and action. Although a report card needs to be understandable by all levels of the community, it also needs to be underpinned by sound, quality-assured science. In this regard, this project was to develop approaches to address the statistical issues that arise from amalgamation or integration of sets of discrete indicators into a final score or assessment of the state of the system. In brief, the two main issues are (1) selecting, measuring and interpreting specific indicators that vary both in space and time, and (2) integrating a range of indicators in such a way as to provide a succinct but robust overview of the state of the system. Although there is considerable research and knowledge on the use of indicators to inform the management of ecological, social and economic systems, methods on how best to integrate multiple disparate indicators remain poorly developed.
Therefore the objective of this project was to (i) focus on statistical approaches aimed at ensuring that estimates of individual indicators are as robust as possible, and (ii) present methods that can be used to report on the overall state of the system by integrating estimates of individual indicators. It was agreed at the outset that this project would focus on developing methods for a water quality report card. This was driven largely by the requirements of the Reef Water Quality Protection Plan (RWQPP) and led to strong partner engagement with the Reef Water Quality Partnership.
Abstract:
BACKGROUND Experimental learning, traditionally conducted in on-campus laboratory venues, is the cornerstone of science and engineering education. In order to ensure that engineering graduates are exposed to ‘real-world’ situations and attain the necessary professional skill-sets, as mandated by course accreditation bodies such as Engineers Australia, face-to-face laboratory experimentation with real equipment has been an integral component of traditional engineering education. The online delivery of engineering coursework endeavours to mimic this with remote and simulated laboratory experimentation. To satisfy student and accreditation requirements, the common practice has been to offer equivalent remote and/or simulated laboratory experiments in lieu of the ones delivered face-to-face on campus. The current implementations of both remote and simulated laboratories tend to be specified with a focus on technical characteristics instead of pedagogical requirements. This work attempts to redress this situation by developing a framework for the investigation of the suitability of different experimental educational environments to deliver quality teaching and learning. PURPOSE For the tertiary education sector involved with technical or scientific training, a research framework capable of assessing the affordances of laboratory venues is an important aid during the planning, designing and evaluating stages of face-to-face and online (or cyber) environments that facilitate student experimentation. Providing quality experimental learning venues has been identified as one of the distance-education providers’ greatest challenges. DESIGN/METHOD The investigation draws on the expertise of staff at three Australian universities: Swinburne University of Technology (SUT), Curtin University (Curtin) and Queensland University of Technology (QUT).
The aim was to analyse video-recorded data in order to identify the occurrences of kikan-shido (a Japanese term meaning ‘between-desks instruction’) and over-the-shoulder learning and teaching (OTST/L) events, thereby ascertaining the pedagogical affordances of face-to-face laboratories. RESULTS These will be disseminated at a Master Class presentation at this conference. DISCUSSION Kikan-shido occurrences did reflect the affordances of the venue. Unlike other data collection methods, video-recorded data and its analysis are repeatable. Participant bias is minimised or even eradicated, and researcher bias is tempered by enabling re-coding by others. CONCLUSIONS The framework facilitates the identification of experiential face-to-face learning venue affordances. The investigation will continue with online venues.
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing either with precise temporal (EEG and MEG) or spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, they were used to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and exogenous N2. 
Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working-memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.