973 results for Multi-View Rendering
Abstract:
Accumulating evidence suggests that team-member exchange (TMX) influences employee work attitudes and behaviours separately from the effects of leader-member exchange (LMX). In particular, little is known about the effect of LMX differentiation (in-group versus out-group) as a process of social exchange that can, in turn, affect TMX quality. To explore this phenomenon, this chapter presents a multi-level model of TMX in organizations, which incorporates LMX differentiation, team identification, and team member affect at the individual level, and fairness of LMX differentiation and affective climate at the group level. We conclude with a discussion of the implications of our model for theory, research, and practice.
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (meta data); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the meta data, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level; in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
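To make the block-and-metadata idea concrete, the following is a minimal sketch (with hypothetical names such as `Block` and `make_blocks`, not the SDSC library's API) of a pipeline that splits a volume into blocks sized to the available resources, tags each block with metadata, and uses those tags to cull blocks before producing per-block "imagelets".

```python
"""Minimal sketch (not the SDSC code) of a block-based visualization pipeline:
a large volume is split into blocks, each block carries metadata tags, and the
tags travel with the block through the pipeline. All names are hypothetical."""
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Block:
    data: np.ndarray                            # voxel payload of this block
    meta: dict = field(default_factory=dict)    # metadata propagated through the pipeline

def make_blocks(volume: np.ndarray, block_shape=(64, 64, 64)):
    """Split a 3-D volume into metadata-tagged blocks."""
    bz, by, bx = block_shape
    for z in range(0, volume.shape[0], bz):
        for y in range(0, volume.shape[1], by):
            for x in range(0, volume.shape[2], bx):
                chunk = volume[z:z+bz, y:y+by, x:x+bx]
                yield Block(chunk, {"origin": (z, y, x),
                                    "min": float(chunk.min()),
                                    "max": float(chunk.max())})

def render_pipeline(volume, iso=0.5):
    """Toy stand-in for the render stage: use the metadata to skip blocks that
    cannot contain the feature of interest, then produce one imagelet per block."""
    imagelets = []
    for blk in make_blocks(volume):
        if blk.meta["max"] < iso:                       # cheap metadata-based culling
            continue
        imagelets.append((blk.meta["origin"], blk.data.max(axis=0)))
    return imagelets

if __name__ == "__main__":
    vol = np.random.rand(128, 128, 128).astype(np.float32)
    print(f"composited {len(render_pipeline(vol, iso=0.9))} imagelets")
```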
Abstract:
Many images consist of two or more 'phases', where a phase is a collection of homogeneous zones. For example, the phases may represent the presence of different sulphides in an ore sample. Frequently, these phases exhibit very little structure, though all connected components of a given phase may be similar in some sense. As a consequence, random set models are commonly used to model such images. The Boolean model and models derived from the Boolean model are often chosen. An alternative approach to modelling such images is to use the excursion sets of random fields to model each phase. In this paper, the properties of excursion sets will first be discussed in terms of modelling binary images. Ways of extending these models to multi-phase images will then be explored. A desirable feature of any model is that it can be fitted to data reasonably well. Different methods for fitting random set models based on excursion sets will be presented, and some of the difficulties with these methods will be discussed.
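As a concrete illustration of the excursion-set idea (a sketch only, with an arbitrary smoothing kernel and thresholds rather than parameters fitted to data), one can smooth white noise into an approximately Gaussian random field, threshold it at a level u to obtain a binary phase, and use several levels to obtain a multi-phase image:

```python
"""Illustrative sketch: a correlated Gaussian random field is obtained by
smoothing white noise; thresholding it gives an excursion set (binary image),
and two thresholds give a three-phase image. Parameter values are arbitrary."""
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
field = gaussian_filter(noise, sigma=8)      # correlated (approximately Gaussian) field
field /= field.std()                         # unit variance so the level u is meaningful

u = 0.5
binary_phase = field >= u                    # excursion set {x : Z(x) >= u}

# A simple multi-phase extension: partition the field by two levels,
# giving three phases labelled 0, 1, 2.
u1, u2 = -0.5, 0.5
phases = np.digitize(field, [u1, u2])
print("phase area fractions:", np.bincount(phases.ravel()) / phases.size)
```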
Abstract:
“Closing the gap in curriculum development leadership” is a Carrick-funded University of Queensland project which is designed to address two related gaps in current knowledge and in existing professional development programs for academic staff. The first gap is in our knowledge of curriculum and pedagogical issues as they arise in relation to multi-year sequences of study, such as majors in generalist degrees, or core programs in more structured degrees. While there is considerable knowledge of curriculum and pedagogy at the course or individual unit of study level (e.g. Philosophy I), there is very little properly conceptualised, empirically informed knowledge about student learning (and teaching) over, say, a three-year major sequence in a traditional Arts or Sciences subject. The Carrick-funded project aims to (begin to) fill this gap through bottom-up curriculum development projects across the range of UQ’s offerings. The second gap is in our professional development programs and, indeed, in our recognition and support for the people who are in charge of such multi-year sequences of study. The major convener or program coordinator is not as well supported, in Australian and overseas professional development programs, as the lecturer in charge of a single course (or unit of study). Nor is her work likely to be taken account of in workload calculations or for the purposes of promotion and career advancement more generally. The Carrick-funded project aims to fill this gap by developing, in consultation with crucial stakeholders, amendments to existing university policies and practices. The attached documents provide a useful introduction to the project. For more information, please contact Fred D’Agostino at f.dagostino@uq.edu.au.
Abstract:
The restructuring of power industries has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, as well as human knowledge, to expand the transmission network in open-access schemes. The method starts with a candidate pool of feasible expansion plans. Selection of the best candidates is then carried out through a MOOP approach in which multiple objectives are tackled simultaneously, aiming to integrate market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied in both stages to ensure that the selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and relevant analyses and discussions are presented.
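The selection stage can be pictured with a small, generic sketch (hypothetical objective names and numbers, not the paper's formulation or data): candidate plans carry a vector of objectives to be minimised, the non-dominated plans form the Pareto set, and a separate reliability check is applied before a plan is finalised.

```python
"""Generic Pareto-selection sketch for a candidate pool of expansion plans.
Objectives and values are invented for illustration."""
from typing import NamedTuple

class Plan(NamedTuple):
    name: str
    cost: float        # investment cost, to be minimised
    congestion: float  # a market/operation objective, to be minimised
    reliable: bool     # outcome of a separate reliability assessment

def dominates(a: Plan, b: Plan) -> bool:
    """True if plan a is at least as good as b in every objective and strictly better in one."""
    return (a.cost <= b.cost and a.congestion <= b.congestion
            and (a.cost < b.cost or a.congestion < b.congestion))

def pareto_front(pool):
    return [p for p in pool if not any(dominates(q, p) for q in pool if q is not p)]

pool = [Plan("A", 120, 0.30, True), Plan("B", 100, 0.45, True),
        Plan("C", 150, 0.20, False), Plan("D", 130, 0.40, True)]

candidates = pareto_front(pool)                   # MOOP selection stage
final = [p for p in candidates if p.reliable]     # reliability-criteria stage
print([p.name for p in final])                    # ['A', 'B']
```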
Abstract:
This paper reviews a wide range of tools for comprehensive sustainability assessments at whole tourism destinations, covering socio-cultural, economic and environmental issues. It considers their strengths, weaknesses and site-specific applicability, and is intended to facilitate their selection (and combination where necessary). Tools covered include Sustainability Indicators, Environmental Impact Assessment, Life Cycle Assessment, Environmental Audits, Ecological Footprints, Multi-Criteria Analysis and Adaptive Environmental Assessment. Guidelines for evaluating their suitability for specific sites and situations are given, as well as examples of their use.
Abstract:
A major challenge in successfully implementing transit-oriented development (TOD) is having a robust process that ensures effective appraisal, initiation and delivery of multi-stakeholder TOD projects. A step-by-step project development process can assist in the methodical design, evaluation, and initiation of TOD projects. Successful TOD requires attention to transit, mixed-use development and public space. Brisbane, Australia, provides a case study where recent planning policies and infrastructure documents have laid a foundation for TOD, but where barriers lie in precinct-level planning and project implementation. In this context, and perhaps in others, the research effort needs to shift toward identification of appropriate project processes and strategies. This paper presents the outcomes of research conducted to date. Drawing on the mainstream approach to project development and financial evaluation for property projects, key steps for potential use in successful delivery of TOD projects have been identified, including: establish the framework; location selection; precinct context review; preliminary precinct design; the initial financial viability study; the decision stage; establishment of project structure; land acquisition; development application; and project delivery. The appropriateness of this mainstream development and appraisal process will be tested through stakeholder research, and the proposed process will then be refined for adoption in TOD projects. It is suggested that the criteria for successful TOD should be broadened beyond financial concerns in order to deliver public sector support for project initiation.
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
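The role of the variable decision threshold can be illustrated with a toy sketch (synthetic data and an off-the-shelf scikit-learn MLP standing in for the paper's MLF model; the feature construction and labelling rule are invented for illustration): a feed-forward network produces an incident probability for each interval, and raising or lowering the alarm threshold trades detection rate against false alarms.

```python
"""Toy multi-layer feed-forward incident-detection sketch on synthetic data
(not the paper's 100-incident data set), showing a variable decision threshold."""
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 1000
# Synthetic, standardised features: [upstream speed, downstream speed, flow, occupancy]
X = rng.normal(size=(n, 4))
# Synthetic rule standing in for real incident labels: low downstream speed, high occupancy
y = ((X[:, 1] < -0.5) & (X[:, 3] > 0.5)).astype(int)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X[:700], y[:700])                      # train on the first 700 intervals

probs = model.predict_proba(X[700:])[:, 1]       # incident probability per test interval
for threshold in (0.3, 0.5, 0.7):                # variable decision threshold
    print(f"threshold {threshold}: {(probs >= threshold).sum()} alarms raised")
```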
Abstract:
Almost all clinical magnetic resonance imaging systems are based on circular cross-section magnets. Recent advances in elliptical cross-section RF probe and gradient coil hardware raise the question of whether elliptical cross-section magnet systems are feasible. This paper presents a methodology for rapidly calculating the magnetic fields generated by a multi-turn coil of elliptical cross-section and incorporates it in a stochastic optimization method for magnet design. An open magnet system of elliptical cross-section is designed that both reduces claustrophobia for patients and allows ready access by attending physicians. The magnet system is optimized for paediatric use. The coil geometry produced by the optimization method has several novel features.
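For a sense of the field calculation involved, here is a minimal Biot-Savart sketch (not the paper's method; semi-axes, current and evaluation points are arbitrary) for a single elliptical current loop; a multi-turn coil would superpose many such loops, and a stochastic optimiser would then vary their geometry against the design targets.

```python
"""Discretised Biot-Savart field of one elliptical current loop in the z = 0
plane, evaluated on the loop axis. Illustrative values only."""
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def b_field_elliptical_loop(point, a=0.5, b=0.3, current=100.0, nseg=2000):
    """Magnetic flux density (T) at `point` from an elliptical loop with
    semi-axes a (x) and b (y), by summing mu0*I/(4*pi) * dl x r / |r|^3."""
    t = np.linspace(0.0, 2.0 * np.pi, nseg, endpoint=False)
    pts = np.column_stack((a * np.cos(t), b * np.sin(t), np.zeros_like(t)))
    seg = np.roll(pts, -1, axis=0) - pts          # segment vectors dl
    mid = pts + 0.5 * seg                         # segment midpoints
    r = np.asarray(point, dtype=float) - mid      # vectors from segments to field point
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4.0 * np.pi) * np.cross(seg, r) / r_norm**3
    return dB.sum(axis=0)

if __name__ == "__main__":
    for z in (0.0, 0.1, 0.3):
        Bz = b_field_elliptical_loop((0.0, 0.0, z))[2]
        print(f"z = {z:4.1f} m : Bz = {Bz * 1e3:.3f} mT")
```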
Abstract:
This review explores the influence of print and electronic media on suicide, and considers both real and fictional deaths. The conclusion appears inescapable that reports about celebrities that are multi-modal, repeated, explicit, placed on the front page, glorify the suicide, and describe the method lead to an increase in deaths from suicide, particularly in the region in which the reports are published. The paper argues that even if there were multi-national agreement on international guidelines, the media would continue to report suicide when it is considered a matter of public interest. What appears crucial is a collaborative approach between professionals and the media to promote a negative attitude toward suicide without increasing stigma toward those with mental health problems.
Abstract:
Bioelectrical impedance analysis (BIA) offers the potential for a simple, portable and relatively inexpensive technique for the in vivo measurement of total body water (TBW). The potential of BIA as a technique of body composition analysis is even greater when one considers that body water can be used as a surrogate measure of lean body mass. However, BIA has not found universal acceptance even with the introduction of multi-frequency BIA (MFBIA) which, potentially, may improve the predictive accuracy of the measurement. There are a number of reasons for this lack of acceptance, although perhaps the major reason is that no single algorithm has been developed which can be applied to all subject groups. This may be due, in part, to the commonly used wrist-to-ankle protocol which is not indicated by the basic theory of bioimpedance, where the body is considered as five interconnecting cylinders. Several workers have suggested the use of segmental BIA measurements to provide a protocol more in keeping with basic theory. However, there are other difficulties associated with the application of BIA, such as effects of hydration and ion status, posture and fluid distribution. A further putative advantage of MFBIA is the independent assessment not only of TBW but also of the extracellular fluid volume (ECW), hence heralding the possibility of being able to assess the fluid distribution between these compartments. Results of studies in this area have been, to date, mixed. Whereas strong relationships of impedance values at low frequencies with ECW, and at high frequencies with TBW, have been reported, changes in impedance are not always well correlated with changes in the size of the fluid compartments (assessed by alternative and more direct means) in pathological conditions. Furthermore, the theoretical advantages of Cole-Cole modelling over selected frequency prediction have not always been apparent. This review will consider the principles, methodology and applications of BIA. The principles and methodology will be considered in relation to the basic theory of BIA and difficulties experienced in its application. The relative merits of single and multiple frequency BIA will be addressed, with particular attention to the latter's role in the assessment of compartmental fluid volumes. (C) 1998 Elsevier Science Ltd. All rights reserved.
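The frequency dependence that MFBIA exploits is often summarised by the Cole-Cole model; the short sketch below uses the textbook form of that model with arbitrary parameter values (not data from this review) to show impedance approaching R0 at low frequencies, where current is largely confined to the extracellular space, and Rinf at high frequencies, where it also crosses cell membranes.

```python
"""Cole-Cole impedance sketch with illustrative parameter values:
Z(f) = Rinf + (R0 - Rinf) / (1 + (j*2*pi*f*tau)**(1 - alpha))."""
import numpy as np

def cole_cole_impedance(freq_hz, r0=750.0, rinf=500.0, tau=1e-6, alpha=0.1):
    """Complex impedance (ohm) of the Cole-Cole model at frequency freq_hz."""
    jwt = 1j * 2.0 * np.pi * freq_hz * tau
    return rinf + (r0 - rinf) / (1.0 + jwt ** (1.0 - alpha))

for f in (5e3, 50e3, 500e3):        # frequencies typical of MFBIA sweeps (Hz)
    z = cole_cole_impedance(f)
    print(f"{f/1e3:6.0f} kHz : |Z| = {abs(z):6.1f} ohm")
```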
Abstract:
This paper describes U2DE, a finite-volume code that numerically solves the Euler equations. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm in a shock tube. From the simulations, the speed of the developing shock wave was recorded and compared with other estimates. The ability of U2DE to compute shock speed was confirmed by comparing numerical results with the analytic solution for an ideal shock tube. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed with the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and, thus, were closer to the experimental measurements. This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments.
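For reference, the ideal (one-dimensional, instantaneous-diaphragm) shock-tube relation that such simulations are compared against can be evaluated numerically; the sketch below uses the standard textbook relation with illustrative gas properties and pressure ratio, not the conditions of the paper's test cases.

```python
"""Classical ideal shock-tube relation: solve for the shock pressure ratio
p2/p1 given the driver/driven ratio p4/p1, then convert to shock speed."""
import numpy as np
from scipy.optimize import brentq

def shock_speed_ideal(p4_over_p1, a1=347.0, a4=347.0, g1=1.4, g4=1.4):
    """Shock speed (m/s) for driver/driven pressure ratio p4/p1; a1, a4 are
    sound speeds and g1, g4 the specific-heat ratios of the driven/driver gas."""
    def residual(p2_over_p1):
        term = (g4 - 1.0) * (a1 / a4) * (p2_over_p1 - 1.0) / np.sqrt(
            2.0 * g1 * (2.0 * g1 + (g1 + 1.0) * (p2_over_p1 - 1.0)))
        return p2_over_p1 * (1.0 - term) ** (-2.0 * g4 / (g4 - 1.0)) - p4_over_p1

    p2_over_p1 = brentq(residual, 1.0 + 1e-9, p4_over_p1)   # shock pressure ratio
    shock_mach = np.sqrt((g1 + 1.0) / (2.0 * g1) * (p2_over_p1 - 1.0) + 1.0)
    return shock_mach * a1

print(f"ideal shock speed at p4/p1 = 30: {shock_speed_ideal(30.0):.0f} m/s")
```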
Abstract:
This study tested hypotheses that locus of causality attributions for the academic performance of others are influenced by whether the other is a specific individual, or a typical other, and whether the other is similar or dissimilar to self. The research was carried out in two studies. Study 1 entailed development of two scales to measure perceptions of interpersonal similarity: 254 Australian undergraduates rated their similarity to either a specific other or to typical other students. In Study 2, 332 subjects completed one of the 16-item scales developed in Study 1, along with Rosenberg's self-esteem scale, and self-attribution and other-attribution versions of the Multidimensional Multi-attribution Causation Scale (MMCS). Results showed that attributions for the academic performance of others were strongly affected by whether the other was perceived to be similar or dissimilar to self, especially when the other was a specific individual. In particular, causal attributions for similar specific others were more favourable than attributions for self.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
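The duplication issue and one common remedy can be shown in a small single-process sketch (a hypothetical 2x2 grid and toy rectangles, not the paper's algorithms): each MBR is assigned to every grid cell it overlaps (multi-assignment), the filter step intersects MBRs within a cell, and a reference-point test reports each candidate pair from exactly one cell so that duplicated objects do not yield duplicate results.

```python
"""Grid-partitioned spatial join sketch with MBR filtering and
reference-point de-duplication of multi-assigned pairs."""
from itertools import product

GRID = 2                 # 2 x 2 grid over the unit square
CELL = 1.0 / GRID

def cells_for(mbr):
    """All grid cells overlapped by an MBR (x0, y0, x1, y1): multi-assignment."""
    x0, y0, x1, y1 = mbr
    return [(cx, cy) for cx, cy in product(range(GRID), repeat=2)
            if x0 < (cx + 1) * CELL and x1 > cx * CELL
            and y0 < (cy + 1) * CELL and y1 > cy * CELL]

def intersect(a, b):
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def spatial_join(r_set, s_set):
    # Partition both inputs; each partition could be processed in parallel.
    parts = {}
    for tag, objs in (("R", r_set), ("S", s_set)):
        for oid, mbr in objs.items():
            for cell in cells_for(mbr):
                parts.setdefault(cell, {"R": [], "S": []})[tag].append((oid, mbr))
    results = []
    for (cx, cy), part in parts.items():
        for (rid, rm), (sid, sm) in product(part["R"], part["S"]):
            overlap = intersect(rm, sm)            # filter step on MBRs
            if overlap is None:
                continue
            # Report the pair only in the cell owning the overlap's lower-left
            # corner, so multi-assigned pairs are not reported twice.
            if int(overlap[0] / CELL) == cx and int(overlap[1] / CELL) == cy:
                results.append((rid, sid))         # refine step on exact geometry would follow
    return results

R = {"r1": (0.1, 0.1, 0.6, 0.6), "r2": (0.7, 0.7, 0.9, 0.9)}
S = {"s1": (0.4, 0.4, 0.8, 0.8)}
print(spatial_join(R, S))                          # [('r1', 's1'), ('r2', 's1')]
```

A useful property of the reference-point test is that it is purely local to each cell, so duplicates are removed without any coordination between partitions.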