869 results for artifacts


Relevance: 10.00%

Abstract:

Since the formal recognition of practice-led research in the 1990s, many higher research degree candidates in art, design and media have submitted creative works along with an accompanying written document or ‘exegesis’ for examination. Various models for the exegesis have been proposed in university guidelines and academic texts during the past decade, and students and supervisors have experimented with its contents and structure. With a substantial number of exegeses submitted and archived, it has now become possible to move beyond proposition to empirical analysis. In this article we present the findings of a content analysis of a large, local sample of submitted exegeses. We identify the emergence of a persistent pattern in the types of content included as well as overall structure. Besides an introduction and conclusion, this pattern includes three main parts, which can be summarized as situating concepts (conceptual definitions and theories); precedents of practice (traditions and exemplars in the field); and researcher’s creative practice (the creative process, the artifacts produced and their value as research). We argue that this model combines earlier approaches to the exegesis, which oscillated between academic objectivity, by providing a contextual framework for the practice, and personal reflexivity, by providing commentary on the creative practice. But this model is more than simply a hybrid: it provides a dual orientation, which allows the researcher to both situate their creative practice within a trajectory of research and do justice to its personally invested poetics. By performing the important function of connecting the practice and creative work to a wider emergent field, the model helps to support claims for a research contribution to the field. We call it a connective model of exegesis.

Relevance: 10.00%

Abstract:

Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20 slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and window levels for accurate geometric visualisation were determined. 
The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images, using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer.
Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
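The percentage-of-maximum threshold contouring described in this abstract can be illustrated with a minimal sketch. The function name, array size and uptake values below are hypothetical; a real protocol would operate on calibrated DICOM PET volumes, not a synthetic array.

```python
import numpy as np

def threshold_contour(pet_slice, threshold_pct):
    """Mark every voxel whose uptake exceeds a fixed percentage
    of the maximum uptake within the volume of interest."""
    max_uptake = pet_slice.max()
    return pet_slice >= (threshold_pct / 100.0) * max_uptake

# Synthetic slice: background uptake 1.0 with a 10x10 high-uptake region at 8.0.
pet_slice = np.ones((64, 64))
pet_slice[20:30, 20:30] = 8.0

mask = threshold_contour(pet_slice, 40)  # threshold = 40% of max = 3.2
print(mask.sum())  # counts the 10x10 high-uptake region: 100 voxels
```

As the abstract notes, the appropriate percentage depends on the target-to-background ratio: at the 8:1 ratio used here a 40% threshold separates the region cleanly, while lower ratios or respiration-induced blurring would shift the optimal value.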

Relevance: 10.00%

Abstract:

This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; examples are the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
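As a rough illustration of the non-parametric, time-frequency route to IF estimation described in this abstract, the sketch below estimates IF as the location of the peak of a short-time Fourier spectrum in each frame. This is a deliberately simple stand-in for the quadratic TFDs and T-class kernels the thesis proposes; the function name and parameters are illustrative only.

```python
import numpy as np

def if_estimate_stft(x, fs, win=256, hop=64):
    """Estimate instantaneous frequency frame by frame as the
    peak of the windowed Fourier spectrum (a basic non-parametric
    time-frequency estimator)."""
    freqs = np.fft.rfftfreq(win, 1 / fs)
    window = np.hanning(win)
    estimates = []
    for start in range(0, len(x) - win, hop):
        frame = x[start:start + win] * window
        spectrum = np.abs(np.fft.rfft(frame))
        estimates.append(freqs[spectrum.argmax()])
    return np.array(estimates)

# Linear-FM (chirp) test signal: IF sweeps from 50 Hz to 150 Hz over 1 s.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (50 * t + 50 * t**2))  # f(t) = 50 + 100*t

f_hat = if_estimate_stft(x, fs)  # rises from about 60 Hz toward 135 Hz
```

A peak-of-spectrum estimator like this has resolution limited to fs/win per bin and smears rapidly changing frequencies across each frame, which is exactly the limitation that motivates the higher-resolution adaptive TFDs discussed above.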

Relevance: 10.00%

Abstract:

The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provide a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and the culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from an anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization.
The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.

Relevance: 10.00%

Abstract:

This paper reports on the study of passenger experiences and how passengers interact with services, technology and processes at an airport. As part of our research, we have followed people through the airport from check-in to security and from security to boarding. Data was collected by approaching passengers in the departures concourse of the airport and asking for their consent to be videotaped. The data was then coded, and the analysis focused on both discretionary and process-related passenger activities. Our findings show the interdependence between activities and passenger experiences. Within all activities, passengers interact with processes, domain-dependent technology, services, personnel and artifacts. These levels of interaction impact on passenger experiences and are interdependent. The emerging taxonomy of activities consists of (i) ownership-related activities, (ii) group activities, (iii) individual activities (such as activities at the domain interfaces) and (iv) concurrent activities. This classification contributes to the development of descriptive models of passenger experiences and of how these activities affect the facilitation and design of future airports.

Relevance: 10.00%

Abstract:

Transitions represents a selection of works by 3rd-year QUT design students, undertaken as part of their coursework entitled "Environments in Transitions". The work focuses upon the migration of ideas and aesthetics from East to West, with particular consideration of the influence of Japanese woodblock prints, Wabi-Sabi themes and decorative motifs upon European design during the turn of the 19th century. The works exhibited included design artifacts on various themes, and associated print work which informed the design process. A small exhibition publication accompanied the exhibition and talk.

Relevance: 10.00%

Abstract:

This article focuses on how teachers worked to build a meaningful curriculum around changes to a neighborhood and school grounds in a precinct listed for urban renewal. Drawing on a long-term relationship with the principal and one teacher, the researchers planned and designed a collaborative project to involve children as active participants in the redevelopment process, negotiating and redesigning an area between the preschool and the school. The research investigated spatial literacies, that is, ways of thinking about and representing the production of spaces, and critical literacies, in this instance how young people might have a say in remaking part of their school grounds. Data included videotapes of key events, interviews, and an archive of the elementary students' artifacts experimenting with spatial literacies. The project builds on the insights of community members and researchers working for social justice in high-poverty areas internationally that indicate the importance of education, local action, family, and youth involvement in building sustainable and equitable communities.

Relevance: 10.00%

Abstract:

This workshop focuses upon research about the qualities of community in music and of music in community facilitated by technologically supported relationships. Generative media systems present an opportunity for users to leverage computational systems to form new relationships through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences for users with little formal musical or artistic expertise. This workshop examines the relational affordances of these systems, evidenced by selected field data drawn from the Network Jamming Project. These generative performance systems enable access to unique ensembles with very little musical knowledge or skill and offer the possibility of interactive relationships with artists and musical knowledge through collaborative performance. In this workshop we will focus on data that highlights how these simulated experiences might lead to understandings that may be of social benefit. Conference participants will be invited to jam in real time using virtual interfaces and to evaluate purposively selected video artifacts that demonstrate different kinds of interactive relationships with artists, peers, and community and that enrich the sense of expressive self. Theoretical insights about meaningful engagement drawn from the longitudinal and cross-cultural experiences will underpin the discussion and practical presentation.

Relevance: 10.00%

Abstract:

Several studies have developed metrics for software quality attributes of object-oriented designs such as reusability and functionality. However, metrics which measure the quality attribute of information security have received little attention. Moreover, existing security metrics measure either the system from a high level (i.e. the whole system’s level) or from a low level (i.e. the program code’s level). These approaches make it hard and expensive to discover and fix vulnerabilities caused by software design errors. In this work, we focus on the design of an object-oriented application and define a number of information security metrics derivable from a program’s design artifacts. These metrics allow software designers to discover and fix security vulnerabilities at an early stage, and help compare the potential security of various alternative designs. In particular, we present security metrics based on composition, coupling, extensibility, inheritance, and the design size of a given object-oriented, multi-class program from the point of view of potential information flow.
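To make the idea of design-level security metrics concrete, here is a toy sketch that computes one coupling-oriented measure over a hypothetical object-oriented design model: the fraction of inter-class couplings that reach a class holding classified data. The class names, the model representation and the metric itself are all illustrative assumptions, not the specific metrics defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DesignClass:
    name: str
    attributes: dict                 # attribute name -> True if classified
    uses: list = field(default_factory=list)  # names of coupled classes

def classified_coupling(classes):
    """Fraction of couplings whose target class holds classified data,
    a rough proxy for potential information-flow exposure at design time."""
    by_name = {c.name: c for c in classes}
    links = [(c.name, u) for c in classes for u in c.uses]
    risky = sum(1 for _, u in links if any(by_name[u].attributes.values()))
    return risky / len(links) if links else 0.0

design = [
    DesignClass("Account", {"balance": True, "nickname": False}),
    DesignClass("Logger", {"buffer": False}),
    DesignClass("Ui", {}, uses=["Account", "Logger"]),
    DesignClass("Report", {}, uses=["Account"]),
]
print(classified_coupling(design))  # 2 of 3 couplings reach classified data
```

A designer comparing two candidate designs could compute such metrics for each and prefer the one exposing fewer couplings to classified state, which is the early-stage comparison the abstract advocates.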

Relevance: 10.00%

Abstract:

Being a novice researcher undertaking research interviews with young children requires an understanding of the interview process. By investigating the interaction between a novice researcher undertaking her first interview and a child participant, we attend to theoretical principles, such as the competence of young children as informants, and highlight practical matters when interviewing young children. A conversation analysis approach examines the talk preceding and following a sticker task. By highlighting the conversational features of a research interview, researchers can better understand the co-constructed nature of the interview. This paper provides insights into how to prepare for the interview, how to manage the interview context to recognize the active participation of child participants, and the value of artifacts in promoting interaction. These insights make more transparent the interactional process of a research interview and become part of the researcher's collection of devices for managing the conduct of research interviews.

Relevance: 10.00%

Abstract:

The transformation of urban spaces that occurs once darkness falls is simultaneously exhilarating and menacing, and over the past 20 months we have investigated the potential for mobile technology to help users manage their personal safety concerns in the city at night. Our findings subverted commonly held notions of vulnerability, with the threat of violence felt equally by men and women. But while women felt protected because of their mobile technology, men dismissed it as digital Man Mace. We addressed this macho design challenge by studying remote engineers in outback Australia to inspire our personal safety design prototype MATE (Mobile Artifact for Taming Environments).

Relevance: 10.00%

Abstract:

The effects associated with culture, the values inherent in cultures and the identification of cultural assumptions are popular topics in recent management and Information Systems (IS) research. The main focus in relevant IS research over the years has been on the comparison of cultural artifacts in different cultural settings. Despite these studies, we need to ask whether there is a general approach by which culture can be researched in a rigorous manner. What are the issues that arise in cross-cultural research that have a bearing on decisions about a suitable research approach? What are the most appropriate methodologies to be used in cross-cultural research? Which is more appropriate: a qualitative, a quantitative or a mixed-method research approach? This paper will discuss important considerations in the process of deciding on the best research approach for cross-cultural projects. A case study will then be reported as an example revealing the merits of integrating qualitative and quantitative approaches, followed by a thorough discussion of the issues which may arise during this process.

Relevance: 10.00%

Abstract:

In their quest for resources to support children's early literacy learning and development, parents encounter and traverse different spaces in which discourses and artifacts are produced and circulated. This paper uses conceptual tools from the field of geosemiotics to examine some commercial spaces designed for parents and children which foreground preschool learning and development. Drawing on data generated in a wider study, I discuss some of the ways in which the material and virtual commercial spaces of a transnational shopping mall company and an educational toy company operate as sites of encounter between discourses and artifacts about children's early learning and parents of preschoolers. I consider how companies connect with and 'situate' people as parents and customers, and then offer pathways designed for parents to follow as they attempt to meet their very young children's learning and development needs. I argue that these pathways are both material and ideological, and that they increasingly tend to lead parents to the online commercial spaces of the World Wide Web. I show how companies are using the online environment, and hybrid offline and online spaces and flows, to reinforce an image of themselves as authoritative brokers of childhood resources for parents, an image that is highly valuable in a policy climate which foregrounds lifelong learning and school readiness.

Relevance: 10.00%

Abstract:

Purpose: Process modeling is a complex organizational task that requires many iterations and communication between the business analysts and the domain specialists. The challenge of process modeling is exacerbated when the process of modeling has to be performed in a cross-organizational, distributed environment. In this paper we suggest a 3D environment for collaborative process modeling, using Virtual World technology. Design/methodology/approach: We suggest a new collaborative process modeling approach based on Virtual World technology. We describe the design of an innovative prototype collaborative process modeling approach, implemented as a 3D BPMN modeling environment in Second Life. We use a case study to evaluate the suggested approach. Findings: Based on our case study application, we show that our approach increases user empowerment and adds significantly to the collaboration and consensual development of process models even when the relevant stakeholders are geographically dispersed. Research limitations/implications: We present design work and a case study. More research is needed to more thoroughly evaluate the presented approach in a variety of real-life process modeling settings. Practical implications: Our research outcomes, as design artifacts, are directly available and applicable by business process management professionals and can be used by business, system and process analysts in real-world practice. Originality/value: Our research is the first reported attempt to develop a process modeling approach on the basis of Virtual World technology. We describe a novel and innovative 3D BPMN modeling environment in Second Life.

Relevance: 10.00%

Abstract:

Characteristics of surveillance video generally include low resolution and poor quality due to environmental, storage and processing limitations. It is extremely difficult for computers and human operators to identify individuals from these videos. To overcome this problem, super-resolution can be used in conjunction with an automated face recognition system to enhance the spatial resolution of video frames containing the subject and narrow down the number of manual verifications performed by the human operator by presenting a list of most likely candidates from the database. As the super-resolution reconstruction process is ill-posed, visual artifacts are often generated as a result. These artifacts can be visually distracting to humans and/or affect machine recognition algorithms. While it is intuitive that higher resolution should lead to improved recognition accuracy, the effects of super-resolution and such artifacts on face recognition performance have not been systematically studied. This paper aims to address this gap while illustrating that super-resolution allows more accurate identification of individuals from low-resolution surveillance footage. The proposed optical flow-based super-resolution method is benchmarked against Baker et al.'s hallucination and Schultz et al.'s super-resolution techniques on images from the Terrascope and XM2VTS databases. Ground truth and interpolated images were also tested to provide a baseline for comparison. Results show that a suitable super-resolution system can improve the discriminability of surveillance video and enhance face recognition accuracy. The experiments also show that Schultz et al.'s method fails when dealing with surveillance footage due to its assumption of rigid objects in the scene. The hallucination and optical flow-based methods performed comparably, with the optical flow-based method producing less visually distracting artifacts that interfered with human recognition.