Abstract:
The purpose of this study is to contribute to the cross-disciplinary body of literature on identity and organisational culture. This study empirically investigated the Hatch and Schultz (2002) Organisational Identity Dynamics (OID) model to examine linkages between identity, image, and organisational culture. The study used processes defined in the OID model as a theoretical frame for understanding the relationships between actual and espoused identity manifestations across visual identity (VI), corporate identity (CI), and organisational identity (OI). The linking processes of impressing, mirroring, reflecting, and expressing were discussed at three distinct levels in the organisation. The overarching research question, "How does the organisational identity dynamics process manifest itself in practice at different levels within an organisation?", was used as a means of providing empirical grounding for the previously theoretical OID model. Case study analysis was utilised to provide exploratory data across three organisational groups: Level A, senior marketing and corporate communications management; Level B, marketing and corporate communications staff; and Level C, non-marketing managers and employees. Data were collected via 15 in-depth interviews, with documentary analysis used as a supporting mechanism to provide triangulation in analysis. Data were analysed against the impressing, mirroring, reflecting, and expressing constructs, with specific criteria developed from the literature to provide a detailed analysis of each process. Conclusions revealed marked differences in the ways in which OID processes occurred across different levels, with implications for the ways in which VI, CI, and OI interact to develop holistic identity across organisational levels. Implications for theory detail the need to understand and utilise cultural understanding in identity programs, as well as the value of developing identity communications that represent an actual rather than an espoused position.
Abstract:
Bronfenbrenner's Bioecological Model, expressed as the developmental equation D = f(PPCT), is the theoretical framework for two studies that bring together diverse strands of psychology to study the work-life interface of working adults. Occupational and organizational psychology focuses on the demands and resources of work and family, without examining the individual in detail. Health and personality psychology examine the individual, but without emphasis on the individual's work and family roles. The current research used Bronfenbrenner's theoretical framework to combine individual differences, work and family in order to understand how these factors influence the working adult's psychological functioning. Competent development was defined as high well-being (measured as life satisfaction and psychological well-being) and high work engagement (as work vigour, work dedication and absorption in work), together with the absence of mental illness (depression, anxiety and stress) and the absence of burnout (emotional exhaustion, cynicism and professional efficacy). Studies 1 and 2 were linked, with Study 1 a cross-sectional survey and Study 2 a prospective panel study that followed on from the data used in Study 1. Participants were recruited from a university and from a large public hospital to take part in a 3-wave, online study in which they completed identical surveys at 3-4 month intervals (N = 470 at Time 1 and N = 198 at Time 3). In Study 1, hierarchical multiple regressions were used to assess the effects of individual differences (Block 1, e.g. dispositional optimism, coping self-efficacy, perceived control of time, humour), work and family variables (Block 2, e.g. affective commitment, skill discretion, work hours, children, marital status, family demands) and the work-life interface (Block 3, e.g. direction and quality of spillover between roles, work-life balance) on the outcomes. There was a mosaic of predictors of the outcomes, with a group of seven that were the most frequent significant predictors and which represented the individual (dispositional optimism and coping self-efficacy), the workplace (skill discretion, affective commitment and job autonomy) and the work-life interface (negative work-to-family spillover and negative family-to-work spillover). Interestingly, gender and working hours were not important predictors. The effects of job social support (generally and for work-life issues), perceived control of time and egalitarian gender roles on the outcomes were mediated by negative work-to-family spillover, particularly for emotional exhaustion. Further, the effect of negative spillover on depression, anxiety and work engagement was moderated by the individual's personal and workplace resources. Study 2 modelled the longitudinal relationships between the group of the seven most frequent predictors and the outcomes. Using a set of non-nested models, the relative influences of concurrent functioning, stability and change over time were assessed. The modelling began with models at Time 1, which formed the basis for confirmatory factor analysis (CFA) to establish the underlying relationships between the variables and to calculate the composite variables for the longitudinal models. The CFAs were well fitting, with few modifications needed to ensure good fit. However, using burnout and work engagement together required additional analyses to resolve poor fit, with one factor (representing a continuum from burnout to work engagement) being the only acceptable solution.
Five different longitudinal models were investigated, as the Well-Being, Mental Distress, Well-Being-Mental Health, Work Engagement and Integrated models, using differing combinations of the outcomes. The best fitting model for each was a reciprocal model that was trimmed of trivial paths. The strongest paths were the synchronous correlations and the paths within variables over time. The reciprocal paths were more variable, with weak to mild effects. There was evidence of gain and loss spirals between the variables over time, with a slight net gain in resources that may provide the mechanism for the accumulation of psychological advantage over a lifetime. The longitudinal models also showed that there are leverage points at which personal, psychological and managerial interventions can be targeted to bolster the individual and provide supportive workplace conditions that also minimise negative spillover. Bronfenbrenner's developmental equation proved a useful framework for the current research, showing the importance of the person as central to the individual's experience of the work-life interface. By taking control of their own life, the individual can craft a life path that is most suited to their own needs. Competent developmental outcomes were most likely where the person was optimistic and had high self-efficacy, worked in a job to which they were attached and which allowed them to use their talents, and experienced little negative spillover between their work and family domains. In this way, individuals had greater well-being, better mental health and greater work engagement at any one time and across time.
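To make the Study 1 design concrete, the following is a minimal sketch of a three-block hierarchical regression of the kind described above, written in Python with pandas and statsmodels. The data file, column names and outcome variable are hypothetical placeholders (assumed already coded numerically), not the study's actual measures.

```python
# Illustrative three-block hierarchical regression: enter each block
# cumulatively and inspect the change in R-squared at each step.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_wave1.csv")  # hypothetical data file

block1 = ["optimism", "coping_self_efficacy", "perceived_time_control", "humour"]
block2 = ["affective_commitment", "skill_discretion", "work_hours",
          "children", "marital_status", "family_demands"]
block3 = ["neg_work_to_family", "neg_family_to_work", "work_life_balance"]

outcome = df["life_satisfaction"]  # hypothetical outcome column

predictors, prev_r2 = [], 0.0
for name, block in [("Block 1", block1), ("Block 2", block2), ("Block 3", block3)]:
    predictors += block
    X = sm.add_constant(df[predictors])
    model = sm.OLS(outcome, X, missing="drop").fit()
    print(f"{name}: R2 = {model.rsquared:.3f}, delta R2 = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```

The change in R-squared as each block enters is the usual test of whether work/family variables and the work-life interface explain variance beyond individual differences.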
Abstract:
Numerous studies have reported links between insulin-like growth factors (IGFs) and the extracellular matrix protein vitronectin (VN). We have previously reported that IGF-I binds to VN via IGF-binding proteins (IGFBPs) to stimulate HaCaT and MCF-7 cell migration. Here, we detail the functional evaluation of IGFBP-1, -2, -3, -4 and -6 in the presence and absence of IGF-I and VN. The data presented here, combined with our prior data on IGFBP-5, suggest that IGFBP-3, -4 and -5 are the most effective at stimulating cell migration in combination with IGF-I and VN. In addition, we demonstrate that different regions within IGFBP-3 and -4 are critical for complex formation. Furthermore, we examine whether multi-protein complexes of IGF-I and IGFBPs associated with fibronectin and collagen IV are also able to enhance functional biological responses.
Abstract:
Since its initial proposal in 1998, alkaline hydrothermal processing has rapidly become an established technology for the production of titanate nanostructures. This simple, highly reproducible process has gained a strong research following since its conception. However, complete understanding and elucidation of nanostructure phase and formation have not yet been achieved. Without fully understanding phase, formation, and other important competing effects of the synthesis parameters on the final structure, the maximum potential of these nanostructures cannot be obtained. Therefore, this study examined the influence of synthesis parameters on the formation of titanate nanostructures produced by alkaline hydrothermal treatment. The parameters included alkaline concentration, hydrothermal temperature, the precursor material's crystallite size, and the phase of the titanium dioxide precursor (TiO2, or titania). The nanostructures' phase and morphology were analysed using X-ray diffraction (XRD), Raman spectroscopy and transmission electron microscopy. X-ray photoelectron spectroscopy (XPS), dynamic light scattering (non-invasive backscattering), nitrogen sorption, and Rietveld analysis were used to determine phase, particle size, surface area, and phase concentrations, respectively. This project rigorously examined the effect of alkaline concentration and hydrothermal temperature on three commercially sourced and two self-prepared TiO2 powders. These precursors consisted of both pure- and mixed-phase anatase and rutile polymorphs, and were selected to cover a range of phase concentrations and crystallite sizes. Typically, these precursors were treated with 5–10 M sodium hydroxide (NaOH) solutions at temperatures between 100–220 °C. Both nanotube and nanoribbon morphologies could be produced depending on the combination of these hydrothermal conditions. Both titania and titanate phases are composed of TiO6 units assembled in different combinations. The arrangement of these atoms affects the binding energy of the Ti–O bonds. Raman spectroscopy and XPS were therefore employed in a preliminary study of phase determination for these materials. The change from a titania to a titanate binding energy was investigated, and the transformation of titania precursor into nanotubes and titanate nanoribbons was directly observed by these methods. Evaluation of the Raman and XPS results indicated a strengthening in the binding energies of both the Ti (2p3/2) and O (1s) bands, which correlated with an increase in strength and decrease in resolution of the characteristic nanotube doublet observed between 320 and 220 cm⁻¹ in the Raman spectra of these products. The effect of phase and crystallite size on nanotube formation was examined over a series of temperatures (100–200 °C in 20 °C increments) at a set alkaline concentration (7.5 M NaOH). These parameters were investigated by employing both pure- and mixed-phase precursors of anatase and rutile. This study indicated that both the crystallite size and phase affect nanotube formation, with rutile requiring a greater driving force (essentially 'harsher' hydrothermal conditions) than anatase to form nanotubes, while larger crystallite forms of the precursor also appeared to impede nanotube formation slightly. These parameters were further examined in later studies.
The influence of alkaline concentration and hydrothermal temperature was systematically examined for the transformation of Degussa P25 into nanotubes and nanoribbons, and exact conditions for nanostructure synthesis were determined. Correlation of these data sets resulted in the construction of a morphological phase diagram, which is an effective reference for nanostructure formation: it effectively provides a 'recipe book' for the formation of titanate nanostructures. Morphological phase diagrams were also constructed for larger, near phase-pure anatase and rutile precursors, to further investigate the influence of hydrothermal reaction parameters on the formation of titanate nanotubes and nanoribbons. The effects of alkaline concentration, hydrothermal temperature, and crystallite phase and size are observed when the three morphological phase diagrams are compared. Through the analysis of these results, it was determined that alkaline concentration and hydrothermal temperature affect nanotube and nanoribbon formation independently through a complex relationship, where nanotubes are primarily affected by temperature, whilst nanoribbons are strongly influenced by alkaline concentration. Crystallite size and phase also affected nanostructure formation: smaller precursor crystallites formed nanostructures at reduced hydrothermal temperatures, and rutile displayed a slower rate of precursor consumption compared to anatase, with incomplete conversion observed for most hydrothermal conditions. The incomplete conversion of rutile into nanotubes was examined in detail in the final study, which selectively examined the kinetics of precursor dissolution in order to understand why rutile converted incompletely. This was achieved by selecting a single hydrothermal condition (9 M NaOH, 160 °C) where nanotubes are known to form from both anatase and rutile, and quenching the synthesis after 2, 4, 8, 16 and 32 hours. The influence of precursor phase on nanostructure formation was explicitly determined to be due to different dissolution kinetics, with anatase exhibiting zero-order dissolution and rutile second-order. This difference in kinetic order cannot be simply explained by the variation in crystallite size, as the inherent surface areas of the two precursors were determined to have first-order relationships with time. Therefore, the crystallite size (and inherent surface area) does not affect the overall kinetic order of dissolution; rather, it determines the rate of reaction. Finally, nanostructure formation was found to be controlled by the availability of dissolved titanium (Ti4+) species in solution, which is mediated by the dissolution kinetics of the precursor.
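For readers unfamiliar with the kinetic orders mentioned above, the standard integrated rate laws are sketched below; C denotes the concentration of undissolved precursor and k the rate constant. These are textbook forms consistent with the reported orders, not the thesis's own fitted expressions.

```latex
% Zero-order dissolution (anatase): a constant rate, independent of
% the amount of precursor remaining, so conversion runs to completion.
\[
  -\frac{dC}{dt} = k_0 \quad\Rightarrow\quad C(t) = C_0 - k_0 t
\]
% Second-order dissolution (rutile): the rate falls off rapidly as the
% precursor is consumed, consistent with the incomplete conversion observed.
\[
  -\frac{dC}{dt} = k_2 C^2 \quad\Rightarrow\quad \frac{1}{C(t)} = \frac{1}{C_0} + k_2 t
\]
```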
Abstract:
This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste—gross pollutants—from contaminating the environment. The demand for these GPTs calls for stringent quality control, and this research provides a foundation for rigorously examining the devices. A novel and comprehensive testing approach to examine a dry sump GPT was developed. The GPT is designed with internal screens to capture gross pollutants—organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach comprises four additional aspects of this research: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies, and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operation of GPTs and the management of street waste in the urban environment. Gross pollutant traps—including patented and registered designs developed by industry—have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental Water Resource Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs largely rely on field evaluations alone, which are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs for new devices in the stormwater industry are not well established. The current limitations in GPT evaluation methodologies have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume, and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT, as shown by this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly the custom-made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz—dependent upon fluid velocities—which was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation was the establishment of methodologies for the hydrodynamic measurements and gross pollutant capture/retention experiments.
The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom modified artificial gross pollutants. Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT rapidly deteriorates. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be more effectively planned when the GPT capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockages. The results also revealed that the GPT is designed with an efficient high flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers a better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.
Abstract:
A concise introduction to the key ideas and issues in the study of media economics, drawing on a broad range of case studies, from Amazon and Twitter to Apple and Netflix, to illustrate how economic paradigms are not just theories, but provide important practical insights into how the media operates today. Understanding the economic paradigms at work in media industries and markets is vitally important for the analysis of the media system as a whole. The changing dynamics of media production, distribution and consumption are stretching the capacity of established economic paradigms. In addition to succinct accounts of neo-classical and critical political economics, the text offers fresh perspectives for understanding media drawn from two 'heterodox' approaches: institutional economics and evolutionary economics. Applying these paradigms to vital topics and case studies, Media Economics stresses the value – and limits – of contending economic approaches in understanding how the media operates today. It is essential reading for all students of Media and Communication Studies, and also those from Economics, Policy Studies, Business Studies and Marketing backgrounds who are studying the media.
Table of Contents:
1. Media Economics: The Mainstream Approach
2. Critical Political Economy of the Media
3. Institutional Economics
4. Evolutionary Economics
5. Case Studies and Conclusions
Abstract:
The article begins with a discussion of soft power and creativity in contemporary China. It then examines three development trajectories: territory, technology and taste. The third section examines the effects of taste in more detail through examples of China's creativity in art, philosophy and technology, primarily in three key periods: the Western Zhou, the Han, and the Song. The principal argument is that while China's cultural authority was established on deep Confucian roots, its international influence and its creativity are indebted to periods of openness to ideas.
Abstract:
As the research landscape continues to change with new technologies, advances in data management, and new means, expectations and policies surrounding scholarly communication, the role of the Library and Librarian in supporting research is shifting. At the Queensland University of Technology (QUT), the Library has made a positive impact on the scholarly communication practices of QUT researchers in the last decade in several ways:
1. A university-wide deposit mandate on self-archiving was introduced in 2003. It states that QUT authors must place the author's accepted manuscript version of refereed research articles and conference papers in the digital repository QUT ePrints.
2. Liaison Librarians remind their researchers to self-deposit their accepted manuscript versions of peer-reviewed research outputs into QUT ePrints, and provide training and support when needed.
3. The Library pays author publication fees for true gold road open access publishers, including BioMed Central, the Public Library of Science and Hindawi Press. Liaison Librarians actively assist researchers in the gold road publishing process.
Liaison Librarians play a key role in educating their researchers on university policy and the latest advances in scholarly communication. However, their knowledge and skills related to scholarly communication practices have largely been learnt on the job or self-taught. This poster presents the results of a survey in which QUT Liaison Librarians rated their skills in various practices related to eResearch, including scholarly communication.
Abstract:
Thin bed technology for clay/concrete masonry has been gaining popularity in many parts of the developed world in recent times through active engagement between industry and academia. One of the main drivers for the development of thin bed technology is the progressive contraction of the professional brick and block laying workforce, as the younger generation is not attracted to this profession due to a general perception that manual work is outdated in the modern digital economy. This situation has led to soaring costs of skilled labour, associated with general delays in the completion of construction activities in recent times. In parallel, the advent of manufacturing technologies capable of producing bricks and blocks to specified dimensions and shapes, together with several rapid-setting binders, has also contributed to the development of thin bed technology. Although this technology is still emerging, especially for applications in earthquake-prone regions, field applications have been reported in Germany for several decades and in Italy since the early 2000s. The Australian concrete masonry industry has recently taken a keen interest in pursuing research with a view to developing this technology. This paper presents the background information, including a review of the literature and pilot studies that have been carried out to enable planning of the development of thin bed technology. The paper concludes with recommendations for future research.
Abstract:
Process modeling is an emergent area of Information Systems research that is characterized by an abundance of conceptual work and little empirical research. To fill this gap, this paper reports on the development and validation of an instrument to measure user acceptance of process modeling grammars. We advance an extended model for a multi-stage measurement instrument development procedure, which incorporates feedback from both expert and user panels. We identify two main contributions: first, we provide a validated measurement instrument for the study of user acceptance of process modeling grammars, which can be used to assist further empirical studies that investigate phenomena associated with the business process modeling domain. Second, in doing so, we describe in detail a procedural model for developing measurement instruments that ensures high levels of reliability and validity, which may assist fellow scholars in executing their empirical research.
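As one illustration of the kind of reliability check such an instrument-development procedure involves, the following is a minimal Python sketch computing Cronbach's alpha for a block of Likert-scale items measuring a single construct. The item data and construct name are hypothetical; the paper's actual items and panels are not reproduced here.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: five respondents answering four 7-point items on a
# hypothetical construct such as perceived ease of use of a grammar.
responses = np.array([
    [6, 7, 6, 5],
    [4, 4, 5, 4],
    [7, 6, 7, 7],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # values >= 0.7 are conventionally acceptable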
Abstract:
The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes the hardware implementation of Light Detection And Ranging (LiDAR) terrain and EA population memories within the hardware, as well as the EA search and evaluation algorithms used in the optimizing stage of path planning. A synthesisable Very-high-speed integrated circuit Hardware Description Language (VHDL) implementation of the design was developed, for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited for UAV real-time path planning applications.
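The paper's design is realised in VHDL on an FPGA; purely as an illustration of the underlying search loop, the following is a minimal software sketch of an evolutionary algorithm for waypoint path planning over a gridded terrain with obstacles. The grid size, obstacle set, fitness weights and EA parameters are all illustrative assumptions, not the actual hardware configuration.

```python
# Toy evolutionary path planner: evolve fixed-length waypoint paths that
# minimise path length while avoiding obstacle cells.
import random

GRID = 32                                       # terrain as a GRID x GRID cell map
OBSTACLES = {(10, y) for y in range(5, 28)}     # hypothetical obstacle wall
START, GOAL = (0, 0), (31, 31)
WAYPOINTS, POP, GENS = 6, 40, 200

def random_path():
    return [START] + [(random.randrange(GRID), random.randrange(GRID))
                      for _ in range(WAYPOINTS)] + [GOAL]

def fitness(path):
    # Manhattan path length plus a heavy penalty for waypoints in obstacles.
    length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                 for a, b in zip(path, path[1:]))
    collisions = sum(p in OBSTACLES for p in path)
    return length + 1000 * collisions

def mutate(path):
    child = path[:]
    i = random.randrange(1, len(child) - 1)     # keep start and goal fixed
    child[i] = (random.randrange(GRID), random.randrange(GRID))
    return child

population = [random_path() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    survivors = population[:POP // 2]           # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = min(population, key=fitness)
print("best fitness:", fitness(best))
```

In the hardware realisation described above, the terrain map and population would reside in on-chip memories and the evaluate/select/mutate loop would be pipelined in logic, which is where the reported speedup over a C++ implementation comes from.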
Abstract:
Mobile sensor platforms such as Autonomous Underwater Vehicles (AUVs) and robotic surface vessels, combined with static moored sensors, compose a diverse sensor network that can provide a macroscopic environmental analysis tool for ocean researchers. Working as a cohesive networked unit, the static buoys are always online and provide insight as to the times and locations where a federated, mobile robot team should be deployed to effectively perform large-scale spatiotemporal sampling on demand. Such a system can provide pertinent in situ measurements to marine biologists, who can then advise policy makers on critical environmental issues. This poster presents recent field deployment activity of AUVs demonstrating the effectiveness of our embedded communication network infrastructure throughout southern California coastal waters. We also report on progress towards real-time, web-streaming data from the multiple sampling locations and mobile sensor platforms. The static monitoring sites included in this presentation are the network nodes positioned at Redondo Beach and Marina del Rey. One class of deployed mobile sensors highlighted here is the autonomous Slocum glider. These nodes operate in the open ocean for periods as long as one month. The gliders are connected to the network via a Freewave radio modem network composed of multiple coastal base stations. This increases the efficiency of deployment missions by reducing operational expenses via reduced reliance on satellite phones for communication, as well as increasing the rate and amount of data that can be transferred. Another mobile sensor platform presented in this study is the autonomous robotic boat. These platforms are utilized for harbor and littoral zone studies, and are capable of performing multi-robot coordination while observing known communication constraints. All of these pieces fit together to present an overview of ongoing collaborative work to develop an autonomous, region-wide, coastal environmental observation and monitoring sensor network.
Abstract:
This article investigates virtual reality representations of performance in London’s late sixteenth-century Rose Theatre, a venue that, by means of current technology, can once again challenge perceptions of space, performance, and memory. The VR model of The Rose represents a virtual recreation of this venue in as much detail as possible and attempts to recover graphic demonstrations of the trace memories of the performance modes of the day. The VR model is based on accurate archeological and theatre historical records and is easy to navigate. The introduction of human figures onto The Rose’s stage via motion capture allows us to explore the relationships between space, actor and environment. The combination of venue and actors facilitates a new way of thinking about how the work of early modern playwrights can be stored and recalled. This virtual theatre is thus activated to intersect productively with contemporary studies in performance; as such, our paper provides a perspective on and embodiment of the relation between technology, memory and experience. It is, at its simplest, a useful archiving project for theatrical history, but it is directly relevant to contemporary performance practice as well. Further, it reflects upon how technology and ‘re-enactments’ of sorts mediate the way in which knowledge and experience are transferred, and even what may be considered ‘knowledge.’ Our work provides opportunities to begin addressing what such intermedial confrontations might produce for ‘remembering, experiencing, thinking and imagining.’ We contend that these confrontations will enhance live theatre performance rather than impeding or disrupting contemporary performance practice. Our ‘paper’ is in the form of a video which covers the intellectual contribution while also permitting a demonstration of the interventions we are discussing.
Abstract:
DNA exists predominantly in a duplex form that is preserved via specific base pairing. This base pairing affords a considerable degree of protection against chemical or physical damage and preserves coding potential. However, there are many situations, e.g. during DNA damage and programmed cellular processes such as DNA replication and transcription, in which the DNA duplex is separated into two single-stranded DNA (ssDNA) strands. This ssDNA is vulnerable to attack by nucleases, binding by inappropriate proteins and chemical attack. It is very important to control the generation of ssDNA and to protect it when it forms, and for this reason all cellular organisms and many viruses encode a ssDNA-binding protein (SSB). All known SSBs use an oligonucleotide/oligosaccharide-binding (OB) fold domain for DNA binding. SSBs have multiple roles in binding and sequestering ssDNA, detecting DNA damage, stimulating strand-exchange proteins and helicases, and mediating protein–protein interactions. Recently, two additional human SSBs have been identified that are more closely related to bacterial and archaeal SSBs. Prior to this, it was believed that replication protein A (RPA) was the only human equivalent of bacterial SSB. RPA is thought to be required for most aspects of DNA metabolism, including DNA replication, recombination and repair. This review discusses in further detail the biological pathways in which human SSBs function.
Abstract:
Gay community media functions as a system with three nodes, in which the flows of information and capital theoretically benefit all parties: the gay community gains a sense of cohesion and citizenship through media; the gay media outlets profit from advertisers’ capital; and advertisers recoup their investments in lucrative ‘pink dollar’ revenue. But if a necessary corollary of all communication systems is error or noise, where—and what—are the errors in this system? In this paper we argue that the ‘error’ in the gay media system is Queerness, and that the gay media system ejects (in a process of Kristevan abjection) these Queer identities in order to function successfully. We examine the ways in which Queer identities are excluded from representation in such media through a discourse and content analysis of The Sydney Star Observer (Australia’s largest gay and lesbian paper). First, we analyse the way Queer bodies are excluded from the discourses that construct and reinforce both the ideal gay male body and the notions of homosexual essence required for that body to be meaningful. We then argue that abject Queerness returns in the SSO’s discourses of public health through the conspicuous absence of the AIDS-inflicted body (which we read as the epitome of the abject Queer), since this absence paradoxically conjures up a trace of that which the system tries to expel. We conclude by arguing that because the ‘Queer error’ is integral to the SSO, gay community media should practise a politics of Queer inclusion rather than exclusion.