828 results for Structural strategic sector analysis
Abstract:
This paper seeks to explain the lagging productivity in Singapore’s manufacturing noted in the statements of the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index to measure total factor productivity change and Simar and Wilson’s (J Econ, 136:31–64, 2007) bootstrapped truncated regression approach. In the first stage, nonparametric data envelopment analysis is used to measure technical efficiency. To quantify the economic drivers underlying inefficiencies, the second stage employs a bootstrapped truncated regression whereby bias-corrected efficiency estimates are regressed against explanatory variables. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Most industries were technically inefficient throughout the period, except for ‘Pharmaceutical Products’. Efficiency gains were attributed to worker quality and flexible work arrangements, while the incessant use of foreign workers lowered efficiency.
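The first-stage efficiency measurement described above can be sketched as an input-oriented CCR data envelopment analysis model, solved as one small linear program per decision-making unit (DMU). This is a generic illustration under constant returns to scale, with made-up data; it is not the paper's exact model:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1 .. lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: sum_j lambda_j * y_rj >= y_ro  ->  -(...) <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]])
        scores.append(res.x[0])
    return np.array(scores)

# Two DMUs, one input, one output: the second uses twice the input
# for the same output, so its efficiency should be half.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(dea_ccr_input(X, Y))  # -> approximately [1.0, 0.5]
```

The bias correction and second-stage truncated regression of Simar and Wilson (2007) would then be layered on top of scores like these.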
Abstract:
Decellularization of tissues can provide a unique biological environment for regenerative medicine applications, but only if minimal disruption of their microarchitecture is achieved during the decellularization process. The goal is to keep the structural integrity of such a construct as functional as the tissues from which it was derived. In this work, cartilage-on-bone laminates were decellularized through enzymatic, non-ionic and ionic protocols. This work investigated the effects of the decellularization process on the microarchitecture of the cartilaginous extracellular matrix (ECM), determining the extent to which each process deteriorated the structural organization of the network. High-resolution microscopy was used to capture cross-sectional images of samples prior to and after treatment. The variation in microarchitecture was then analysed using a well-defined fast Fourier image processing algorithm. Statistical analysis of the results revealed how significant the alterations among the aforementioned protocols were (p < 0.05). Ranked by their effectiveness in disrupting the ECM integrity, the treatments were ordered: Trypsin > SDS > Triton X-100.
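A minimal sketch of the kind of Fourier-based image analysis described above: the 2-D power spectrum of a micrograph concentrates energy at the dominant spatial frequency of an aligned fibre network, so changes in that spectrum can index structural disruption. The synthetic image and function name are illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np

def dominant_cycles(img):
    """Estimate the dominant spatial frequency (in cycles per image)
    of a 2-D image from its centred power spectrum."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(mag.shape) // 2
    mag[cy, cx] = 0.0                      # suppress the DC component
    py, px = np.unravel_index(np.argmax(mag), mag.shape)
    return np.hypot(py - cy, px - cx)      # distance from spectrum centre

# Synthetic "aligned fibres": a sinusoidal grating with 4 cycles
# across a 64 x 64 image.
n = 64
x = np.arange(n)
img = np.tile(np.sin(2 * np.pi * 4 * x / n), (n, 1))
print(dominant_cycles(img))  # -> 4.0
```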
Abstract:
The research team recognized the value of network-level Falling Weight Deflectometer (FWD) testing for evaluating the structural condition trends of flexible pavements. However, practical limitations, including the cost of testing, traffic control and safety concerns, and the difficulty of testing a large network, may discourage some agencies from conducting network-level FWD testing. For this reason, the surrogate measure of the Structural Condition Index (SCI) is suggested for use. The main purpose of the research presented in this paper is to investigate data mining strategies and to develop a method of predicting structural condition trends for network-level applications that does not require FWD testing. The research team first evaluated the existing and historical pavement condition, distress, ride, traffic and other data attributes in the Texas Department of Transportation (TxDOT) Pavement Maintenance Information System (PMIS), applied data mining strategies to the data, discovered useful patterns and knowledge for SCI value prediction, and finally provided a reasonable measure of pavement structural condition that is correlated to the SCI. To evaluate the performance of the developed prediction approach, a case study was conducted using SCI data calculated from FWD data collected on flexible pavements over a 5-year period (2005–09) from 354 PMIS sections representing 37 pavement sections on the Texas highway system. The preliminary study results showed that the proposed approach can be used as a supportive pavement structural index when FWD deflection data are not available, and can help pavement managers identify the timing and appropriate treatment level of preventive maintenance activities.
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based ones are available for SHM of structures such as bridges, the use of the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of performing real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors.
But complications arise because AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of sources of AE other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal approval or have been approved as suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification purposes in ductile materials such as steel.
This was found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of its use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
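The b-value analysis mentioned above fits the amplitude distribution of AE hits. One common maximum-likelihood form is Aki's estimator, applied here by converting hit amplitudes in dB to magnitudes as A_dB/20; the synthetic data and the dB-to-magnitude convention are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np

def b_value(amplitudes_db, threshold_db):
    """Maximum-likelihood b-value (Aki's estimator) from AE hit
    amplitudes in dB, with magnitudes taken as A_dB / 20."""
    m = np.asarray(amplitudes_db, dtype=float) / 20.0
    m_min = threshold_db / 20.0
    m = m[m >= m_min]                       # hits above the detection threshold
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic hit amplitudes drawn so the true b-value is 1.0:
# exponential magnitudes with scale log10(e) above the threshold.
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(np.log10(np.e), 20000)
amps_db = 20.0 * mags
print(round(b_value(amps_db, 40.0), 2))  # -> approximately 1.0
```

Falling b-values in successive data windows are commonly read as a sign that larger-amplitude (more severe) emissions are becoming dominant.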
Abstract:
For over half a century, art directors within the advertising industry have been adapting to the changes occurring in media, culture and the corporate sector, toward enhancing professional performance and competitiveness. These professionals seldom offer explicit justification about the role images play in effective communication. It is uncertain how this situation affects advertising performance, because advertising has nevertheless evolved in parallel as an industry able to fabricate new opportunities for itself. However, uncertainties in the formalization of art direction knowledge restrict the possibilities of knowledge transfer in higher education. The theoretical knowledge supporting advertising art direction has been adapted spontaneously from disciplines that rarely focus on specific aspects related to the production of advertising content, such as marketing communication, design, visual communication, and visual art. Meanwhile, in scholarly research, vast empirical knowledge has been generated about advertising images, but often with limited insight into production expertise. Because art direction is understood as an industry practice and not as an academic discipline, an art direction perspective in scholarly contributions is rare. Scholarly research that is relevant to art direction seldom offers viewpoints to help understand how research outputs may specifically contribute to art direction practices. This thesis is dedicated to formally understanding the knowledge underlying art direction and using it to explore models for visual analysis and knowledge transfer in higher education. The first three chapters of this thesis offer, firstly, a review of practical and contextual aspects that help define art direction, as a profession and as a component in higher education; secondly, a discussion of visual knowledge; and thirdly, a literature review of theoretical and analytic aspects relevant to art direction knowledge.
Drawing on these three chapters, this thesis establishes explicit structures to help in the development of an art direction curriculum in higher education programs. Following these chapters, this thesis explores a theoretical combination of the terms ‘aesthetics’ and ‘strategy’ as foundational notions for the study of art direction. The theoretical exploration of the term ‘strategic aesthetics’ unveils the potential for furthering knowledge in visual commercial practices in general. The empirical part of this research explores ways in which strategic aesthetics notions can extend to methodologies of visual analysis. Using a combination of content analysis and structures of interpretive analysis offered in visual anthropology, this research discusses issues of methodological appropriation as it shifts aspects of conventional methodologies to take into consideration research paradigms that are producer-centred. Sampled from 2759 still ads in the online databases of the Cannes Lions Festival, this study uses an instrumental case study of love-related advertising to facilitate the analysis of content. This part of the research helps in understanding the limitations and functionality of the theoretical and methodological framework explored in the thesis. In light of the findings and discussions produced throughout the thesis, this project aims to provide directions for higher education in relation to art direction and highlights potential pathways for further investigation of strategic aesthetics.
Abstract:
Selected chrysocolla mineral samples from different origins have been studied using PXRD, SEM, EDX and XPS. The XRD patterns show that the chrysocolla mineral samples are non-diffracting and that no other phases are present in the minerals, showing that the chrysocolla samples are pure. SEM analyses show the chrysocolla surfaces are featureless. EDX analyses enable the formulae of the chrysocolla samples to be calculated. The thermal decomposition of the mineral chrysocolla has been studied using a combination of thermogravimetric analysis and derivative thermogravimetric analysis. Five thermal decomposition mass loss steps are observed for the chrysocolla from Arizona: (a) at 125 ◦C with the loss of water, (b) at 340 ◦C with the loss of hydroxyl units, (c) at 468.5 ◦C with a further loss of hydroxyls, (d) at 821 ◦C with oxygen loss and (e) at 895 ◦C with a further loss of oxygen. The thermal analysis of the chrysocolla from Congo shows mass losses at 125, 275.3, 805.6 and 877.4 ◦C, and for the Nevada chrysocolla, mass loss steps at 268, 333, 463, 786.0 and 817.7 ◦C are observed. The thermal analysis of spertiniite is very different from that of chrysocolla; it thermally decomposes at around 160 ◦C. XPS shows that there are two different copper species present, one bonded to oxygen and one to a hydroxyl unit. The O 1s peak is broad and very symmetrical, suggesting two oxygen species in equal proportion. The binding energy of 102.9 eV for Si 2p suggests that the silicon is in the form of a silicate; the binding energy for silicas is higher, around ∼103.5 eV, and the reported Si 2p value for silica gel is 103.4 eV. The combination of TG, PXRD, EDX and XPS adds to our fundamental knowledge of the structure of chrysocolla.
Abstract:
Boron–nitrogen containing compounds with high hydrogen contents, as represented by ammonia borane (NH3BH3), have recently attracted intense interest for potential hydrogen storage applications. One such compound is [(NH3)2BH2]B3H8, with a capacity of 18.2 wt% H. Two safe and efficient synthetic routes to [(NH3)2BH2]B3H8 have been developed for the first time since it was discovered 50 years ago. The new synthetic routes avoid a dangerous starting chemical, tetraborane (B4H10), and afford a high yield. Single crystal X-ray diffraction analysis reveals N–Hδ+···Hδ−–B dihydrogen interactions in the [(NH3)2BH2]B3H8·18-crown-6 adduct. Extended strong dihydrogen bonds were observed in pure [(NH3)2BH2]B3H8 through crystal structure solution based upon powder X-ray analysis. Pyrolysis of [(NH3)2BH2]B3H8 leads to the formation of hydrogen gas together with appreciable amounts of volatile boranes below 160 °C.
Abstract:
Operational modal analysis (OMA) is prevalent in the modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires that the excitation be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research employs a modified kurtosis index to evaluate the statistical distribution of raw measurement data. In addition, a windowing strategy employing this index has been proposed to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge have been considered. The analysis incorporated frequency domain decomposition (FDD) as the target OMA approach for modal identification. The modal identification results using data segments with different randomness have been compared. The discrepancy in the FDD spectra of the results indicates that, in order to fulfil the assumptions of an OMA method, special care should be taken when processing long vibration measurement records. The proposed data selection strategy is easy to apply and verified to be effective in modal analysis.
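A minimal sketch of kurtosis-based screening of measurement windows, as a stand-in for the windowing strategy described above. The exact "modified kurtosis" index of the paper is not reproduced here; this uses the plain sample kurtosis, which is close to 3 for Gaussian (white-noise-like) data and much larger for impulsive contamination:

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis; approximately 3 for Gaussian data."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2

def select_windows(signal, win, tol=0.5):
    """Split a record into non-overlapping windows and keep those whose
    kurtosis is within tol of the Gaussian value 3."""
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    return [w for w in windows if abs(kurtosis(w) - 3.0) < tol]

rng = np.random.default_rng(1)
clean = rng.standard_normal(4096)     # ambient-like Gaussian record
spiky = clean.copy()
spiky[::128] += 10.0                  # impulsive (non-Gaussian) contamination
print(len(select_windows(clean, 1024)), len(select_windows(spiky, 1024)))
```

Windows passing the test would then be handed to the OMA step (e.g. FDD); the spiky record fails everywhere because its kurtosis is far above 3.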
Abstract:
The purpose of this study is to elaborate shared schema change theory in the context of the radical restructuring and commercialization of an Australian public infrastructure organization. Commercialization of the case organization imposed high individual and collective cognitive processing and emotional demands as organizational members sought to develop new shared schema. Existing schema change research suggests that radical restructuring renders pre-existing shared schema irrelevant and triggers new schema development through experiential learning (Balogun and Johnson, 2004). Focus groups and semi-structured interviews were conducted at four points over a three-year period. The analysis revealed that shared schema change occurred in three broad phases: (1) radical restructuring and its aftermath; (2) new CEO and new change process schema; and (3) large-group meeting and schema change. Key findings include: (1) radical structural change does not necessarily trigger new shared schema development as indicated in prior research; (2) leadership matters, particularly in framing new means-ends schema; (3) how change leader interventions are sequenced has an important influence on shared schema change; and (4) the creation of facilitated social processes has an important influence on shared schema change.
Abstract:
Keeping abreast of new knowledge is critical to the survival of individual businesses. This study explored the way in which managers of small social service organisations in Queensland identified important new knowledge and brought it into their organisations. New knowledge was found to be highly valued by managers, with key resources allocated to knowledge-seeking processes, particularly in response to regulatory change. Knowledge absorption involved accessing multiple sources, and external professional networks were found to be critical to understanding and integrating new knowledge. The research highlighted the challenges in securing new knowledge and the importance of managers’ professional links.
Abstract:
The number of office building retrofit projects is increasing. These projects are characterised by processes which have a close relationship with waste generation and therefore demand a high level of waste management. In a preliminary study reported separately, we identified seven critical factors of on-site waste generation in office building retrofit projects. Through semi-structured interviews and Interpretive Structural Modelling (ISM), this research further investigated the interrelationships among these critical waste factors, to identify each factor’s level of influence on waste generation and propose effective solutions for waste minimization. “Organizational commitment” was identified as the fundamental issue for waste generation in the ISM system. Factors related to planning, design and construction processes were found to be located in the middle levels of the ISM model but still had significant impacts on the system as a whole. Based on the interview findings and ISM analysis results, some practical solutions were proposed for waste minimization in building retrofit projects: (1) reusable and adaptable fit-out design; (2) a system for as-built drawings and building information; (3) integrated planning for the retrofitting work process and waste management; and (4) waste benchmarking development for retrofit projects. This research will provide a better understanding of waste issues associated with building retrofit projects and facilitate enhanced waste minimization.
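Interpretive Structural Modelling, as used above, orders factors into hierarchical levels from a reachability matrix: a factor sits at the top level when every factor it can reach can also reach it, and levels are peeled off iteratively. A compact sketch with a hypothetical three-factor chain (not the paper's seven waste factors):

```python
import numpy as np

def ism_levels(adj):
    """Partition factors into ISM levels from a binary adjacency matrix.
    adj[i][j] = 1 means factor i directly influences factor j."""
    n = len(adj)
    # Final reachability matrix: transitive closure including self-loops.
    r = np.asarray(adj, dtype=bool) | np.eye(n, dtype=bool)
    for k in range(n):                          # Warshall's algorithm
        r = r | (r[:, [k]] & r[[k], :])
    levels, remaining = [], set(range(n))
    while remaining:
        # Top level: reachability set is contained in the antecedent set.
        level = [i for i in remaining
                 if {j for j in remaining if r[i, j]} <=
                    {j for j in remaining if r[j, i]}]
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Chain: factor 0 drives factor 1, which drives factor 2,
# so 0 is the root cause and 2 the most dependent factor.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(ism_levels(adj))  # -> [[2], [1], [0]]
```

In an ISM diagram the last level returned (here factor 0) forms the base of the hierarchy, analogous to "organizational commitment" in the paper's model.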
Abstract:
The validity of using rainfall characteristics as lumped parameters for investigating pollutant wash-off processes, such as first flush occurrence, is questionable. This research study introduces an innovative concept of using sector parameters to investigate the relationship between the pollutant wash-off process and different sectors of the runoff hydrograph and rainfall hyetograph. The research outcomes indicated that rainfall depth and rainfall intensity are the two key rainfall characteristics which influence the wash-off process, compared to the antecedent dry period. Additionally, the rainfall pattern also plays a critical role in the wash-off process and is independent of the catchment characteristics. The knowledge created through this research study provides the ability to select appropriate rainfall events for stormwater quality treatment design based on the required treatment outcomes, such as the need to target different sectors of the runoff hydrograph or pollutant species. The study outcomes can also contribute to enhancing stormwater quality modelling and prediction, given that conventional approaches to stormwater quality estimation are primarily based on rainfall intensity rather than considering other rainfall parameters, or are based solely on stochastic approaches irrespective of the characteristics of the rainfall event.