207 results for HOMOGENEOUS COPOLYMERS
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates network space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. The key requirement for a well-defined MFD is homogeneity of the area-wide traffic condition, which cannot be universally expected in the real world. For practical application of the MFD concept, several researchers have identified the factors influencing network homogeneity. However, they did not explicitly take into account drivers’ behaviour under real-time information provision, which has a significant impact on the shape of the MFD. This research aims to demonstrate the impact of drivers’ route choice behaviour on network performance by employing the MFD as a measurement. A microscopic simulation is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, as well as by taking different route choice parameters, various scenarios are simulated in order to investigate how drivers’ adaptation to traffic congestion influences network performance and the MFD shape. This study confirmed the impact of information provision on the MFD shape and highlighted the significance of route choice parameter settings as an influencing factor in MFD analysis.
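As a minimal illustration of the quantities on the MFD's two axes, the network space-mean density and flow can be computed from per-link measurements, weighted by link length as is common in MFD studies. The link values below are invented for illustration and are not from the simulations described in the abstract.

```python
# Sketch: one point of the MFD (space-mean density, flow) from per-link data.
# Each link contributes in proportion to its length; values are illustrative.

def mfd_point(links):
    """links: list of (length_km, density_veh_per_km, flow_veh_per_h)."""
    total_len = sum(l for l, _, _ in links)
    density = sum(l * k for l, k, _ in links) / total_len   # veh/km (space-mean)
    flow = sum(l * q for l, _, q in links) / total_len      # veh/h (weighted mean)
    return density, flow

# One snapshot of three hypothetical links in the network
snapshot = [(0.5, 30.0, 900.0), (1.0, 50.0, 1200.0), (0.8, 20.0, 700.0)]
k, q = mfd_point(snapshot)
```

Repeating this over successive simulation intervals traces out the scatter whose shape the study analyses.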
Abstract:
Detailed spectroscopic and chemical investigation of matioliite, including infrared and Raman spectroscopy, scanning electron microscopy and electron probe microanalysis, has been carried out on homogeneous samples from the Gentil pegmatite, Mendes Pimentel, Minas Gerais, Brazil. The chemical composition is (wt.%): FeO 2.20, CaO 0.05, Na2O 1.28, MnO 0.06, Al2O3 39.82, P2O5 42.70, MgO 4.68, F 0.02 and H2O 9.19; total 100.00. The mineral crystallizes in the monoclinic crystal system, C2/c space group, with a = 25.075(1) Å, b = 5.0470(3) Å, c = 13.4370(7) Å, β = 110.97(3)°, V = 1587.9(4) Å3, Z = 4. Raman spectroscopy coupled with infrared spectroscopy supports the concept of phosphate, hydrogen phosphate and dihydrogen phosphate units in the structure of matioliite. Infrared and Raman bands attributed to water and hydroxyl stretching modes are identified. Vibrational spectroscopy adds useful information on the molecular structure of matioliite.
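As a quick consistency check, the reported cell volume follows from the standard monoclinic relation V = a·b·c·sin β applied to the cell parameters quoted above:

```python
import math

# Monoclinic unit-cell volume: V = a * b * c * sin(beta),
# using the matioliite cell parameters reported in the abstract.
a, b, c = 25.075, 5.0470, 13.4370       # in angstroms
beta = math.radians(110.97)             # beta given in degrees
V = a * b * c * math.sin(beta)          # V comes out near the reported 1587.9 A^3
```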
Abstract:
Vibration Based Damage Identification Techniques, which use modal data or their functions, have received significant research interest in recent years due to their ability to detect damage in structures and hence contribute towards structural safety. In this context, Strain Energy Based Damage Indices (SEDIs), based on modal strain energy, have been successful in localising damage in structures made of homogeneous materials such as steel. However, their application to reinforced concrete (RC) structures needs further investigation due to the significant difference in the prominent damage type, the flexural crack. The work reported in this paper is an integral part of a comprehensive research program to develop and apply effective strain energy based damage indices to assess damage in reinforced concrete flexural members. This research program established (i) a suitable flexural crack simulation technique, (ii) four improved SEDIs and (iii) programmable sequential steps to minimise the effects of noise. This paper evaluates and ranks the four newly developed SEDIs and seven existing SEDIs for their ability to detect and localise flexural cracks in RC beams. Based on the results of the evaluations, it recommends SEDIs for use with single and multiple vibration modes.
Abstract:
Knowledge Management (KM) is a process that focuses on knowledge-related activities to facilitate knowledge creation, capture, transformation and use, with the ultimate aim of leveraging organisations’ intellectual capital to achieve organisational objectives. Organisational culture and climate have been identified as major catalysts of knowledge creation and sharing, and hence are considered important dimensions of KM research. The fragmented and hierarchical nature of the construction industry makes it difficult for the industry to operate in a co-ordinated and homogeneous way when dealing with knowledge-related issues such as research and development, training and innovation. The culture and climate of organisations operating within the construction industry are profoundly shaped by the long-established characteristics of the industry, whilst also being influenced by changes within the sector. Meanwhile, the project-based structure of construction organisations poses additional challenges to knowledge production. The study this paper reports on addresses the impact of organisational culture and climate on the intensity of KM activities within construction organisations, with a specific focus on the managerial activities that help to manage these challenges and to facilitate KM. A series of semi-structured interviews was undertaken to investigate the KM activities of contractors operating in Hong Kong. Analysis of the qualitative data revealed that leadership on KM, innovation management, communication management and IT development were key factors that impact positively on KM activities within the organisations under investigation.
Abstract:
Background: The growing proportion of older adults in Australia is predicted to comprise 23% of the population by 2030. Accordingly, an increasing number of older drivers, and of fatal crashes involving these drivers, can also be expected. While the cognitive and physiological limitations of ageing and their road safety implications have been widely documented, research has generally considered older drivers as a homogeneous group. Knowledge of age-related crash trends within the older driver group itself is currently limited. Objective: The aim of this research was to identify age-related differences in serious road crashes of older drivers. This was achieved by comparing crash characteristics between older and younger drivers and between sub-groups of older drivers. Particular attention was paid to serious crashes (crashes resulting in hospitalisation and fatalities) as they place the greatest burden on the Australian health system. Method: Using Queensland crash data, a total of 191,709 crashes of drivers of all ages (17–80+) over a 9-year period were analysed. Crash patterns of drivers aged 17–24, 25–39, 40–49, 50–59, 60–69, 70–79 and 80+ were compared in terms of crash severity (e.g., fatal), at-fault levels, traffic control measures (e.g., stop signs) and road features (e.g., intersections). Crashes of older driver sub-groups (60–69, 70–79, 80+) were also compared to those of middle-aged drivers (40–49 and 50–59 combined, who were identified as the safest driving cohort) with respect to crash-related traffic control features and other factors (e.g., speed). Confounding factors including speed and crash nature (e.g., sideswipe) were controlled for. Results and discussion: Results indicated that patterns of serious crashes, as a function of crash severity, at-fault levels, road conditions and traffic control measures, differed significantly between age groups.
As a group, older drivers (60+) represented the greatest proportion of crashes resulting in fatalities and hospitalisation, as well as those involving uncontrolled intersections and failure to give way. The opposite was found for middle-aged drivers, although they had the highest proportion of alcohol and speed-related crashes when compared to older drivers. Among all older drivers, those aged 60–69 were least likely to be involved in or the cause of crashes, but most likely to crash at interchanges and as a result of driving while fatigued or after consuming alcohol. Drivers aged 70–79 represented a mid-range level of crash involvement and culpability, and were most likely to crash at stop and give way signs. Drivers aged 80 years and beyond were most likely to be seriously injured or killed in, and at-fault for, crashes, and had the greatest number of crashes at both conventional and circular intersections. Overall, our findings highlight the heterogeneity of older drivers’ crash patterns and suggest that age-related differences must be considered in measures designed to improve older driver safety.
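A comparison of crash proportions across age groups of the kind reported above can be sketched with a Pearson chi-square test of independence. The counts below are invented stand-ins, not the Queensland data, and the abstract does not state that this exact test was the one used.

```python
# Sketch: Pearson chi-square test of independence between driver age group
# and crash severity. All counts are illustrative, not the study's data.

def chi_square(table):
    """table: rows = age groups, cols = outcome categories (counts).
    Returns the chi-square statistic and its degrees of freedom."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

counts = [[120, 380],   # hypothetical middle-aged: serious vs other crashes
          [90, 160],    # hypothetical 60-69
          [110, 140]]   # hypothetical 70+
stat, df = chi_square(counts)
# Compare stat against the chi-square critical value for df degrees of freedom.
```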
Abstract:
Speaker diarization is the process of annotating an input audio stream with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracy. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meeting audio.
Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
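The thesis's Bayes-factor criterion itself is not reproduced here, but the standard criterion it builds on, Delta-BIC segmentation with full-covariance multivariate Gaussian speaker models, can be sketched as follows. In practice the feature arrays would come from an acoustic front end such as MFCCs; here they are plain arrays.

```python
import numpy as np

# Sketch: Delta-BIC speaker-change detection with full-covariance Gaussians,
# the common baseline that Bayes-factor approaches refine. A positive value
# favours the hypothesis of a speaker change between segments x and y.

def delta_bic(x, y, lam=1.0):
    """x, y: (n, d) feature arrays for the two hypothesised speaker segments."""
    z = np.vstack([x, y])
    n, d = z.shape

    def logdet(a):
        # log-determinant of the sample covariance of segment a
        _, val = np.linalg.slogdet(np.cov(a, rowvar=False))
        return val

    # Likelihood-ratio term: one Gaussian for all data vs one per segment
    r = 0.5 * (n * logdet(z) - len(x) * logdet(x) - len(y) * logdet(y))
    # BIC complexity penalty for the extra Gaussian (mean + covariance)
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return r - penalty
```

Sliding this criterion along candidate boundaries is the single-threshold style of segmentation that the heuristic rules above are compared against.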
Abstract:
This paper describes an empirical study to test the proposition that all construction contract bidders are homogeneous, i.e., that they can be treated as behaving collectively in an identical (statistical) manner. Examination of previous analyses of bidding data reveals a flaw in the method of standardising bids across contracts of different sizes, and a new procedure is proposed which involves the estimation of a contract datum. Three independent sets of bidding data were then subjected to this procedure and estimates of the necessary distributional parameters obtained. These were then tested against the bidder homogeneity assumption, leading to the conclusion that the assumption may be appropriate for the shape parameter of a three-parameter log-normal distribution, but not for its scale and location parameters.
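If the location of a three-parameter log-normal is fixed in advance (here playing the role of the estimated contract datum), fitting the remaining shape and scale parameters reduces to a normal fit in log space. This is a rough sketch with invented bid values, not the paper's actual procedure or data.

```python
import math

# Sketch: maximum-likelihood fit of a three-parameter log-normal with the
# location (the "contract datum") fixed beforehand. Bid values are invented.

def fit_lognormal3(bids, datum):
    """Fit log(bid - datum) ~ Normal(mu, sigma); returns (mu, sigma)."""
    logs = [math.log(b - datum) for b in bids]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

bids = [1.02, 1.05, 1.08, 1.11, 1.20]   # hypothetical standardised bids
mu, sigma = fit_lognormal3(bids, datum=1.0)
```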
Abstract:
Amphiphilic poly(ethylene glycol)-block-poly(dimethylsiloxane)-block-poly(ethylene glycol) (PEG-block-PDMS-block-PEG) triblock copolymers have been successfully prepared via hydrosilylation using discrete and polydisperse PEG of various chain lengths. Facile synthesis of discrete PEG (dPEG) is achieved via systematic tosylation and etherification of lower glycols. The amphiphilicity of the dPEG-block-PDMS-block-dPEG triblock copolymer is illustrated by dynamic light scattering (DLS) and measurement of the critical micelle concentration (CMC).
Abstract:
This study aims to open up the black box of the boardroom by directly observing directors’ interactions during meetings to better understand board processes. Design/methodology/approach: We analyse videotaped observations of board meetings at two Australian companies to develop insights into what directors do in meetings and how they participate in decision-making processes. The direct observations are triangulated with semi-structured interviews, mini-surveys and document reviews. Findings: Our analyses lead to two key findings: (i) while board meetings appear similar at a surface level, boardroom interactions vary significantly at a deeper level (i.e. board members participate differently during different stages of discussions), and (ii) factors at multiple levels of analysis explain differences in interaction patterns, revealing the complex and nested nature of boardroom discussions. Research implications: By documenting significant intra- and inter-board meeting differences, our study (i) challenges the widespread notion of board meetings as rather homogeneous and monolithic, (ii) points towards agenda items as a new unit of analysis, and (iii) highlights the need for more multi-level analyses in board settings. Practical implications: While policy makers have been largely occupied with the “right” board composition, our findings suggest that decision outcomes or the execution of roles could potentially be affected by interactions at the board level. Differences in board meeting styles might explain prior ambiguous board structure-performance results, reinforcing the need for greater normative consideration of how boards do their work. Originality/value: Our study complements existing research on boardroom dynamics and provides a systematic account of director interactions during board meetings.
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact on feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure change affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
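The homogeneous transform between the stereo pair that the partitioned bundle adjustment optimises can be illustrated as a 4x4 matrix acting on points in homogeneous coordinates. The baseline value below is an arbitrary example, not from the paper's rig.

```python
import numpy as np

# Sketch: the left-to-right camera extrinsics as a 4x4 homogeneous transform,
# the quantity a stereo self-calibration refines online. Values are illustrative.

def homogeneous(R, t):
    """Build a 4x4 transform from a 3x3 rotation R and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform(T, p):
    """Map a 3D point p through T using homogeneous coordinates."""
    ph = np.append(p, 1.0)
    return (T @ ph)[:3]

# Nominal stereo extrinsics: 12 cm baseline along x, no rotation
T_lr = homogeneous(np.eye(3), [0.12, 0.0, 0.0])
p_left = np.array([1.0, 0.5, 4.0])                 # triangulated point, left frame
p_right = transform(np.linalg.inv(T_lr), p_left)   # same point, right frame
```

Because the transform appears explicitly as a parameter block, a bundle adjustment can update it jointly with the vehicle poses, which is the core idea of the recalibration described above.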
Abstract:
Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method is known as the quadratic calibration method and required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. 
This was shown for fields delivered both posteriorly and laterally. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only a single extra calibration measurement and much simpler computation, was the favored method. This new method could provide a complementary tool to existing position verification methods, and it has the advantage that it is completely passive, requiring no further dose to the patient and using only the treatment fields.
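The favored division method amounts to an element-wise ratio of the integrated treatment image and the single phantom calibration image. The small arrays below are illustrative stand-ins for real portal images, constructed so the modulation cancels exactly.

```python
import numpy as np

# Sketch of the simpler "division" calibration: divide the integrated portal
# image, point by point, by one calibration image of the same modulated beam
# delivered through a homogeneous PMMA phantom. Arrays are toy examples.

def divide_calibrate(integrated, calibration, eps=1e-6):
    """Remove the beam's intensity modulation, leaving patient attenuation."""
    return integrated / np.maximum(calibration, eps)  # guard against zeros

beam = np.array([[2.0, 4.0], [1.0, 3.0]])        # modulated fluence (calibration)
patient = np.array([[0.8, 0.9], [0.7, 1.0]])     # patient transmission factors
integrated = beam * patient                       # measured treatment image
anatomy = divide_calibrate(integrated, beam)      # recovers the transmission map
```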
Abstract:
Groundwater flow models are usually characterized as either transient flow models or steady state flow models. Given that steady state groundwater flow conditions arise as the long-time asymptotic limit of a particular transient response, it is natural to seek a finite estimate of the amount of time required for a particular transient flow problem to effectively reach steady state. Here, we introduce the concept of mean action time (MAT) to address a fundamental question: How long does it take for a groundwater recharge or discharge process to effectively reach steady state? This concept relies on identifying a cumulative distribution function, $F(t;x)$, which varies from $F(0;x)=0$ to $F(t;x) \to 1$ as $t\to \infty$, thereby providing a measure of the progress of the system towards steady state. The MAT corresponds to the mean of the associated probability density function $f(t;x) = \dfrac{dF}{dt}$, and we demonstrate that this framework provides useful analytical insight by explicitly showing how the MAT depends on the parameters in the model and the geometry of the problem. Additional theoretical results relating to the variance of $f(t;x)$, known as the variance of action time (VAT), are also presented. To test our theoretical predictions we include measurements from a laboratory-scale experiment describing flow through a homogeneous porous medium. The laboratory data confirm that the theoretical MAT predictions are in good agreement with measurements from the physical model.
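The MAT definition can be made concrete with a small numerical sketch: for a CDF F with F(0) = 0 and F(t) tending to 1, the mean of f = dF/dt equals the integral of 1 - F over time. The exponential transition used below is illustrative only (not the paper's groundwater solution); its MAT is exactly the time constant tau.

```python
import math

# Sketch: mean action time as the mean of f(t) = dF/dt, computed via the
# identity MAT = integral of (1 - F(t)) dt. F(t) = 1 - exp(-t/tau) is an
# assumed transition function for which the exact MAT is tau.

def mat(F, t_max=200.0, n=100000):
    """Trapezoidal integration of (1 - F(t)) from 0 to t_max."""
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * (1.0 - F(i * dt))
    return total * dt

tau = 5.0
m = mat(lambda t: 1.0 - math.exp(-t / tau))   # close to tau
```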
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. One of the key requirements for a well-defined MFD is the homogeneity of the area-wide traffic condition with links of similar properties, which cannot be universally expected in the real world. For practical application of the MFD concept, several researchers have identified the factors influencing network homogeneity. However, they did not explicitly take into account the impact of drivers’ behaviour and information provision, which has a significant impact on simulation outputs. This research aims to demonstrate the effect of dynamic information provision on network performance by employing the MFD as a measurement. A microscopic simulator, AIMSUN, is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, different scenarios are simulated in order to investigate how drivers’ adaptation to traffic congestion influences network performance with respect to the MFD shape as well as other indicators, such as total travel time. This study confirmed the impact of information provision on the MFD shape, and demonstrated the usefulness of the MFD for measuring the benefit of dynamic information provision.
Abstract:
A bulk amount of graphite oxide was prepared by oxidation of graphite using the modified Hummers method, and its ultrasonication in organic solvents yielded graphene oxide (GO). X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Raman and Fourier transform infrared (FTIR) spectroscopy indicated the successful preparation of GO. The XPS survey spectrum of GO revealed the presence of 66.6 at% C and 30.4 at% O. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images showed that the material consists of a large number of graphene oxide platelets with a curled morphology, comprising thin, wrinkled sheet-like structures. AFM imaging of the exfoliated GO indicated that the average thickness of the GO sheets is ~1.0 nm, consistent with monolayer GO. GO/epoxy nanocomposites were prepared by a typical solution mixing technique, and the influence of GO on the mechanical and thermal properties of the nanocomposites was investigated. Regarding mechanical behaviour, 0.5 wt% GO in the nanocomposite achieved the maximum increases in elastic modulus (~35%) and tensile strength (~7%). TEM analysis provided a clear image of the microstructure, showing homogeneous dispersion of GO in the polymer matrix. The improved strength properties of the GO/epoxy nanocomposites can be attributed to the inherent strength of GO, its good dispersion, and the strong interfacial interactions between the GO sheets and the polymer matrix. However, incorporation of GO had a significant negative effect on the composite glass transition temperature (Tg). This may arise from interference of GO with the curing reaction of the epoxy.
Abstract:
Large infrastructure projects are a major responsibility of urban and regional governments, which usually lack the expertise to fully specify the demanded projects. Contractors, typically experts on such projects due to experience with similar projects elsewhere, advise on the needed design in their bids. Producing the right design is nevertheless costly. We model such infrastructure projects taking into account their credence-goods feature and the costly design effort they require, and examine the performance of commonly used contracting methods. We show that when building costs are homogeneous and public information, multi-stage competitive bidding involving the shortlisting of two contractors and contingent compensation of both contractors for design efforts outperforms sequential search and the traditional Design-and-Build approach. If building costs are private information of the contractors and are revealed to them only after design cost is sunk, sequential search may be superior to the other two methods.