946 results for static structure factor


Relevance: 20.00%

Abstract:

The use of animal sera for the culture of therapeutically important cells impedes the clinical use of the cells. We sought to characterize the functional response of human mesenchymal stem cells (hMSCs) to specific proteins known to exist in bone tissue with a view to eliminating the requirement of animal sera. Insulin-like growth factor-I (IGF-I), via IGF binding protein-3 or -5 (IGFBP-3 or -5) and transforming growth factor-beta 1 (TGF-beta(1)) are known to associate with the extracellular matrix (ECM) protein vitronectin (VN) and elicit functional responses in a range of cell types in vitro. We found that specific combinations of VN, IGFBP-3 or -5, and IGF-I or TGF-beta(1) could stimulate initial functional responses in hMSCs and that IGF-I or TGF-beta(1) induced hMSC aggregation, but VN concentration modulated this effect. We speculated that the aggregation effect may be due to endogenous protease activity, although we found that neither IGF-I nor TGF-beta(1) affected the functional expression of matrix metalloprotease-2 or -9, two common proteases expressed by hMSCs. In summary, combinations of the ECM and growth factors described herein may form the basis of defined cell culture media supplements, although the effect of endogenous protease expression on the function of such proteins requires investigation.


The approach of removing greenhouse gases by pumping liquid CO2 several kilometres below the ground implies that many carbonate-containing minerals will be formed. Among these minerals, the formation of dypingite and artinite is possible, necessitating a study of such minerals. Two carbonate-bearing minerals, dypingite and artinite, with hydrotalcite-related formulae have been characterised by a combination of infrared and near-infrared spectroscopy. The infrared spectra of both minerals are characterised by OH and water stretching vibrations. Both the first and second fundamental overtones of these bands are observed in the NIR spectra in the 7030 to 7235 cm-1 and 10490 to 10570 cm-1 regions. Intense (CO3)2- symmetric and antisymmetric stretching vibrations confirm the distortion of the carbonate anion. The position of the water bending vibration indicates that water is strongly hydrogen bonded to the carbonate anion in the mineral structure. Split NIR bands at around 8675 and 11100 cm-1 indicate that some replacement of magnesium ions by ferrous ions has occurred in the mineral structure.


The aggregate structure which occurs in aqueous smectitic suspensions is responsible for poor water clarification, difficulties in sludge dewatering and the unusual rheological behaviour of smectite-rich soils. These macroscopic properties are dictated by the 3-D structural arrangement of the finest smectite fraction within flocculated aggregates. Here, we report results from a relatively new technique, Transmission X-ray Microscopy (TXM), which makes it possible to investigate the internal structure and 3-D tomographic reconstruction of smectite clay aggregates modified by the Al13 Keggin macromolecule [Al13(O)4(OH)24(H2O)12]7+. Three different treatment methods were shown to result in three different micro-structural environments in the resulting flocculated aggregates.


Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs into the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and a goal waypoint. This path is described by a sequence of 4-Dimensional (4D) waypoints (three spatial and one time dimension) or, equivalently, by a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension because the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives, including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*), based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution).
Furthermore, it is shown to be of comparable complexity to multi-objective, vector-neighbourhood-based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell-decomposition-based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which would result in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP. This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres.
The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
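The search machinery that MSA* builds on can be illustrated with a minimal A* sketch. This is not MSA* itself (no multi-objective costs, no 4D waypoints, no additive-consistency machinery): it is plain A* on a toy 10x10 grid with a two-resolution successor operator (single and double steps in eight directions), loosely mimicking the variable successor operator described above. All names and the grid itself are illustrative assumptions.

```python
import heapq
import math

def a_star(start, goal, successors, heuristic):
    """Minimal A*: returns (least-cost path, cost) from start to goal.

    `successors(node)` yields (neighbour, step_cost) pairs; MSA*'s
    variable successor operator generalises this idea to arbitrary
    track angles and step lengths in 4D.
    """
    open_set = [(heuristic(start), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better g
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path)), g
        for nbr, cost in successors(node):
            new_g = g + cost
            if new_g < g_cost.get(nbr, math.inf):
                g_cost[nbr] = new_g
                heapq.heappush(open_set,
                               (new_g + heuristic(nbr), new_g, nbr, node))
    return None, math.inf

def successors(node):
    """8-connected moves at two step lengths (1 and 2 cells), a crude
    stand-in for a multi-resolution successor operator."""
    x, y = node
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            for k in (1, 2):
                nx, ny = x + k * dx, y + k * dy
                if 0 <= nx < 10 and 0 <= ny < 10:
                    yield (nx, ny), k * math.hypot(dx, dy)

path, cost = a_star((0, 0), (9, 9), successors,
                    lambda n: math.hypot(9 - n[0], 9 - n[1]))
```

With a consistent Euclidean heuristic, the first expansion of the goal is optimal, so the returned cost equals the straight diagonal distance 9*sqrt(2); the double-length steps reach it in fewer expansions, which is the efficiency argument made for the multi-resolution lattice.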


This paper proposes the use of the Bayes factor to replace the Bayesian Information Criterion (BIC) as a criterion for speaker clustering within a speaker diarization system. The BIC is one of the most popular decision criteria used in speaker diarization systems today. However, it will be shown in this paper that the BIC is only an approximation to the Bayes factor, the ratio of the marginal likelihoods of the data given each hypothesis. This paper uses the Bayes factor directly as a decision criterion for speaker clustering, thus removing the error introduced by the BIC approximation. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, leading to a 14.7% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
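The BIC-based merge decision that the paper replaces can be sketched as follows. The delta-BIC below is the standard form used in speaker clustering (positive score favours merging two segments into one Gaussian cluster); the paper's actual contribution, scoring merges with the exact Bayes factor, requires priors over the model parameters that the abstract does not specify, so only the baseline criterion is shown. The 1-D Gaussian segment model and the simulated data are illustrative assumptions.

```python
import numpy as np

def delta_bic(x, y, lam=1.0):
    """BIC(merged) - BIC(separate) for two 1-D Gaussian segments.

    Positive values favour merging the segments into a single Gaussian
    cluster; `lam` is the usual BIC penalty weight. This is the
    approximation the paper replaces with the exact log Bayes factor.
    """
    z = np.concatenate([x, y])
    n, n1, n2 = len(z), len(x), len(y)
    # The separate model has 2 extra parameters in 1-D (second mean, variance).
    penalty = 0.5 * lam * 2 * np.log(n)
    return (0.5 * n1 * np.log(np.var(x))
            + 0.5 * n2 * np.log(np.var(y))
            - 0.5 * n * np.log(np.var(z))
            + penalty)

rng = np.random.default_rng(0)
same_a, same_b = rng.normal(0, 1, 200), rng.normal(0, 1, 200)  # one "speaker"
other = rng.normal(5, 1, 200)                                  # a different one
```

Here delta_bic(same_a, same_b) is typically positive (merge), while delta_bic(same_a, other) is strongly negative (keep separate); the residual error in borderline cases is exactly what replacing the BIC with the Bayes factor aims to remove.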


The unusual (1:1) complex ‘adduct’ salt of copper(II) with 4,5-dichlorophthalic acid (H2DCPA), having formula [Cu(H2O)4(C8H3Cl2O4)(C8H4Cl2O4)]·(C8H3Cl2O4), has been synthesized and characterized using single-crystal X-ray diffraction. Crystals are monoclinic, space group P21/c, with Z = 4 in a cell with dimensions a = 20.1376(7), b = 12.8408(4), c = 12.1910(4) Å, β = 105.509(4)°. The complex is based on discrete tetragonally distorted octahedral [CuO6] coordination centres with the four water ligands occupying the square-planar sites [Cu-O, 1.962(4)-1.987(4) Å] and the monodentate carboxyl-O donors of two DCPA ligand species in the axial sites. The first of these bonds [Cu-O, 2.341(4) Å] is with an oxygen of a HDCPA monoanion, the second with an oxygen of a H2DCPA acid species [Cu-O, 2.418(4) Å]. The un-coordinated ‘adduct’ molecule is a HDCPA counter-anion which is strongly hydrogen-bonded to the coordinated H2DCPA ligand [O···O, 2.503(6) Å], while a number of peripheral intra- and intermolecular hydrogen-bonding interactions give a two-dimensional network structure.


Purpose: To evaluate the psychometric properties of a Chinese version of the Diabetes Coping Measure (DCM-C) scale. Methods: A self-administered questionnaire was completed by 205 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Confirmatory factor analysis, criterion validity and internal consistency reliability were used to evaluate the psychometric properties of the DCM-C. Findings: Confirmatory factor analysis confirmed a four-factor structure (χ2/df ratio = 1.351, GFI = .904, CFI = .902, RMSEA = .041). The DCM-C was significantly associated with HbA1c and diabetes self-care behaviors. Internal consistency reliability of the total DCM-C scale was .74. Cronbach’s alpha coefficients for each subscale of the DCM-C ranged from .37 (tackling spirit) to .66 (diabetes integration). Conclusions: The DCM-C demonstrated satisfactory reliability and validity for determining the use of diabetes coping strategies. The tackling spirit dimension needs further refinement when applying this scale to Chinese populations with diabetes. Clinical Relevance: Healthcare providers who care for Chinese people with diabetes can use the DCM-C for early determination of diabetes coping strategies.
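The internal consistency figures quoted above are Cronbach's alpha values, which can be computed from an item-score matrix as follows. The data here are simulated, since the DCM-C item responses themselves are not reproduced in the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 4-item subscale: each item = shared true score + noise, so
# the items correlate and alpha is high; unrelated items give alpha near 0.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(size=(300, 4))
alpha = cronbach_alpha(items)
```

With equal true-score and noise variances the expected alpha here is about 0.8; a low value such as the .37 reported for the tackling spirit subscale signals that its items share little common variance, which is why that dimension needs refinement.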


The crystal structure of the modified, unsymmetrically N,N'-substituted viologen chromophore N-ethyl-N'-(2-phosphonoethyl)-4,4'-bipyridinium dichloride 0.75 hydrate (1) has been determined. Crystals are triclinic, space group P-1, with Z = 2 in a cell with a = 7.2550(1), b = 13.2038(5), c = 18.5752(7) Å, α = 86.495(3), β = 83.527(2), γ = 88.921(2)°. The two independent but pseudo-symmetrically related cations in the asymmetric unit form one-dimensional hydrogen-bonded chains through short homomeric phosphonic acid O-H···O links [2.455(4), 2.464(4) Å], while two of the chloride anions are similarly strongly linked to phosphonic acid groups [O-H···Cl, 2.889(4), 2.896(4) Å]. The other two chloride anions, together with the two water molecules of solvation (one with partial occupancy), form unusual cyclic hydrogen-bonded bis(Cl···water) dianion units which lie between the layers of bipyridylium rings of the cation chain structures, with which they are weakly associated.


With the emergence of multi-cores into the mainstream, there is a growing need for systems to allow programmers and automated systems to reason about data dependencies and inherent parallelism in imperative object-oriented languages. In this paper we exploit the structure of object-oriented programs to abstract computational side-effects. We capture and validate these effects using a static type system. We use these as the basis of sufficient conditions for several different data and task parallelism patterns. We complement our static type system with a lightweight runtime system to allow for parallelization in the presence of complex data flows. We have a functioning compiler and worked examples to demonstrate the practicality of our solution.
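The core check behind such effect systems, that two tasks may run in parallel only if neither writes state the other reads or writes, can be sketched dynamically. The paper infers effect sets statically through its type system; here they are declared by hand, and the task names and locations are purely illustrative.

```python
def may_run_in_parallel(e1, e2):
    """Two tasks commute if neither writes a location the other touches."""
    return (not (e1["writes"] & (e2["reads"] | e2["writes"]))
            and not (e2["writes"] & (e1["reads"] | e1["writes"])))

# Hand-declared effect summaries for three hypothetical tasks; the
# paper's type system would infer such summaries from the code itself.
effects = {
    "blur_left":  {"reads": {"img.left"},  "writes": {"out.left"}},
    "blur_right": {"reads": {"img.right"}, "writes": {"out.right"}},
    "histogram":  {"reads": {"out.left"},  "writes": {"stats"}},
}

def schedule(task_names, effects):
    """Greedily group tasks into batches whose effects pairwise commute;
    batches run one after another, tasks within a batch in parallel."""
    batches = []
    for name in task_names:
        for batch in batches:
            if all(may_run_in_parallel(effects[name], effects[other])
                   for other in batch):
                batch.append(name)
                break
        else:
            batches.append([name])
    return batches

batches = schedule(["blur_left", "blur_right", "histogram"], effects)
```

The two blurs touch disjoint state and land in one batch (a data-parallelism pattern), while the histogram reads what blur_left writes and is deferred to a second batch, mirroring the sufficient conditions the type system is used to establish.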


Road curves are an important feature of road infrastructure, and many serious crashes occur on them. In Queensland, the number of fatalities on curves is twice that on straight roads. There is therefore a need to reduce drivers’ exposure to crash risk on road curves. Road crashes in Australia and in the Organisation for Economic Co-operation and Development (OECD) plateaued in the last five years (2004 to 2008), and the road safety community is urgently seeking innovative interventions to reduce the number of crashes. However, designing an innovative and effective intervention may prove difficult, as it relies on providing a theoretical foundation, coherence, understanding and structure to both the design of the new intervention and the validation of its efficiency. Researchers from multiple disciplines have developed various models to determine the contributing factors for crashes on road curves with a view towards reducing the crash rate. However, most existing methods are based on statistical analysis of contributing factors described in government crash reports. In order to further explore the contributing factors related to crashes on road curves, this thesis designs a novel method to analyse and validate these contributing factors. The use of crash claim reports from an insurance company is proposed for analysis using data mining techniques. To the best of our knowledge, this is the first attempt to use data mining techniques to analyse crashes on road curves. A text mining technique is employed because the reports consist of thousands of textual descriptions, from which text mining is able to identify the contributing factors. Beyond identifying the contributing factors, few studies to date have investigated the relationships between these factors, especially for crashes on road curves. Thus, this study proposes the use of the rough set analysis technique to determine these relationships.
The results from this analysis are used to assess the effect of these contributing factors on crash severity. The findings obtained through the use of data mining techniques presented in this thesis have been found to be consistent with existing identified contributing factors. Furthermore, this thesis has identified new contributing factors towards crashes and the relationships between them. A significant pattern related to crash severity is the time of day: severe road crashes occur more frequently in the evening or at night. Tree collision is another common pattern: crashes that occur in the morning and involve hitting a tree are likely to have a higher severity. Another factor that influences crash severity is the age of the driver. Most age groups face a high crash severity except for drivers between 60 and 100 years old, who have the lowest crash severity. The significant relationship identified between contributing factors consists of the time of the crash, the year of manufacture of the vehicle, the age of the driver and hitting a tree. Having identified new contributing factors and relationships, a validation process is carried out using a traffic simulator in order to determine their accuracy. The validation process indicates that the results are accurate. This demonstrates that data mining techniques are a powerful tool in road safety research and can be usefully applied within the Intelligent Transport System (ITS) domain. The research presented in this thesis provides an insight into the complexity of crashes on road curves. The findings of this research have important implications for both practitioners and academics. For road safety practitioners, the results from this research illustrate practical benefits for the design of interventions for road curves that will potentially help in decreasing related injuries and fatalities.
For academics, this research opens up a new research methodology for assessing the severity of crashes on road curves.
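The rough set machinery used above to relate contributing factors can be illustrated with lower and upper approximations of a target concept (here, "severe crash"). The records are invented, since the insurance claim reports are not public; the attributes echo the factors named in the abstract.

```python
from collections import defaultdict

def rough_approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of `target` under the
    indiscernibility relation induced by `attrs`: objects with identical
    attribute tuples fall into the same equivalence class."""
    classes = defaultdict(set)
    for oid, desc in objects.items():
        classes[tuple(desc[a] for a in attrs)].add(oid)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:     # class entirely inside the concept
            lower |= eq_class      # certainly severe
        if eq_class & target:      # class overlaps the concept
            upper |= eq_class      # possibly severe
    return lower, upper

# Invented crash records described by (time of day, tree collision).
crashes = {
    1: {"time": "night", "hit_tree": True},
    2: {"time": "night", "hit_tree": True},
    3: {"time": "day",   "hit_tree": False},
    4: {"time": "night", "hit_tree": False},
}
severe = {1, 4}
lower, upper = rough_approximations(crashes, ("time", "hit_tree"), severe)
```

Crashes 1 and 2 are indiscernible yet only one is severe, so they sit in the boundary region (upper minus lower); a non-empty boundary is exactly the signal that the chosen attributes cannot fully determine severity, which is how rough set analysis exposes which factor combinations matter.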


The focus of this thesis is discretionary work effort, that is, work effort that is voluntary, is above and beyond what is minimally required or normally expected to avoid reprimand or dismissal, and is organisationally functional. Discretionary work effort is an important construct because it is known to affect individual performance as well as organisational efficiency and effectiveness. To optimise organisational performance and ensure their long term competitiveness and sustainability, firms need to be able to induce their employees to work at or near their peak level. To work at or near their peak level, individuals must be willing to supply discretionary work effort. Thus, managers need to understand the determinants of discretionary work effort. Nonetheless, despite many years of scholarly investigation across multiple disciplines, considerable debate still exists concerning why some individuals supply only minimal work effort whilst others expend effort well above and beyond what is minimally required of them (i.e. they supply discretionary work effort). Even though it is well recognised that discretionary work effort is important for promoting organisational performance and effectiveness, many authors claim that too little is being done by managers to increase the discretionary work effort of their employees. In this research, I have adopted a multi-disciplinary approach towards investigating the role of monetary and non-monetary work environment characteristics in determining discretionary work effort. My central research questions were "What non-monetary work environment characteristics do employees perceive as perks (perquisites) and irks (irksome work environment characteristics)?" and "How do perks, irks and monetary rewards relate to an employee's level of discretionary work effort?" My research took a unique approach in addressing these research questions.
By bringing together the economics and organisational behaviour (OB) literatures, I identified problems with the current definition and conceptualisations of the discretionary work effort construct. I then developed and empirically tested a more concise and theoretically-based definition and conceptualisation of this construct. In doing so, I disaggregated discretionary work effort to include three facets - time, intensity and direction - and empirically assessed if different classes of work environment characteristics have a differential pattern of relationships with these facets. This analysis involved a new application of a multi-disciplinary framework of human behaviour as a tool for classifying work environment characteristics and the facets of discretionary work effort. To test my model of discretionary work effort, I used a public sector context in which there has been limited systematic empirical research into work motivation. The program of research undertaken involved three separate but interrelated studies using mixed methods. Data on perks, irks, monetary rewards and discretionary work effort were gathered from employees in 12 organisations in the local government sector in Western Australia. Non-monetary work environment characteristics that should be associated with discretionary work effort were initially identified through a review of the literature. Then, a qualitative study explored what work behaviours public sector employees perceive as discretionary and what perks and irks were associated with high and low levels of discretionary work effort. Next, a quantitative study developed measures of these perks and irks. A Q-sort-type procedure and exploratory factor analysis were used to develop the perks and irks measures. Finally, a second quantitative study tested the relationships amongst perks, irks, monetary rewards and discretionary work effort.
Confirmatory factor analysis was firstly used to confirm the factor structure of the measurement models. Correlation analysis, regression analysis and effect-size correlation analysis were used to test the hypothesised relationships in the proposed model of discretionary work effort. The findings confirmed five hypothesised non-monetary work environment characteristics as common perks and two of three hypothesised non-monetary work environment characteristics as common irks. Importantly, they showed that perks, irks and monetary rewards are differentially related to the different facets of discretionary work effort. The convergent and discriminant validities of the perks and irks constructs as well as the time, intensity and direction facets of discretionary work effort were generally confirmed by the research findings. This research advances the literature in several ways: (i) it draws on the Economics and OB literatures to redefine and reconceptualise the discretionary work effort construct to provide greater definitional clarity and a more complete conceptualisation of this important construct; (ii) it builds on prior research to create a more comprehensive set of perks and irks for which measures are developed; (iii) it develops and empirically tests a new motivational model of discretionary work effort that enhances our understanding of the nature and functioning of perks and irks and advances our ability to predict discretionary work effort; and (iv) it fills a substantial gap in the literature on public sector work motivation by revealing what work behaviours public sector employees perceive as discretionary and what work environment characteristics are associated with their supply of discretionary work effort. Importantly, by disaggregating discretionary work effort this research provides greater detail on how perks, irks and monetary rewards are related to the different facets of discretionary work effort. 
Thus, from a theoretical perspective this research also demonstrates the conceptual meaningfulness and empirical utility of investigating the different facets of discretionary work effort separately. From a practical perspective, identifying work environment factors that are associated with discretionary work effort enhances managers' capacity to tap this valuable resource. This research indicates that to maximise the potential of their human resources, managers need to address perks, irks and monetary rewards. It suggests three different mechanisms through which managers might influence discretionary work effort and points to the importance of training for both managers and non-managers in cultivating positive interpersonal relationships.


Recent measurements of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for the evolution of combustion aerosols. Specifically, these mechanisms appear to be inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This resulted in the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Due to the presence of strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles containing presumably several components such as water, oil, volatile compounds and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems which include two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is discussed and examined for different volumes of the immiscible fluids in a composite liquid particle and for their surface and interfacial tensions, through the determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, it was found that thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle. It is shown that the probability of thermal fragmentation is also strongly dependent on the distance that each of the liquid droplets must travel to reach the fragmented state.
In particular, if this distance is larger than the mean free path for the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows that fragmentation of a composite particle in the state of complete encapsulation is highly unlikely because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols, transformation could take place on time scales of less than 20 s. The analysis showed that factors that influence surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for the further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
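The kind of Gibbs free energy comparison described above can be sketched for the simplest case: two spherical droplets either fully separated in air or in the completely encapsulated state. The surface tensions, droplet radius and temperature below are illustrative water/oil-like values; partial encapsulation, which the thesis finds more fragmentation-prone, requires the full liquid-lens geometry and is omitted here.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_area(volume):
    """Surface area of a sphere with the given volume: (36*pi)^(1/3) V^(2/3)."""
    return (36 * math.pi) ** (1 / 3) * volume ** (2 / 3)

def delta_g_fragmentation(v1, v2, gamma1, gamma2, gamma12):
    """Surface-energy difference (J): fragmented state (two separate
    droplets in air) minus complete encapsulation of liquid 1 inside
    liquid 2. Positive values mean fragmentation raises the Gibbs free
    energy and is thermodynamically unfavourable."""
    e_fragmented = gamma1 * sphere_area(v1) + gamma2 * sphere_area(v2)
    e_encapsulated = (gamma12 * sphere_area(v1)            # inner interface
                      + gamma2 * sphere_area(v1 + v2))     # outer surface
    return e_fragmented - e_encapsulated

# Illustrative water-in-oil nano-droplet: 10 nm radius each; tensions
# (N/m) roughly water/air 0.072, oil/air 0.030, water/oil 0.050.
v = 4 / 3 * math.pi * (10e-9) ** 3
dg = delta_g_fragmentation(v, v, 0.072, 0.030, 0.050)
ratio = dg / (K_B * 300)   # compare with the thermal energy kT at 300 K
```

For these parameters the energy cost of fragmenting the encapsulated particle is hundreds of kT, so thermal fragmentation from complete encapsulation is negligible, consistent with the conclusion above; only parameter sets that bring |ΔG| down to a few kT (and short separation distances) make fragmentation plausible.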


This paper analyzes the common factor structure of US, German, and Japanese Government bond returns. Unlike previous studies, we formally take into account the presence of country-specific factors when estimating common factors. We show that the classical approach of running a principal component analysis on a multi-country dataset of bond returns captures both local and common influences and therefore tends to pick too many factors. We conclude that US bond returns share only one common factor with German and Japanese bond returns. This single common factor is associated most notably with changes in the level of domestic term structures. We show that accounting for country-specific factors improves the performance of domestic and international hedging strategies.