948 results for Uniform ergodicity
Abstract:
The aim of this study was to elucidate the thermophysiological effects of wearing lightweight non-military overt and covert personal body armour (PBA) in a hot and humid environment. Eight healthy males walked on a treadmill for 120 min at 22% of their heart rate reserve in a climate chamber simulating 31 °C (60% RH), wearing either no armour (control), overt PBA or covert PBA in addition to a security guard uniform, in a randomised controlled crossover design. No significant difference between conditions at the end of each trial was observed in core temperature, heart rate or skin temperature (P > 0.05). Covert PBA produced a significantly greater body mass change (−1.81 ± 0.44%) compared to the control (−1.07 ± 0.38%, P = 0.009) and overt (−1.27 ± 0.44%, P = 0.025) conditions. Although a greater change in body mass was observed after the covert PBA trial, based on the physiological outcome measures recorded, the heat strain encountered while wearing lightweight, non-military overt or covert PBA was negligible compared to no PBA. Practitioner summary: The wearing of bulletproof vests or body armour is a requirement of personnel engaged in a wide range of occupations, including police, security, customs and even journalists in theatres of war. This randomised controlled crossover study is the first to examine the thermophysiological effects of wearing lightweight non-military overt and covert PBA in a hot and humid environment. We conclude that the heat strain encountered while wearing either overt or covert lightweight, non-military PBA was negligible compared to no PBA.
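The study's two quantitative anchors are simple to compute. A minimal sketch follows; the resting and maximum heart rates are illustrative assumptions (only the 22% of heart rate reserve intensity comes from the abstract), and the Karvonen formula is the standard way such a target is set, not necessarily the authors' exact procedure:

```python
def target_hr(hr_rest: float, hr_max: float, intensity: float) -> float:
    """Karvonen formula: target HR at a given fraction of heart rate reserve."""
    return hr_rest + intensity * (hr_max - hr_rest)

def body_mass_change_pct(mass_pre_kg: float, mass_post_kg: float) -> float:
    """Percentage body mass change (negative = loss, as reported in the trial)."""
    return (mass_post_kg - mass_pre_kg) / mass_pre_kg * 100.0

# Assumed values: resting HR 60 bpm, max HR 190 bpm, walking at 22% of HRR
print(target_hr(60, 190, 0.22))          # ≈ 88.6 bpm
print(body_mass_change_pct(80.0, 78.8))  # ≈ -1.5 %
```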
Abstract:
This paper presents a numerical model for understanding particle transport and deposition in metal foam heat exchangers. Two-dimensional steady and unsteady numerical simulations of a standard single-row metal foam-wrapped tube bundle are performed for different particle size distributions, i.e. uniform and normal distributions. The effects of different particle sizes and fluid inlet velocities on the overall particle transport inside and outside the foam layer are also investigated. It was noted that a simplification made in previously published numerical works in the literature, e.g. uniform particle deposition in the foam, is not necessarily accurate, at least for the cases considered here. The results highlight the preferential particle deposition areas both along the tube walls and inside the foam using a particle deposition likelihood matrix developed for this purpose, based on three criteria: local particle velocity, time spent in the foam, and volume fraction. It was noted that the particles tend to deposit near both the front and rear stagnation points. The former is explained by the higher momentum and direct exposure of the particles to the foam, while the latter only accommodates small particles which can be entrained in the recirculation region formed behind the foam-wrapped tubes.
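The three-criteria likelihood matrix can be illustrated with a short sketch. The equal weighting, the min-max normalisation, and the field names are assumptions for illustration, not the paper's actual formulation; the sketch only encodes the stated idea that low local velocity, long residence time and high volume fraction all favour deposition:

```python
import numpy as np

def deposition_likelihood(velocity, residence_time, volume_fraction):
    """Score each cell on three criteria: deposition is more likely where local
    velocity is low, residence time is long, and particle volume fraction is
    high. Each field is min-max normalised to [0, 1]; the equal-weight average
    is an assumption, not the paper's calibrated matrix."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    score = (1.0 - norm(velocity)) + norm(residence_time) + norm(volume_fraction)
    return score / 3.0

# Toy 2x2 field: cell (0,1) has lowest velocity, longest residence time and
# highest volume fraction, so it receives the top likelihood score.
v  = [[2.0, 0.1], [1.0, 1.5]]
t  = [[0.1, 2.0], [0.5, 0.2]]
vf = [[0.01, 0.20], [0.05, 0.02]]
print(deposition_likelihood(v, t, vf))
```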
Abstract:
Current design rules for the member capacities of cold-formed steel columns are based on the same non-dimensional strength curve for both fixed- and pinned-ended columns at ambient temperature. This research investigated the accuracy of using the current ambient-temperature design rules in the Australian/New Zealand (AS/NZS 4600), American (AISI S100) and European (Eurocode 3 Part 1.3) standards to determine the flexural–torsional buckling capacities of cold-formed steel columns at uniform elevated temperatures using appropriately reduced mechanical properties. It was found that these design rules accurately predicted the member capacities of pin-ended lipped channel columns undergoing flexural–torsional buckling at elevated temperatures. However, for fixed-ended columns with warping fixity undergoing flexural–torsional buckling, the current design rules significantly underestimated the column capacities because they disregard the beneficial effect of warping fixity. This paper therefore recommends the use of improved design rules, developed for ambient-temperature conditions, to predict the axial compression capacities of fixed-ended columns subject to flexural–torsional buckling at elevated temperatures within the AS/NZS 4600 and AISI S100 design provisions. The accuracy of the proposed fire design rules was verified using finite element analysis and test results of cold-formed lipped channel columns at elevated temperatures, except for low-strength steel columns with intermediate slenderness, whose behaviour was influenced by the increased nonlinearity of the stress–strain curves at elevated temperatures. Further research is required to include these effects within the AS/NZS 4600 and AISI S100 design rules. However, the Eurocode 3 Part 1.3 design rules can be used for this purpose by adopting suitable buckling curves as recommended in this paper.
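The approach the abstract tests, reusing the ambient column curve with reduced elevated-temperature properties, can be sketched as follows. The column curve is the AISI S100 / AS/NZS 4600 nondimensional strength curve; the reduction factors, the use of gross rather than effective area, and the numerical values in the example are illustrative assumptions:

```python
import math

def nominal_buckling_stress(fy, fe):
    """AISI S100 / AS-NZS 4600 column curve.
    fy: yield stress; fe: least elastic buckling stress (e.g. flexural-torsional).
    Fn = 0.658^(lam^2) * fy for lam <= 1.5, else (0.877 / lam^2) * fy."""
    lam = math.sqrt(fy / fe)
    if lam <= 1.5:
        return (0.658 ** (lam ** 2)) * fy
    return (0.877 / lam ** 2) * fy

def capacity_at_temperature(fy20, fe20, area, k_y, k_e):
    """Ambient rules reused at elevated temperature with reduced properties.
    k_y, k_e: assumed yield-stress and elastic-modulus reduction factors;
    fe20 (ambient elastic buckling stress) is scaled by k_e since Fe ~ E.
    Gross area is used here for simplicity (the standards use effective area)."""
    fn = nominal_buckling_stress(k_y * fy20, k_e * fe20)
    return area * fn

# Illustrative only: a 500 mm^2 section, 450 MPa steel, with assumed
# reduction factors at an elevated temperature.
print(capacity_at_temperature(fy20=450, fe20=300, area=500, k_y=0.5, k_e=0.6) / 1e3, "kN")
```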
Abstract:
Traditionally, the fire resistance rating of light gauge steel frame (LSF) wall systems is based on approximate prescriptive methods developed using a limited number of fire tests. These fire tests are conducted using the standard fire time-temperature curve given in ISO 834. In recent times, however, fire has become a major hazard in buildings due to the increase in fire loads caused by modern furniture and lightweight construction, which make use of thermoplastic materials, synthetic foams and fabrics. Therefore a detailed research study into the performance of load-bearing LSF wall systems under both standard and realistic design fires on one side was undertaken to develop improved fire design rules. This study included both full-scale fire tests and numerical studies of eight different LSF wall systems, conducted for both the standard fire curve and recently developed realistic design fire curves. The use of previous fire design rules developed for LSF walls subjected to non-uniform elevated temperature distributions, based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed, with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in fire design was also investigated and their effects were included. A spreadsheet-based design tool was developed from the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the fire test and finite element analysis results for various wall configurations, steel grades, thicknesses and load ratios under both standard and realistic design fire conditions.
A simplified method was also proposed to predict the fire resistance rating of LSF walls based on two sets of equations developed for the load ratio-hot flange temperature and the time-temperature relationships. This paper presents the details of this study on LSF wall systems under fire conditions and the results.
Abstract:
Cold-formed steel members are widely used in load-bearing light gauge steel frame (LSF) wall systems with plasterboard linings on both sides. However, these thin-walled steel sections heat up quickly and lose their strength under fire conditions despite the protection provided by the plasterboards. Hence there is a need for simple fire design rules to predict their load capacities and fire resistance ratings. During fire events, LSF wall studs are subjected to non-uniform temperature distributions that cause thermal bowing, neutral axis shift and magnification effects, resulting in combined axial compression and bending actions on the studs. In this research, a series of full-scale fire tests was first conducted to evaluate the performance of LSF wall systems with eight different wall configurations under standard fire conditions. Finite element models of LSF walls were then developed, analysed under transient and steady-state conditions, and validated using the full-scale fire tests. Using the results from the fire tests and finite element analyses, a detailed investigation was undertaken into the prediction of the axial compression strengths and failure times of LSF wall studs in standard fires using the available fire design rules based on Australian, American and European standards. The results from both fire tests and finite element analyses were used to assess the ability of these fire design rules to capture the complex effects of non-uniform temperature distributions and their accuracy in predicting the axial compression strengths of wall studs and the failure times. Suitable modifications were then proposed to the fire design rules. This paper presents the details of this investigation into the accuracy of the currently available fire design rules for LSF walls and the results.
Abstract:
This study demonstrates a novel technique of preparing drug colloid probes to determine the adhesion force between a model drug, salbutamol sulphate (SS), and the surfaces of polymer microparticles to be used as carriers for the dispersion of drug particles from dry powder inhaler (DPI) formulations. Model silica probes of approximately 4 µm size, similar to a drug particle used in DPI formulations, were coated with a saturated SS solution with the aid of capillary forces acting between the silica probe and the drug solution. The developed method of ensuring a smooth and uniform layer of SS on the silica probe was validated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). Using the same technique, silica microspheres pre-attached to the AFM cantilever were coated with SS. The adhesion forces between the silica probe or drug-coated silica (drug probe) and polymer surfaces (hydrophilic and hydrophobic) were determined. Our experimental results showed that the technique for preparing the drug probe is robust and can be used to determine the adhesion force between hydrophilic/hydrophobic drug probes and carrier surfaces, giving a better understanding of drug-carrier adhesion forces in DPI formulations.
Abstract:
Background & Research Focus: Managing knowledge for innovation and organisational benefit has been extensively investigated in studies of large firms (Smith, Collins & Clark, 2005; Zucker et al., 2007), but research on small- and medium-sized enterprises (SMEs) remains limited. There are some investigations of knowledge management in SMEs, but the question of where the potential challenges lie for managing knowledge more effectively within these firms remains open. Effective knowledge management (KM) processes and systems lead to improved performance in pursuing distinct capabilities that contribute to firm-level innovation (Nassim 2009; Zucker et al. 2007; Verona and Ravasi 2003). Managing internal and external knowledge in a way that links it closely to the innovation process can assist the creation and implementation of new products and services. KM is particularly important in knowledge-intensive firms where the knowledge requirements are highly specialised, diverse and often emergent. However, the KM processes of small firms, which are often the source of new knowledge and an important element of the value networks of larger companies, have not been closely studied. To address this gap, which is of increasing importance given the growing number of small firms, we need to further investigate knowledge management processes and the ways that firms find, capture, apply and integrate knowledge from multiple sources for their innovation process. This study builds on the previous literature, applies existing frameworks, and takes the process and activity view of knowledge management as its point of departure (see among others Kraaijenbrink, Wijnhoven & Groen, 2007; Enberg, Lindkvist, & Tell, 2006; Lu, Wang & Mao, 2007).
In this paper, we attempt to develop a better understanding of the challenges of knowledge management within the innovation process in small knowledge-oriented firms. The paper aims to explore knowledge management processes and practices in firms engaged in new product/service development programs. Consistent with the exploratory character of the study, the research question is: how is knowledge integrated, sourced and recombined from internal and external sources for innovation and new product development? Research Method: The research took an exploratory case study approach and developed a theoretical framework to investigate the knowledge situation of knowledge-intensive firms. Equipped with this conceptual foundation, the research adopted a multiple case study method investigating four diverse Australian knowledge-intensive firms from the IT, biotechnology, nanotechnology and biochemistry industries. The multiple case study method allowed us to document in some depth the knowledge management experience of these firms. Case study data were collected through a review of company published data and semi-structured interviews with managers, using an interview guide to ensure uniform coverage of the research themes. This interview guide was developed after the framework, drawing on a review of the methodologies and issues covered by similar studies in other countries, and used some questions common to those studies. It was framed to gather data about knowledge management activity within the business, focusing on the identification, acquisition and utilisation of knowledge, while also collecting a range of contextual information. The focus of the case studies was on the use of external and internal knowledge to support the firms' knowledge-intensive products and services. Key Findings: First, a conceptual and strategic knowledge management framework was developed.
The knowledge determinants are related to the nature of knowledge, the organisational context, and the mechanisms linking internal and external knowledge. Overall, a number of key observations derived from this study demonstrate the challenges of managing knowledge and how important KM is as a management tool for the innovation process in knowledge-oriented firms. To summarise, the findings suggest that the knowledge management process in these firms is very much project-focused, not embedded within the overall organisational routines, and based mainly on ad hoc and informal processes. Our findings highlighted the lack of a formal knowledge management process within the sampled firms. This points to the need for more specialised knowledge management capabilities in these firms. We observed the need for an effective knowledge transfer support system to facilitate knowledge sharing, and particularly to capture and transfer tacit knowledge from one team member to another. In sum, our findings indicate that building effective and adaptive IT systems to manage and share knowledge is one of the biggest challenges for these small firms. Also, there is little explicit strategy in small knowledge-intensive firms targeted at systematic KM at either the strategic or the operational level. Therefore, a strategic approach to managing knowledge for innovation, together with leadership and management, is essential to achieving effective KM. In particular, the research findings demonstrate that gathering tacit knowledge, internal and external to the organisation, and applying processes to ensure the availability of knowledge to innovation teams, drives down the risks and cost of innovation. KM activities and tools, such as KM systems, environmental scanning, benchmarking, intranets, firm-wide databases and communities of practice to acquire knowledge and make it accessible, were elements of KM.
Practical Implications: The case study method used in this study provides practical insight into the knowledge management process within Australian knowledge-intensive firms. It also provides useful lessons for other firms in managing knowledge more effectively in the innovation process. The findings should be helpful for small firms searching for a practical method of managing and integrating their specialised knowledge. Using the results of this exploratory study, and to address the challenges of knowledge management, the paper proposes and discusses five practices for managing knowledge more efficiently to improve innovation: (1) knowledge-based firms must be strategic in their knowledge management processes for innovation; (2) leadership and management should encourage various knowledge management practices; (3) capturing and sharing tacit knowledge is critical and should be managed; (4) team knowledge integration practices should be developed; (5) knowledge management and integration through communication networks and technology systems should be encouraged and strengthened. In sum, the main managerial contribution of the paper is the recognition of knowledge determinants and processes, and their effects on effective knowledge management within the firm. This may serve as a useful benchmark in the strategic planning of a firm as it utilises new and specialised knowledge.
Abstract:
A generalised bidding model is developed to calculate a bidder's expected profit and the auctioneer's expected revenue/payment for both a General Independent Value and an Independent Private Value (IPV) kmth-price sealed-bid auction (where the mth bidder wins at the kth bid payment) using a linear (affine) mark-up function. The Common Value (CV) assumption, and high-bid and low-bid symmetric and asymmetric First Price Auctions and Second Price Auctions, are included as special cases. Optimal n-bidder symmetric analytical results are then provided for the uniform IPV and CV models in equilibrium. Final comments concern the implications, the assumptions involved and prospects for further research.
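As a concrete special case (a standard auction-theory result, not spelled out in the abstract): with n risk-neutral bidders whose values are i.i.d. uniform on [0, 1], the symmetric first-price equilibrium bid is affine in the value, matching the linear (affine) mark-up functional form, and revenue equivalence gives the same expected revenue as the second-price auction:

```latex
% n bidders, v_i \sim U[0,1] i.i.d., symmetric equilibrium
b_{\text{FPA}}(v) = \frac{n-1}{n}\, v, \qquad
b_{\text{SPA}}(v) = v, \qquad
\mathbb{E}[R] = \frac{n-1}{n+1}.
```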
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user-generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Retrieval and Tweet Contextualization tracks). INEX 2013 was an exciting year for INEX, in which we consolidated the collaboration with (other activities in) CLEF and for the second time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums. This paper gives an overview of all the INEX 2013 tracks, their aims and tasks, the test collections built, and an initial analysis of the results.
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2014 evaluation campaign, which consisted of three tracks. The Interactive Social Book Search Track investigated user information-seeking behavior when interacting with various sources of information, for realistic task scenarios, and how the user interface impacts search and the search experience. The Social Book Search Track investigated the relative value of authoritative metadata and user-generated content for search and recommendation using a test collection with data from Amazon and LibraryThing, including user profiles and personal catalogues. The Tweet Contextualization Track investigated tweet contextualization, helping a user to understand a tweet by providing a short background summary generated from relevant Wikipedia passages aggregated into a coherent whole. INEX 2014 was an exciting year for INEX, in which we for the third time ran our workshop as part of the CLEF labs. This paper gives an overview of all the INEX 2014 tracks, their aims and tasks, the test collections built, the participants, and an initial analysis of the results.
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX'12 evaluation campaign, which consisted of five tracks: Linked Data, Relevance Feedback, Snippet Retrieval, Social Book Search, and Tweet Contextualization. INEX'12 was an exciting year for INEX, in which we joined forces with CLEF and for the first time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums.
Abstract:
Two Archaean komatiitic flows, Fred's Flow in Canada and the Murphy Well Flow in Australia, have similar thicknesses (120 and 160 m) but very different compositions and internal structures. Their contrasting differentiation profiles are keys to determining the cooling and crystallization mechanisms that operated during the eruption of Archaean ultramafic lavas. Fred's Flow is the type example of a thick komatiitic basalt flow. It is strongly differentiated and consists of a succession of layers with contrasting textures and compositions. The layering is readily explained by the accumulation of olivine and pyroxene in a lower cumulate layer and by the evolution of the liquid composition during downward growth of spinifex-textured rocks within the upper crust. The magmas that erupted to form Fred's Flow had variable compositions, ranging from 12 to 20 wt% MgO, and phenocryst contents from 0 to 20 vol%. The flow was emplaced in two pulses: a first ~20-m-thick pulse was followed by a more voluminous but less magnesian pulse that inflated the flow to its present 120 m thickness. Following the second pulse, the flow crystallized in a closed system and differentiated into cumulates containing 30–38 wt% MgO and a residual gabbroic layer with only 6 wt% MgO. The Murphy Well Flow, in contrast, has a remarkably uniform composition throughout. It comprises a 20-m-thick upper layer of fine-grained dendritic olivine with 2–5 vol% amygdales, a 110–120 m intermediate layer of olivine porphyry and a 20–30 m basal layer of olivine orthocumulate. Throughout the flow, MgO contents vary little, from only 30 to 33 wt%, except for the slightly more magnesian basal layer (38–40 wt%). The uniform composition of the flow and the dendritic olivine habits in the upper 20 m point to rapid cooling of a highly magnesian liquid with a composition like that of the bulk of the flow.
Under equilibrium conditions, this liquid should have crystallized olivine with the composition Fo94.9, but the most magnesian composition measured by electron microprobe in samples from the flow is Fo92.9. To explain these features, we propose that the parental liquid contained around 32 wt% MgO and 3 wt% H2O. This liquid degassed during the eruption, creating a supercooled liquid that solidified quickly and crystallized olivine with non-equilibrium textures and compositions.
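The equilibrium olivine composition quoted above follows from the standard olivine-liquid Fe-Mg exchange relation (Roeder & Emslie, Kd ≈ 0.30). A sketch of that calculation is below; the melt FeO content is an assumed, plausible komatiite value (the abstract gives only MgO), so the result is illustrative rather than the authors' computation:

```python
MW_MGO, MW_FEO = 40.30, 71.85  # molar masses, g/mol

def equilibrium_fo(mgo_wt, feo_wt, kd=0.30):
    """Forsterite content (mol%) of olivine in equilibrium with a melt, using
    the Fe-Mg exchange coefficient Kd = (Fe/Mg)_olivine / (Fe/Mg)_liquid."""
    mg = mgo_wt / MW_MGO
    fe = feo_wt / MW_FEO
    x = mg / (mg + fe)                  # molar Mg# of the liquid
    return 100.0 * x / (x + kd * (1.0 - x))

# 32 wt% MgO liquid as in the abstract; FeO = 10.2 wt% is an assumption
print(round(equilibrium_fo(32.0, 10.2), 1))  # ≈ Fo94.9
```

With the assumed FeO content the calculation reproduces the Fo94.9 equilibrium composition the abstract contrasts with the measured Fo92.9.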
Abstract:
Spatial variation of seismic ground motions is caused by the incoherence effect, the wave-passage effect, and local site conditions. This study focuses on the effects of spatially varying earthquake ground motion on the responses of adjacent reinforced concrete (RC) frame structures. The adjacent buildings are modelled with soil-structure interaction (SSI) so that they can interact with each other under uniform and non-uniform ground motions. Three different site classes are used to model the soil layers of the SSI system. Based on the fast Fourier transform (FFT), spatially correlated non-uniform ground motions are generated compatible with a known power spectral density function (PSDF) at different locations. Numerical analyses are carried out to investigate the displacement responses and the absolute maximum base shear forces of adjacent structures subjected to spatially varying ground motions. The results are presented in terms of the related parameters affecting the structural response for the three soil site classes. The responses of adjacent structures changed remarkably due to the spatial variation of ground motions, and the effect can be more significant on rock sites than on clay sites.
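Generating a PSDF-compatible motion is commonly done with the spectral-representation method. The sketch below shows only the single-point step with an illustrative Kanai-Tajimi PSDF (all parameter values are assumptions); the paper's non-uniform multi-support motions would additionally filter the per-support random phases through a coherency model, which is omitted here for brevity:

```python
import numpy as np

def simulate_ground_motion(psd, w_max, n_freq, dt, n_steps, rng=None):
    """Spectral-representation method: synthesise a ground-motion sample whose
    power spectral density matches a target one-sided PSDF S(w):
    a(t) = sum_k sqrt(2 S(w_k) dw) * cos(w_k t + phi_k), phi_k ~ U[0, 2pi)."""
    rng = np.random.default_rng() if rng is None else rng
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw          # midpoint frequencies (rad/s)
    amp = np.sqrt(2.0 * psd(w) * dw)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    t = np.arange(n_steps) * dt
    a = np.sum(amp[:, None] * np.cos(w[:, None] * t[None, :] + phi[:, None]), axis=0)
    return t, a

def kanai_tajimi(w, s0=0.01, wg=15.0, zg=0.6):
    """Kanai-Tajimi PSDF; s0, wg (site frequency), zg (site damping) are
    illustrative parameter choices, not values from the paper."""
    r = (w / wg) ** 2
    return s0 * (1 + 4 * zg**2 * r) / ((1 - r) ** 2 + 4 * zg**2 * r)

t, acc = simulate_ground_motion(kanai_tajimi, w_max=80.0, n_freq=400, dt=0.01, n_steps=2000)
```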
Abstract:
As a key element in their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services in their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, together with one other, The Courier Mail (recognising the diversification of types of product in this field by including it as a representative of Newscorp, now a major participant).
The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000, as described in the earlier work, and the situation in 2014, after a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up to the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities among themselves; and finally one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions was conducted as a systematic appraisal of the first-level, or principal, screens of the three publications over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were devised through a preliminary examination of the products over three days in the preceding week. That process identified significant elements of media production, such as variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of the top news stories of the day; and the presence of standard news values, once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products were adapted to print, to give the closest match of sets of variables.
This device, studying the two sets of publications on like standards (essentially production values and news values), has enabled the comparisons to be made. Comparing the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, which carry very diverse information by subject and level of depth and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on the main points of comparison. This takes the form of a text field for the insertion of notes in the table employed for summarising the features of each product for each day. Once the sets of comparisons outlined above are noted, the process becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of the findings support the proposition that a review of the earlier study, and a forensic review of the new models, provides evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change and future prospects? Preliminary observations suggest that a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services.
The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of the limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes. It should also be useful as a reference for further inquiries of the same kind, and for monitoring the situation of online newspaper portals into the future.
Abstract:
Numerical results are presented to investigate the performance of a partly-filled porous heat exchanger for waste heat recovery units. A parametric study was conducted to investigate the effects of inlet velocity and porous block height on the pressure drop of the heat exchanger. The focus of this work is on modelling the interface between the porous and non-porous regions. As such, numerical simulation of the problem is conducted along with hot-wire measurements to better understand the physics of the problem. Results from the two sources are then compared to existing theoretical predictions in the literature, which are unable to predict the existence of the two separation regions before and after the porous block. More interestingly, a non-uniform interface velocity was observed along the streamwise direction in both the numerical and the experimental data.