Abstract:
Boundaries are an important field of study because they mediate almost every aspect of organizational life. They are becoming increasingly important as organizations change more frequently and yet, despite the endemic use of the boundary metaphor in common organizational parlance, they are poorly understood. Organizational boundaries are under-theorized, and researchers in related fields often simply assume their existence without defining them. The literature on organizational boundaries is fragmented, with no unifying theoretical basis. As a result, when it is recognized that an organizational boundary is "dysfunctional", there is little recourse to models on which to base remediating action. This research sets out to develop just such a theoretical model and is guided by the general question: "What is the nature of organizational boundaries?" It is argued that organizational boundaries can be conceptualised through elements of both social structure and social process. Elements of structure include objects, coupling, properties and identity. Social processes include objectification, identification, interaction and emergence. All of these elements are integrated by a core category, or basic social process, called boundary weaving. An organizational boundary is a complex system of objects and emergent properties that are woven together by people as they interact, objectifying the world around them, identifying with these objects and creating couplings of varying strength and polarity, as well as their own fragmented identity. Organizational boundaries are characterised by the multiplicity of interconnections, a particular domain of objects, varying levels of embodiment and patterns of interaction. The theory developed in this research emerged from an exploratory, qualitative research design employing grounded theory methodology.
The field data were collected from the training headquarters of the New Zealand Army using semi-structured interviews and follow-up observations. The unit of analysis is an organizational boundary. Only one research context was used because of the richness and multiplicity of organizational boundaries that were present. The model arose, grounded in the data collected, through a process of theoretical memoing and constant comparative analysis. Academic literature was used as a source of data to aid theory development and the saturation of some central categories. The final theory is classified as middle range, being substantive rather than formal, and is generalizable across medium to large organizations in low-context societies. The main limitation of the research arose from its breadth, with multiple lines of inquiry spanning several academic disciplines and some relevant areas, such as the role of identity and complexity, addressed at a necessarily high level. The organizational boundary theory developed by this research replaces the typology approaches typical of previous theory on organizational boundaries and reconceptualises the nature of groups in organizations as well as the role of "boundary spanners". It also has implications for any theory that relies on the concept of boundaries, such as general systems theory. The main contribution of this research is the development of a holistic model of organizational boundaries, including an explanation of the multiplicity of boundaries: no organization has a single definable boundary. A significant aspect of this contribution is the integration of aspects of complexity theory and identity theory to explain the emergence of higher-order properties of organizational boundaries and of organizational identity. The core category of "boundary weaving" is a powerful new metaphor that significantly reconceptualises the way organizational boundaries may be understood in organizations.
It invokes secondary metaphors such as the weaving of an organization's "boundary fabric" and provides managers with other metaphorical perspectives, such as the management of boundary friction, boundary tension, boundary permeability and boundary stability. Opportunities for future research reside in formalising and testing the theory, as well as in developing analytical tools that would enable managers in organizations to apply the theory in practice.
Abstract:
Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or "burn criterion". The burn criterion in use at the onset of this project was arbitrarily selected and did not accurately reflect the length a sample must burn in order to be burning independent of the ignition event; in some cases, it required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independent of the igniter and should be considered flammable.
The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The extent of the HAZ predicted varied significantly, ranging from ~2-27 mm depending on the test sample thermal properties and test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revision of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling the conduct of reduced gravity metallic material flammability testing in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e. Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications will typically use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity.
The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal and reduced gravity. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample – fuel) and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50%, from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity, for the Inconel® 718, and by 20%, from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity, for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications (for fire safety and engineering applications) of these results are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
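The reported normal- versus reduced-gravity changes can be sanity-checked with a few lines of arithmetic. This is an illustrative check using only the pressure and regression-rate values quoted in the abstract, not part of the original work:

```python
def percent_decrease(normal, reduced):
    """Percentage drop from the normal-gravity value to the reduced-gravity value."""
    return 100.0 * (normal - reduced) / normal

def rate_ratio(reduced, normal):
    """Factor by which the average RRMI increased in reduced gravity."""
    return reduced / normal

# Lowest Burn Pressure (MPa): normal gravity vs reduced gravity
lbp_drop_inconel = percent_decrease(3.45, 1.72)  # ~50 %
lbp_drop_ss316 = percent_decrease(3.45, 2.76)    # 20 %

# Average RRMI (mm/s): reduced gravity vs normal gravity
rrmi_factor_inconel = rate_ratio(27.2, 12.8)  # ~2.1 (reported as 2.2, presumably from unrounded rates)
rrmi_factor_ss316 = rate_ratio(15.0, 9.5)     # ~1.6
```

Note the choice of denominator matters: the decreases are computed relative to the normal-gravity baseline, consistent with "decreased by X% from ... to ...".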
Abstract:
Zeolite-based technology can provide a cost-effective solution for stormwater treatment for the removal of toxic heavy metals, under increasing demand for safe water from alternative sources. This paper reviews the currently available knowledge relating to the effect of zeolite properties, such as pore size, surface area and Si:Al ratio, and the physico-chemical conditions of the system, such as pH, temperature, initial metal concentration and zeolite concentration, on heavy metal removal performance. The primary aims are to consolidate available knowledge and to identify knowledge gaps. It was established that an in-depth understanding of operational issues, such as diffusion of metal ions into the zeolite pore structure, pore clogging, zeolite surface coverage by particulates in stormwater, as well as the effect of pH on stormwater quality in the presence of zeolites, is essential for developing a zeolite-based technology for the treatment of polluted stormwater. The optimum zeolite concentration to treat typical volumes of stormwater and initial heavy metal concentrations in stormwater should also be considered as operational issues in this regard. Additionally, leaching of aluminium and sodium ions from the zeolite structure into solution was identified as a key issue requiring further research in the effort to develop cost-effective solutions for the removal of heavy metals from stormwater.
Abstract:
Context: Parliamentary committees established in Westminster parliaments, such as Queensland's, provide a cross-party structure that enables them to recommend policy and legislative changes that may otherwise be difficult for one party to recommend. The overall parliamentary committee process tends to be more cooperative and less adversarial than the main chamber of parliament and, as a result, this process permits parliamentary committees to base recommendations more on the available research evidence and less on political or party considerations. Objectives: This paper considers the contributions that parliamentary committees in Queensland have made in the past in the areas of road safety, drug use, and organ and tissue donation. The paper also discusses the importance of researchers actively engaging with parliamentary committees to ensure the best evidence-based policy outcomes. Key messages: In the past, parliamentary committees have successfully facilitated important safety changes, with many committee recommendations based on research results. In order to maximise the benefits of the parliamentary committee process, it is essential that researchers inform committees about their work and become key stakeholders in the inquiry process. Researchers can keep committees informed by making submissions to their inquiries, responding to requests for information and appearing as witnesses at public hearings. Researchers should emphasise the key findings and implications of their research, as well as considering the jurisdictional implications and political consequences. It is important that researchers understand the differences between lobbying and providing informed recommendations when interacting with committees. Discussion and conclusions: Parliamentary committees in Queensland have successfully assisted in the introduction of evidence-based policy and legislation.
In order to present best practice recommendations, committees rely on the evidence presented to them, including the results of research. Actively engaging with parliamentary committees will help researchers to turn their results into practice, with a corresponding decrease in injuries and fatalities. Developing an understanding of parliamentary committees, and of the typical inquiry process used by these committees, will help researchers to present their research results in a manner that encourages the adoption of their ideas by parliamentary committees, the presentation of these results as recommendations within the committee's report and the subsequent enactment of the committee's recommendations by the government.
Abstract:
Vehicle-emitted particles are of significant concern due to their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially due to the fact that the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger's spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing typical transport microenvironments. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, and particle number and PM2.5 concentrations, during two different seasons.
Simultaneous traffic and meteorological parameters were also monitored, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations. A further assessment of passenger exposure to bus emissions on a platform was conducted at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform.
For the whole day, the average PN13-800 concentration was 1.3×10⁴ and 1.0×10⁴ particles/cm³ at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of exposure at a location relative to the daily exposure) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, "exposure intensity" (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution in further dispersion models for traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the accurate emission distribution for real vehicle flow.
This model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increased particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses: 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions. It was found that the total emissions produced during stopping at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by factors of 13, 11, 45, 11, 41 and 43 for the above six cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentration, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design.
Passenger spatial-temporal exposure to bus emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within the transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also proved its applicability and simplicity for use in a real-world transport microenvironment.
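The "exposure intensity" parameter defined in this abstract is a simple ratio, which a short sketch makes concrete. The figures below are the school-student values quoted above; this is an illustrative calculation only, not code from the thesis:

```python
def exposure_intensity(daily_exposure_fraction_pct, daily_time_fraction_pct):
    """Exposure intensity = daily exposure fraction / daily time fraction.
    A value above 1 means the location contributes disproportionately to
    total daily exposure relative to the time spent there."""
    return daily_exposure_fraction_pct / daily_time_fraction_pct

# School student at the bus station: 0.8 % of the day spent there,
# 2.7 % of daily PN13-800 exposure received there.
intensity = exposure_intensity(2.7, 0.8)  # ~3.4 (reported as 3.3, presumably from unrounded data)
```

Because the station accounts for under 1% of the day but roughly 3% of daily exposure, the intensity well above 1 flags it as an exposure hotspot.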
Abstract:
In recent years, the effect of ions and ultrafine particles on ambient air quality and human health has been well documented; however, knowledge about their sources, concentrations and interactions within different types of urban environments remains limited. This thesis presents the results of numerous field studies aimed at quantifying variations in ion concentration with distance from the source, as well as identifying the dynamics of the particle ionisation processes which lead to the formation of charged particles in the air. In order to select the most appropriate measurement instruments and locations for the studies, a literature review was also conducted on studies that reported ion and ultrafine particle emissions from different sources in a typical urban environment. The initial study involved laboratory experiments on the attachment of ions to aerosols, so as to gain a better understanding of the interaction between ions and particles. This study determined the efficiency of corona ions at charging and removing particles from the air, as a function of different particle number and ion concentrations. The results showed that particle number loss was directly proportional to particle charge concentration, and that higher small ion concentrations led to higher particle deposition rates in all size ranges investigated. Nanoparticle concentrations were also observed to decrease with increasing particle charge concentration, due to the nanoparticles' higher Brownian mobility and subsequent attachment to charged particles. Given that corona discharge from high-voltage powerlines is considered one of the major ion sources in urban areas, a detailed study was then conducted under three parallel overhead powerlines, with a steady wind blowing in a direction perpendicular to the lines.
The results showed that large sections of the lines did not produce any corona at all, while strong positive emissions were observed from discrete components, such as a particular set of spacers on one of the lines. Measurements were also conducted at eight upwind and downwind points perpendicular to the powerlines, spanning a total distance of about 160 m. The maximum positive small and large ion concentrations, and DC electric field, were observed at a point 20 m downwind from the lines, with median values of 4.4×10³ cm⁻³, 1.3×10³ cm⁻³ and 530 V m⁻¹, respectively. It was estimated that, at this point, less than 7% of the total number of particles was charged. The electrical parameters decreased steadily with increasing downwind distance from the lines but remained significantly higher than background levels at the limit of the measurements. Moreover, vehicles are one of the most prevalent ion- and particle-emitting sources in urban environments, and therefore experiments were also conducted behind a motor vehicle exhaust pipe and near busy motorways, with the aim of quantifying small ion and particle charge concentrations, as well as their distribution as a function of distance from the source. The study found that approximately equal numbers of positive and negative ions were observed in the vehicle exhaust plume, as well as near motorways, of which heavy duty vehicles were believed to be the main contributor. In addition, cluster ion concentration was observed to decrease rapidly within the first 10-15 m from the road, and ion-ion recombination and ion-aerosol attachment were the most likely causes of ion depletion, rather than dilution- and turbulence-related processes. In addition to the above-mentioned dominant ion sources, other sources also exist within urban environments where intensive human activities take place.
In this part of the study, airborne concentrations of small ions, particles and net particle charge were measured at 32 different outdoor sites in and around Brisbane, Australia, which were classified into seven different groups as follows: park, woodland, city centre, residential, freeway, powerlines and power substation. Whilst the study confirmed that powerlines, power substations and freeways were the main ion sources in an urban environment, it also suggested that not all powerlines emitted ions, only those with discrete corona discharge points. In addition to the main ion sources, higher ion concentrations were also observed in environments affected by vehicle traffic and human activities, such as the city centre and residential areas. A considerable number of ions were also observed in a woodland area, and it is still unclear whether they were emitted directly from the trees or originated from some other local source. Overall, it was found that different types of environments had different types of ion sources, which could be classified as unipolar or bipolar particle sources, as well as ion sources that co-exist with particle sources. In general, fewer small ions were observed at sites with co-existing sources; however, particle charge was often higher due to the effect of ion-particle attachment. In summary, this study quantified ion concentrations in typical urban environments, identified major charge sources in urban areas, and determined the spatial dispersion of ions as a function of distance from the source, as well as their controlling factors. The study also presented ion-aerosol attachment efficiencies under high ion concentration conditions, both in the laboratory and in real outdoor environments. The outcomes of these studies addressed the aims of this work and advanced understanding of the charge status of aerosols in the urban environment.
Abstract:
Photochemistry has made significant contributions to our understanding of many important natural processes as well as to scientific discoveries in the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate, or chemometrics, methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. In photochemistry, by comparison, applications of such multivariate approaches have been less frequent, although a variety of methods have been used, especially in spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview, and typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in- and on-line monitoring (near infrared spectroscopy) and fast-time resolution of spectroscopic signals from photochemical reactions. It is also quite clear from the literature that the scope of spectroscopic photochemistry has been enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production.
In addition, the applications of Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering are demonstrated with an analysis of a water quality data matrix. Other examples include the application of simultaneous kinetic spectroscopic methods for the prediction of pesticides, and the use of a response fingerprinting approach for the classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometric interpretation of multivariate photochemical data; an Appendix of references and summaries of the common and less usual chemometrics methods noted in this work is provided.
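The data-display role of PCA mentioned above reduces a samples-by-variables matrix to a few component scores and loadings, the two ingredients of a biplot. A generic SVD-based sketch on synthetic data follows; it is illustrative only and does not use the authors' photocatalyst or water quality matrices:

```python
import numpy as np

def pca(X, n_components=2):
    """Minimal PCA via SVD: centre the data matrix, then return sample
    scores and variable loadings for the leading components."""
    Xc = X - X.mean(axis=0)                 # column-centre the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T       # object (sample) coordinates
    loadings = Vt[:n_components].T          # variable coordinates
    return scores, loadings

# Synthetic stand-in for a performance matrix (10 objects x 4 variables)
rng = np.random.default_rng(42)
X = rng.normal(size=(10, 4))
scores, loadings = pca(X)
```

Plotting the score points and the loading vectors on the same axes yields the biplot, in which objects that cluster together behave similarly and nearby loading vectors indicate correlated variables.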
Abstract:
Partitioning of heavy metals between the particulate and dissolved fractions of stormwater primarily depends on the adsorption characteristics of solid particles. Moreover, the bioavailability of heavy metals is also influenced by the adsorption behaviour of solids. However, due to the lack of fundamental knowledge in relation to the heavy metal adsorption processes of road-deposited solids, the effectiveness of stormwater management strategies can be limited. This research study focused on the investigation of the physical and chemical parameters of solids on urban road surfaces and, more specifically, on heavy metal adsorption to solids. Due to the complex nature of heavy metal interaction with solids, a substantial database was generated through a series of field investigations and laboratory experiments. The study sites for build-up pollutant sample collection were selected from four urbanised suburbs located in a major river catchment. Sixteen road sites were selected from these suburbs, representing typical industrial, commercial and residential land uses. Build-up pollutants were collected using a wet and dry vacuum collection technique specially designed to improve fine particle collection. Roadside soil samples were also collected from each suburb for comparison with the road surface solids. The collected build-up solids samples were separated into four particle size ranges and tested for a range of physical and chemical parameters. The solids build-up on road surfaces contained a high fraction (70%) of particles smaller than 150 µm, which are favourable for heavy metal adsorption. These solid particles predominantly consist of soil-derived minerals, including quartz, albite, microcline, muscovite and chlorite. Additionally, a high percentage of amorphous content was also identified in road-deposited solids.
In comparing the mineralogical data of the surrounding soil and the road-deposited solids, it was found that about 30% of the solids consisted of particles generated by traffic-related activities on road surfaces. Significant differences in mineralogical composition were noted between different particle sizes of build-up solids. Fine particles (<150 µm) consisted of a clayey matrix and high amorphous content (in the region of 40%), while coarse particles (>150 µm) consisted of a sandy matrix at all study sites, with about 60% quartz content. Due to these differences in mineralogical components, particles larger and smaller than 150 µm had significant differences in their specific surface area (SSA) and effective cation exchange capacity (ECEC). These parameters, in turn, exert a significant influence on heavy metal adsorption. Consequently, the heavy metal content of >150 µm particles was lower than that of fine particles. The particle size range <75 µm had the highest heavy metal content, corresponding with its high clay-forming minerals, high organic matter and low quartz content, which increased the SSA, the ECEC and the presence of Fe, Al and Mn oxides. The clay-forming minerals, high organic matter and Fe, Al and Mn oxides create distinct groups of charge sites on solids surfaces and exhibit different adsorption mechanisms and bond strengths between heavy metal elements and charge sites. Therefore, the predominance of these factors in different particle sizes leads to different heavy metal adsorption characteristics. Heavy metals show a preference for association with clay-forming minerals in fine particles, whilst in coarse particles heavy metals preferentially associate with organic matter. 
Although heavy metal adsorption to amorphous material is very low, the heavy metals embedded in traffic-related materials have a potential impact on stormwater quality. Adsorption of heavy metals is not confined to a single type of charge site; rather, specific heavy metal elements show a preference for adsorption to several different types of charge sites in solids. This is attributed to the dearth of preferred binding sites and the inability to reach the preferred binding sites due to competition between different heavy metal species. This confirms that heavy metal adsorption is significantly influenced by the physical and chemical parameters of solids that lead to a heterogeneity of surface charge sites. The research study highlighted the importance of removing solid particles from stormwater runoff before they enter receiving waters, to reduce the potential risk posed by the bioavailability of heavy metals. The bioavailability of heavy metals not only results from the easily mobile fraction bound to the particles, but can also occur through the dissolution of other forms of bonds by chemical changes in stormwater or by microbial activity. Due to the diversity in the composition of the different particle sizes of solids, and in the characteristics and amount of charge sites on particle surfaces, investigations using bulk solids are not adequate for understanding the heavy metal adsorption processes of solid particles. Therefore, the investigation of different particle size ranges is recommended for enhancing stormwater quality management practices.
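To illustrate why fraction-by-fraction analysis matters, the sketch below combines invented per-size-fraction mass shares and zinc concentrations into a bulk metal load; all numbers are hypothetical, chosen only to show how fine particles can dominate the total load even when coarse particles also carry some metal.

```python
# Hypothetical illustration: per-size-fraction metal loads show how fine
# particles can dominate the total load. All numbers are invented for
# illustration and are not values from the study.
fractions = {            # size range: (mass fraction of build-up, Zn mg/kg)
    "<75 um":     (0.40, 900.0),
    "75-150 um":  (0.30, 500.0),
    "150-300 um": (0.20, 200.0),
    ">300 um":    (0.10, 100.0),
}
# bulk concentration = mass-weighted sum over fractions, mg Zn per kg solids
total = sum(mf * c for mf, c in fractions.values())
shares = {k: mf * c / total for k, (mf, c) in fractions.items()}
print(f"bulk concentration: {total:.0f} mg/kg")
for k, s in shares.items():
    print(f"{k:>10}: {100 * s:.0f}% of load")
```

With these invented figures the <75 µm fraction carries well over half the zinc load, which is the kind of structure a bulk-solids measurement would hide.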
Resumo:
Based on AFM bending experiments, a molecular dynamics (MD) bending simulation model is established which can accurately account for the full spectrum of the mechanical properties of nanowires (NWs) in a double-clamped beam configuration, ranging from elasticity to plasticity and failure. It is found that the loading rate exerts a significant influence on the mechanical behaviour of NWs; specifically, a loading rate lower than 10 m/s is found reasonable for homogeneous bending deformation. Both the loading rate and the interaction potential between the tip and the NW are found to play an important role in the adhesion phenomenon. The force versus displacement (F-d) curves from the MD simulations are highly consistent in shape with those from experiments. Symmetrical F-d curves during the loading and unloading processes are observed, revealing the linear-elastic and non-elastic bending deformation of NWs. The typical bending-induced tensile-compressive features are observed. Meanwhile, the simulation results are excellently fitted by the classical Euler-Bernoulli beam theory with an axial effect. It is concluded that the axial tensile force becomes crucial in bending deformation when the beam size is down to the nanoscale for double-clamped NWs. In addition, we find that shorter NWs yield earlier and at a larger yield force. The mechanical properties (Young's modulus and yield strength) obtained from bending and tensile deformations are found to be comparable with each other; specifically, the modulus is essentially similar under the two loading methods, while the yield strength during bending is observed to be larger than that during tension.
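The axial effect described above can be illustrated with a common large-deflection approximation for a doubly clamped beam under a central point load, F(d) = (192EI/L³)d + (π⁴EA/8L³)d³, where the cubic term arises from membrane stretching of the clamped wire. The sketch below evaluates this for a hypothetical nanowire; the material and geometry values are invented, and the formula is a textbook approximation rather than the paper's fitted model.

```python
import numpy as np

# Hypothetical nanowire parameters (illustrative, not from the study)
E = 80e9      # Young's modulus, Pa
D = 20e-9     # wire diameter, m
L = 400e-9    # suspended length, m
A = np.pi * D**2 / 4    # cross-sectional area (circular wire)
I = np.pi * D**4 / 64   # second moment of area

def force(d):
    """Center load on a doubly clamped beam, with an axial stretching term."""
    linear = 192 * E * I / L**3 * d                  # classical bending term
    axial = np.pi**4 * E * A / (8 * L**3) * d**3     # membrane stretching term
    return linear + axial

d = np.linspace(0, 2 * D, 200)
F = force(d)
# deflection at which the axial term equals the bending term:
d_star = np.sqrt(1536 * I / (np.pi**4 * A))
print(f"axial term dominates beyond d = {d_star / D:.2f} diameters")
```

For a circular cross-section the crossover works out to roughly one wire diameter of deflection, which matches the qualitative conclusion that the axial tensile force becomes crucial at the nanoscale, where deflections comparable to the diameter are routine.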
Resumo:
Key establishment is a crucial cryptographic primitive for building secure communication channels between two parties in a network. It has been studied extensively in theory and widely deployed in practice. In the research literature a typical protocol in the public-key setting aims for key secrecy and mutual authentication. However, there are many important practical scenarios where mutual authentication is undesirable, such as in anonymity networks like Tor, or is difficult to achieve due to insufficient public-key infrastructure at the user level, as is the case on the Internet today. In this work we are concerned with the scenario where two parties establish a private shared session key, but only one party authenticates to the other; in fact, the unauthenticated party may wish to have strong anonymity guarantees. We present a desirable set of security, authentication, and anonymity goals for this setting and develop a model which captures these properties. Our approach allows for clients to choose among different levels of authentication. We also describe an attack on a previous protocol of Øverlier and Syverson, and present a new, efficient key exchange protocol that provides one-way authentication and anonymity.
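A toy sketch of the one-way authentication idea: the server holds the only long-term key, the client contributes a purely ephemeral value, and the session key mixes an ephemeral-ephemeral and an ephemeral-static Diffie-Hellman secret, so only a holder of the server's static key can complete it. This loosely mirrors the structure of protocols in this setting but is not the paper's protocol, and the tiny parameters are for illustration only, not security.

```python
import hashlib
import secrets

# Toy parameters -- fine for illustration, NOT cryptographically secure.
p = 2**127 - 1   # small Mersenne prime
g = 5

def H(*parts):
    """Derive a session key by hashing the shared secrets and transcript."""
    h = hashlib.sha256()
    for x in parts:
        h.update(x.to_bytes(16, "big"))
    return h.hexdigest()

# Server's static key pair: the only long-term credential in the protocol.
b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p)                 # assumed known authentically by the client

# Client: ephemeral only, so it contributes no identity (anonymity).
x = secrets.randbelow(p - 2) + 1
X = pow(g, x, p)                 # sent to the server

# Server: a fresh ephemeral value per session.
y = secrets.randbelow(p - 2) + 1
Y = pow(g, y, p)                 # sent to the client

# Both sides mix the ephemeral-ephemeral and ephemeral-static DH secrets;
# only a party knowing b can compute the second term: one-way authentication.
k_client = H(pow(Y, x, p), pow(B, x, p), X, Y, B)
k_server = H(pow(X, y, p), pow(X, b, p), X, Y, B)
assert k_client == k_server
```

The client never proves an identity, yet it knows the session key is shared only with the party holding the static key b, which is exactly the asymmetry the abstract describes.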
Resumo:
In this video, a couple sits on a couch slowly breaking up. A typical shot/reverse-shot filmic structure is offset as the sound and image go out of sync. At times they become so out of sync that the couple mouth each other's words. This work engages with the signifying processes of romantic narratives. It emphasizes disruption and discontinuity as fundamental and generative operations in making meaning. Extending post-structuralist and deconstructionist ideas, the work emphasizes the constructed nature of representations of heterosexual relationships. It draws attention to the gaps, slippages and fragments that pervade signifying acts.
Resumo:
Much has been said and documented about the key role that reflection can play in the ongoing development of e-portfolios, particularly e-portfolios utilised for teaching and learning. A review of e-portfolio platforms reveals that a designated space for documenting and collating personal reflections is a typical design feature of both open-source and commercial off-the-shelf software. Further investigation of the tools within e-portfolio systems for facilitating reflection reveals that, apart from enabling personal journaling through blogs or other writing, scaffolding tools that encourage the actual process of reflection are under-developed. Investigation of a number of prominent e-portfolio projects also reveals that reflection, while presented as critically important, is often viewed as an activity that takes place after a learning activity or experience, rather than being intrinsic to it. This paper assumes an alternative, richer conception of reflection: a process integral to a wide range of activities associated with learning, such as inquiry, communication, editing, analysis and evaluation. Such a conception is consistent with the literature associated with 'communities of practice', which is replete with insight into 'learning through doing', and with a 'whole-minded' approach to inquiry. Thus, graduates who are 'reflective practitioners', integrating reflection into their learning, will have more to offer a prospective employer than graduates who have adopted an episodic approach to reflection. So, what kinds of tools might facilitate integrated reflection? This paper outlines a number of possibilities for consideration and development. Such tools do not have to be embedded within e-portfolio systems, although there are benefits in doing so. 
In order to inform the future design of e-portfolio systems, this paper presents a faceted model of knowledge creation that depicts an 'ecology of knowing', in which interaction with, and the production of, learning content is deepened through the construction of well-formed questions about that content. In particular, questions initiated by 'why' are explored, because they are distinguished from the other 'journalist' questions (who, what, when, where, and how) in that answers to them demand explanative, as opposed to descriptive, content: they require a rationale. Although why-questions do not belong to any one genre and are not simple to classify (responses can contain motivational, conditional, causal, and/or existential content), they do make a difference in the acquisition of understanding. The development of scaffolding that builds on why-questioning to enrich learning is the motivation behind the research that has informed this paper.
Resumo:
The increasing stock of aging office buildings will see significant growth in retrofitting projects in Australian capital cities. Stakeholders in refitting works will also need to take on the sustainability challenge and realize tangible outcomes through project delivery. Traditionally, decision making for aged buildings, when facing the alternatives, is typically economically driven and conducted on an ad hoc basis. This leads to the tendency either to delay refitting for as long as possible, causing building conditions to deteriorate, or simply to demolish and rebuild at an unjustified financial cost. The technologies involved are often limited to a typical strip-clean and repartition with dry walls and office cubicles. Changing business operational patterns, the efficiency of office space, and the demand for an improved workplace environment will require more innovative and intelligent approaches to refurbishing office buildings. For example, such projects may need to respond to political, social, environmental and financial implications. There is a need for the total consideration of building structural assessment; modelling of operating and maintenance costs; new architectural and engineering designs that maximise the utility of the existing structure and the resulting productivity improvement; and specific construction management procedures, including procurement methods, workflow and scheduling, and occupational health and safety. Recycling potential and conformance to codes may be other major issues. This paper introduces examples of Australian research projects which provide a more holistic approach to decision making in refurbishing office space, using appropriate building technologies and products, assessment of residual service life, floor space optimisation and project procurement, in order to bring about sustainable outcomes. 
The paper also discusses a specific case study on critical factors that influence key building components for these projects and issues for integrated decision support when dealing with the refurbishment, and indeed the “re-life”, of office buildings.
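A minimal sketch of the kind of multi-criteria scoring such holistic decision support might use: a plain weighted sum over criteria. The criteria, weights and option scores below are all invented for illustration; a real project would calibrate them from structural assessment, cost modelling and stakeholder input.

```python
# Hypothetical weighted-sum scoring for an aging office building.
# Weights and scores are invented for illustration only.
criteria = {  # criterion: weight (weights sum to 1.0)
    "lifecycle cost": 0.30,
    "residual structural life": 0.25,
    "environmental impact": 0.20,
    "workplace quality": 0.15,
    "disruption to tenants": 0.10,
}
options = {   # scores 0-10 per criterion, in the same order as `criteria`
    "strip-clean refit":  [7, 5, 5, 4, 8],
    "deep refurbishment": [6, 8, 8, 9, 5],
    "demolish & rebuild": [3, 10, 4, 10, 2],
}
weights = list(criteria.values())
totals = {name: sum(w * s for w, s in zip(weights, scores))
          for name, scores in options.items()}
best = max(totals, key=totals.get)
print(best, totals)
```

The point of the exercise is that once non-financial criteria carry explicit weights, the ranking can differ from a purely cost-driven choice, which is the ad hoc practice the paper argues against.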
Resumo:
We describe the population pharmacokinetics of an acepromazine (ACP) metabolite (2-(1-hydroxyethyl)promazine) (HEPS) in horses for the estimation of likely detection times in plasma and urine. Acepromazine (30 mg) was administered to 12 horses, and blood and urine samples were taken at frequent intervals for chemical analysis. A Bayesian hierarchical model was fitted to describe concentration-time data and cumulative urine amounts for HEPS. The metabolite HEPS was modelled separately from the parent ACP as the half-life of the parent was considerably less than that of the metabolite. The clearance ($Cl/F_{PM}$) and volume of distribution ($V/F_{PM}$), scaled by the fraction of parent converted to metabolite, were estimated as 769 L/h and 6874 L, respectively. For a typical horse in the study, after receiving 30 mg of ACP, the upper limit of the detection time was 35 hours in plasma and 100 hours in urine, assuming an arbitrary limit of detection of 1 $\mu$g/L, and a small ($\approx 0.01$) probability of detection. The model derived allowed the probability of detection to be estimated at the population level. This analysis was conducted on data collected from only 12 horses, but we assume that this is representative of the wider population.
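Using the reported typical-horse estimates, a naive one-compartment, mono-exponential sketch recovers a ballpark plasma detection time. This simplification ignores formation and absorption kinetics and between-horse variability, which is why it yields a shorter time than the reported 35-hour upper limit (a population-level bound at a small detection probability).

```python
import math

# Reported typical-horse estimates for the HEPS metabolite (scaled by the
# fraction of ACP converted). Mono-exponential decline from an instantaneous
# bolus is an assumption made for this sketch only.
dose_ug = 30_000.0   # 30 mg ACP, treated here as if fully appearing as HEPS
Cl = 769.0           # L/h  (Cl/F_PM)
V = 6874.0           # L    (V/F_PM)
lod = 1.0            # ug/L, the arbitrary limit of detection

ke = Cl / V                          # elimination rate constant, 1/h
t_half = math.log(2) / ke            # metabolite half-life, h
C0 = dose_ug / V                     # naive initial plasma concentration, ug/L
t_detect = math.log(C0 / lod) / ke   # time until concentration falls below LOD
print(f"half-life = {t_half:.1f} h, plasma detectable for = {t_detect:.0f} h")
```

This gives a half-life of about 6 hours and a naive detection window of roughly 13 hours; the gap to the paper's 35-hour figure shows how much the population-level uncertainty widens the bound.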
Resumo:
Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Due to the significance of UX in the success of mobile video (Jordan, 2002), many researchers have centered on this area, examining users' expectations, motivations, requirements, and usage context. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for a specific mobile video service is lacking to structure such a large number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics are able to estimate the user-perceived quality or acceptability of mobile video, but may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed further aspects of UX for mobile multimedia applications, and these need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences). 
In this chapter, we investigate the existing important UX frameworks, compare their similarities and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored through comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video, by taking the influences on UX into complete consideration, and the improvement of mobile video service quality, by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of the UX of mobile video. In the conclusion, we suggest some open issues for future study.
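As a concrete, and deliberately simplistic, illustration of a QoE-style metric, the sketch below maps a few objective delivery factors onto a 1-5 score. The factor choices, weights and mappings are invented for illustration; real QoE models are calibrated against user studies rather than written down by hand.

```python
# Toy QoE-style metric for mobile video: map objective delivery factors
# onto a 1-5 score. All weights and mappings are invented for illustration.
def toy_qoe(bitrate_kbps, stall_ratio, startup_s):
    bitrate_term = min(bitrate_kbps / 2000.0, 1.0)   # saturates at 2 Mbps
    stall_term = max(1.0 - 4.0 * stall_ratio, 0.0)   # stalling hurts quickly
    startup_term = max(1.0 - startup_s / 10.0, 0.0)  # 10 s startup scores 0
    score = 0.5 * bitrate_term + 0.35 * stall_term + 0.15 * startup_term
    return 1.0 + 4.0 * score                          # rescale onto 1-5

good = toy_qoe(bitrate_kbps=1800, stall_ratio=0.0, startup_s=1.0)
bad = toy_qoe(bitrate_kbps=400, stall_ratio=0.10, startup_s=6.0)
print(f"good session: {good:.2f}, degraded session: {bad:.2f}")
```

Even this crude model captures the chapter's central point: the same delivery system produces very different experience scores depending on resource constraints, so UX optimization means trading these factors off rather than maximizing any one of them.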