332 results for Routine formulas


Relevance: 10.00%

Publisher:

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to oscillating drag forces (as would occur in a standing sound wave) and, finally, the behaviour of nanoparticles on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multi-compartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations; these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
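The link-evaporation picture sketched in this abstract treats each bond holding a link together as a Markov (pure-death) process. A minimal, hypothetical illustration of that idea follows; nothing here is taken from the thesis, and the bond count, the constant per-bond evaporation rate and the function name are placeholders chosen for the example.

import numpy as np

def simulate_link_failure(n_bonds=10, rate_per_bond=0.5, rng=None):
    # Sample the failure time of a single link held together by n_bonds
    # molecules, each evaporating independently at a constant rate
    # (a pure-death Markov process). Returns the time of the last evaporation.
    rng = np.random.default_rng() if rng is None else rng
    t, remaining = 0.0, n_bonds
    while remaining > 0:
        total_rate = remaining * rate_per_bond    # all surviving bonds compete
        t += rng.exponential(1.0 / total_rate)    # waiting time to the next evaporation
        remaining -= 1
    return t

# Monte Carlo estimate of the mean fragmentation time of one link
times = [simulate_link_failure() for _ in range(10_000)]
print(f"mean link failure time: {np.mean(times):.3f}")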
Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical however there has been a handful of successful experiments which demonstrate the effect it practise. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, the theoretical simulations of the effect can be rather time consuming with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect use the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories since it is describes the flow of the probability distribution of particles in a continuous manner. The alternative method that is outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours which is much better than was previously achieveable.
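The two-step scheme described above (compute an effective, position-dependent diffusion constant, then evolve the particle distribution with a finite volume solver) can be sketched in one dimension as follows. This is a generic illustration only, not the thesis code: the grid, time step and diffusion profile are assumptions.

import numpy as np

def diffuse_fv(p, D, dx, dt, steps):
    # Explicit finite-volume update of a 1D probability density p(x) under a
    # spatially varying effective diffusion constant D(x). Fluxes are evaluated
    # at cell faces; zero-flux (reflecting) boundaries conserve total probability.
    p = p.copy()
    D_face = 0.5 * (D[:-1] + D[1:])           # D interpolated to the interior faces
    for _ in range(steps):
        flux = -D_face * np.diff(p) / dx      # Fick's law at each interior face
        div = np.zeros_like(p)
        div[:-1] += flux                      # right-face flux of cell i
        div[1:] -= flux                       # same flux is the left-face flux of cell i+1
        p -= dt / dx * div                    # dp/dt = -(F_right - F_left) / dx
    return p

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
D = 0.05 + 0.45 * np.exp(-((x - 0.5) / 0.1) ** 2)   # hotter, more mobile region in the centre
p0 = np.full_like(x, 1.0 / (len(x) * dx))           # uniform initial distribution
p = diffuse_fv(p0, D, dx, dt=0.2 * dx**2 / D.max(), steps=5000)
print(p.sum() * dx)                                  # total probability stays ~1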

Relevance: 10.00%

Publisher:

Abstract:

Infant caregivers in centre-based child care were videotaped as they interacted with the children during routine and non-routine activities. During a subsequent interview, the video provided a stimulus for discussion and reflection on practices. Caregivers were also asked to write about their beliefs on good practice in caring for infants. Transcripts of the interviews and the written statements were then analysed for evidence of naïve and informed beliefs about caregiving. Most caregivers held naïve beliefs and only one caregiver had an informed understanding of professional practice with infants. The usefulness of the analytical framework used in this research is discussed as a means for understanding caregiving practices. It has important implications for approaches to initial professional education of early childhood teachers and for professional development programmes.

Relevance: 10.00%

Publisher:

Abstract:

There is a need in industry for a commodity polyethylene film with controllable degradation properties that will degrade in an environmentally neutral way, for applications such as shopping bags and packaging film. Additives such as starch have been shown to accelerate the degradation of plastic films; however, control of degradation is required so that the film will retain its mechanical properties during storage and use, and then degrade when no longer required. By the addition of a photocatalyst it is hoped that the polymer film will break down with exposure to sunlight. Furthermore, it is desired that the polymer film will degrade in the dark, after a short initial exposure to sunlight. Research has been undertaken into the photo- and thermo-oxidative degradation processes of 25 µm thick LLDPE (linear low density polyethylene) film containing titania from different manufacturers. Films were aged in a suntest or in an oven at 50 °C, and the oxidation product formation was followed using IR spectroscopy. Degussa P25, Kronos 1002, and various organic-modified and doped titanias of the types Sachtleben Hombitan and Huntsman Tioxide incorporated into LLDPE films were assessed for photoactivity. Degussa P25 was found to be the most photoactive with UVA and UVC exposure. Surface modification of titania was found to reduce photoactivity. Crystal phase is thought to be among the most important factors when assessing the photoactivity of titania as a photocatalyst for degradation. Pre-irradiation with UVA or UVC for 24 hours of the film containing 3% Degussa P25 titania, prior to aging in an oven, resulted in embrittlement in ca. 200 days. The multivariate data analysis technique PCA (principal component analysis) was used as an exploratory tool to investigate the IR spectral data. Oxidation products formed in similar relative concentrations across all samples, confirming that titania was catalysing the oxidation of the LLDPE film without changing the oxidation pathway. PCA was also employed to compare rates of degradation in different films. PCA enabled the discovery of water vapour trapped inside cavities formed by titania-catalysed oxidation. Imaging ATR/FTIR spectroscopy with high lateral resolution was used in a novel experiment to examine the heterogeneous nature of oxidation of a model polymer compound caused by the presence of titania particles. A model polymer containing Degussa P25 titania was solvent cast onto the internal reflection element of the imaging ATR/FTIR instrument and the oxidation under UVC was examined over time. Sensitisation of 5 µm domains by titania resulted in areas of relatively high oxidation product concentration. The suitability of transmission IR with a synchrotron light source to the study of polymer film oxidation was assessed at the Australian Synchrotron in Melbourne, Australia. Challenges such as interference fringes and poor signal-to-noise ratio need to be addressed before this can become a routine technique.
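PCA as an exploratory tool for IR spectral data, as used above, can be illustrated with a short sketch on simulated spectra. The wavenumber range, band position and ageing trend below are invented for the example and are not data from the study.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical IR absorbance spectra (rows = aged film samples, columns =
# wavenumber channels); a carbonyl-like band near 1715 cm^-1 grows with ageing time.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(1650, 1800, 200)
ages = np.arange(10)
spectra = np.array([
    t * np.exp(-((wavenumbers - 1715) / 15) ** 2) + 0.02 * rng.standard_normal(200)
    for t in ages
])

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)
print(pca.explained_variance_ratio_)   # PC1 should capture the ageing trend
print(scores[:, 0].round(2))           # PC1 scores increase with ageing time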

Relevance: 10.00%

Publisher:

Abstract:

The billionaires of the world attract significant attention from the media and the public. The popular press is full of books selling formulas on how to become rich. Surprisingly, only a limited number of studies have explored empirically the determinants of extraordinary wealth. Using a large data set, we explore whether globalization and corruption affect extreme wealth accumulation. We find evidence that an increase in globalization increases super-richness. In addition, we also find that an increase in corruption leads to an increase in the creation of super fortunes. This supports the argument that in kleptocracies large sums are transferred into the hands of a small group of individuals.
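The determinants analysis described here amounts, in spirit, to regressing a measure of extreme wealth on globalization and corruption indicators across countries. The sketch below is a generic illustration of that setup; the variable names, numbers and simple OLS specification are assumptions and are not taken from the study.

import numpy as np
import pandas as pd

# Hypothetical country-level data: billionaire wealth share of GDP, a
# globalization index and a corruption index (all column names are placeholders).
df = pd.DataFrame({
    "billionaire_wealth_gdp": [0.02, 0.05, 0.08, 0.03, 0.11],
    "globalization_index":    [55.0, 70.0, 82.0, 60.0, 75.0],
    "corruption_index":       [30.0, 45.0, 60.0, 25.0, 70.0],
})

# Simple OLS via least squares: wealth ~ const + globalization + corruption
X = np.column_stack([np.ones(len(df)), df["globalization_index"], df["corruption_index"]])
y = df["billionaire_wealth_gdp"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "globalization", "corruption"], beta.round(4))))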

Relevance: 10.00%

Publisher:

Abstract:

The most costly operations encountered in pairing computations are those that take place in the full extension field F_{p^k}. At high levels of security, the complexity of operations in F_{p^k} dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller’s algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller’s algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees, where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller’s algorithm that involves new explicit formulas which reduce the number of full extension field operations occurring in an iteration of the Miller loop, resulting in significant speed-ups in most practical situations of between 5 and 30 percent.
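The cost argument above (one multiplication in F_{p^k} is worth many base-field multiplications, so the count of full extension field operations dominates at large embedding degree k) can be made concrete with a toy operation counter. This is an illustration only, not code from the paper; the base prime and the reduction polynomial x^k - 2 (which is not necessarily irreducible) are placeholders.

# Count how many base-field (F_p) multiplications one schoolbook multiplication
# in the extension field F_{p^k} costs, for growing embedding degree k.

p = 2**31 - 1  # a convenient prime standing in for the base field

def mul_ext(a, b, k):
    # Schoolbook product of two F_{p^k} elements (coefficient lists), reduced
    # modulo the toy polynomial x^k - 2. Returns (product, F_p multiplication count).
    prod = [0] * (2 * k - 1)
    count = 0
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
            count += 1
    for m in range(2 * k - 2, k - 1, -1):   # fold x^m down using x^k = 2
        prod[m - k] = (prod[m - k] + 2 * prod[m]) % p
    return prod[:k], count

for k in (2, 6, 12, 24):
    a = list(range(1, k + 1))
    b = list(range(2, k + 2))
    _, muls = mul_ext(a, b, k)
    print(f"k={k:>2}: one F_p^k multiplication = {muls} F_p multiplications")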

Relevance: 10.00%

Publisher:

Abstract:

Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. Existence of twists of degree larger than 2 is a very restrictive criterion, but luckily constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that often the best known methods of constructing pairing-friendly elliptic curves over fields of large prime characteristic produce curves that admit twists of degree 3, 4 or 6. A few papers have presented explicit formulas for the doubling and the addition step in Miller’s algorithm, but the optimizations were all done for the Tate pairing with degree-2 twists, so the main usage of the high-degree twists remained incompatible with more efficient formulas. In this paper we present efficient formulas for curves with twists of degree 2, 3, 4 or 6. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementations over fields of large characteristic.

Relevance: 10.00%

Publisher:

Abstract:

Summary: There are four interactions to consider between energy intake (EI) and energy expenditure (EE) in the development and treatment of obesity. (1) Does sedentariness alter levels of EI or subsequent EE? (2) Do high levels of EI alter physical activity or exercise? (3) Do exercise-induced increases in EE drive EI upwards and undermine dietary approaches to weight management? (4) Do low levels of EI elevate or decrease EE? There is little evidence that sedentariness alters levels of EI. This lack of cross-talk between altered EE and EI appears to promote a positive energy balance (EB). Lifestyle studies also suggest that a sedentary routine actually offers the opportunity for over-consumption. Substantive changes in non-exercise activity thermogenesis are feasible, but not clearly demonstrated. Cross-talk between elevated EE and EI is initially too weak, and takes too long to activate, to seriously threaten dietary approaches to weight management. It appears that substantial fat loss is possible before intake begins to track a sustained elevation of EE. There is more evidence that low levels of EI do lower physical activity levels, in relatively lean men under conditions of acute or prolonged semi-starvation and in dieting obese subjects. During altered EB there are a number of small but significant changes in the components of EE, including (i) sleeping and basal metabolic rate, (ii) the energy cost of weight change as weight is gained or lost, (iii) exercise efficiency, (iv) the energy cost of weight-bearing activities, and (v) during substantive overfeeding, diet composition (fat versus carbohydrate) will influence the energy cost of nutrient storage by ~15%. The responses (i)-(v) above are all “obligatory” responses. Altered EB can also stimulate facultative behavioural responses, as a consequence of cross-talk between EI and EE. Altered EB will lead to changes in the mode, duration and intensity of physical activities. Feeding behaviour can also change. The degree of inter-individual variability in these responses will define the scope within which various mechanisms of EB compensation can operate. The relative importance of “obligatory” versus facultative, behavioural responses, as components of EB control, needs to be defined.

Relevance: 10.00%

Publisher:

Abstract:

Hollywood has dominated the global film business since the First World War. Economic formulas used by governments to assess levels of industry dominance typically measure market share to establish the degree of industry concentration. The business literature reveals that a marketing orientation strongly correlates with superior market performance and that market leaders that possess a set of six superior marketing capabilities are able to continually outperform rival firms. This paper argues that the historical evidence shows that the Hollywood Majors have consistently outperformed rival firms and rival film industries in each of those six marketing capabilities, and that unless rivals develop a similarly integrated and cohesive strategic marketing management approach to the movie business and match the Major studios’ superior capabilities, Hollywood’s dominance will continue. This paper also proposes that, in cyberspace, whilst the Internet does provide a channel that democratises film distribution, the flat landscape of the World Wide Web means that in order to stand out from the clutter of millions of cyber-voices seeking attention, independent film companies need to possess superior strategic marketing management capabilities and develop effective e-marketing strategies to find a niche, attract a loyal online audience and prosper. However, mirroring a recent CIA report forecasting a multi-polar world economy, this paper also argues that potentially serious longer-term rivals are emerging and will increasingly take a larger slice of an expanding global box office as India, China and other major developing economies and their respective cultural channels grow and achieve economic parity with or surpass the advanced western economies. Thus, in terms of global market share over time, Hollywood’s slice of the pie will comparatively diminish in an emerging multi-polar movie business.

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: Support and education for parents faced with managing a child with atopic dermatitis is crucial to the success of current treatments. Interventions aiming to improve parent management of this condition are promising. Unfortunately, evaluation is hampered by a lack of precise research tools to measure change. OBJECTIVES: To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. METHODS: The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. SETTING AND PARTICIPANTS: A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. RESULTS: Satisfactory internal consistency and test-retest reliability were demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; self-efficacy for managing atopic dermatitis and self-reported task performance; and self-efficacy for managing atopic dermatitis and outcome expectations. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks and to managing the child's symptoms and behaviour. Factor analysis of POEEMS resulted in a three-factor structure, with factors relating to independent management of atopic dermatitis by the parent, involving healthcare professionals in management, and involving the child in the management of atopic dermatitis. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. CONCLUSIONS: Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation. Likewise, the validity and reliability of PASECI were confirmed.
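Internal consistency of scales such as PEMS and POEEMS is conventionally summarised with Cronbach's alpha; the short function below illustrates the statistic on made-up Likert responses. Nothing here is the study's data, and the respondent and item counts are arbitrary.

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents x n_items) score matrix:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses from 6 parents to a 4-item scale (1-5 Likert scores)
scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4], [3, 2, 3, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")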

Relevance: 10.00%

Publisher:

Abstract:

Hydrocarbon spills on roads are a major safety concern for the driving public and can have severe cost impacts both on pavement maintenance and on the economy through disruption to services. The time taken to clean up spills and re-open roads in a safe driving condition is an issue of increasing concern given traffic levels on major urban arterials. Thus, the primary aim of the research was to develop a sorbent material that facilitates rapid clean-up of road spills. The methodology involved extensive research into a range of materials (organic, inorganic and synthetic sorbents), comprehensive testing in the laboratory, at scale-up and in the field, and product design (i.e. concept to prototype). The study also applied chemometrics to provide consistent, comparative methods of sorbent evaluation and performance. In addition, sorbent materials at every stage were compared against a commercial benchmark. For the first time, the impact of diesel on asphalt pavement has been quantified and assessed in a systematic way. Contrary to conventional thinking and anecdotal observations, the study determined that the action of diesel on asphalt was quite rapid (i.e. hours rather than weeks or months). This significant finding demonstrates the need to minimise the impact of hydrocarbon spills and the potential application of the sorbent option. To better understand the adsorption phenomenon, surface characterisation techniques were applied to selected sorbent materials (i.e. sand, organo-clay and cotton fibre). Brunauer-Emmett-Teller (BET) and thermal analysis indicated that the main adsorption mechanism for the sorbents occurred on the external surface of the material in the diffusion region (sand and organo-clay) and/or capillaries (cotton fibre). Using environmental scanning electron microscopy (ESEM), it was observed that adsorption by the inter-fibre capillaries contributed to the high uptake of hydrocarbons by the cotton fibre. Understanding the adsorption mechanism for these sorbents provided some guidance and scientific basis for the selection of materials. The study determined that non-woven cotton mats were ideal sorbent materials for clean-up of hydrocarbon spills. The prototype sorbent was found to perform significantly better than the commercial benchmark, displaying the following key properties:
• superior hydrocarbon pick-up from the road pavement;
• high hydrocarbon retention capacity under an applied load;
• adequate field skid resistance post treatment;
• functional and easy to use in the field (e.g. routine handling, transportation, application and recovery);
• relatively inexpensive to produce due to the use of raw cotton fibre and a simple production process;
• environmentally friendly (e.g. renewable materials, non-toxic to environment and operators, and biodegradable); and
• rapid response time (e.g. two minutes total clean-up time compared with thirty minutes for reference sorbents).
The major outcomes of the research project include: a) development of a specifically designed sorbent material suitable for cleaning up hydrocarbon spills on roads; b) submission of a patent application (serial number AU2005905850) for the prototype product; and c) preparation of a Commercialisation Strategy to advance the sorbent product to the next phase (i.e. R&D to product commercialisation).

Relevance: 10.00%

Publisher:

Abstract:

In the terminology of logic programming, current search engines answer Sigma1 queries (formulas of the form ∃x φ(x), where φ is a boolean combination of attributes). Such a query is determined by a particular sequence of keywords input by a user. In order to give more control to users, search engines will have to tackle more expressive queries, namely Sigma2 queries (formulas of the form ∃x ∀y φ(x, y)). The purpose of the talk is to examine which directions could be explored in order to move towards more expressive languages and more powerful search engines, and the benefits that users should expect.
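A concrete way to see the jump from Sigma1 to Sigma2 queries is to compare an "exists a document matching these keywords" question with an "exists an author all of whose documents satisfy a condition" question. The toy data and predicates below are made up purely to illustrate the quantifier structure.

# Sigma1 vs Sigma2 queries over a toy document collection.
docs = [
    {"author": "ana", "keywords": {"logic", "search"}},
    {"author": "ana", "keywords": {"logic", "databases"}},
    {"author": "bob", "keywords": {"search", "ranking"}},
    {"author": "bob", "keywords": {"cooking"}},
]

# Sigma1: exists d such that phi(d) -- what keyword search engines answer today
phi = lambda d: "logic" in d["keywords"] and "search" in d["keywords"]
print(any(phi(d) for d in docs))  # True

# Sigma2: exists an author a such that for all documents d by a, psi(d) holds
psi = lambda d: "logic" in d["keywords"]
authors = {d["author"] for d in docs}
print(any(all(psi(d) for d in docs if d["author"] == a) for a in authors))  # True (ana)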

Relevance: 10.00%

Publisher:

Abstract:

Knowledge of the accuracy of dose calculations in intensity-modulated radiotherapy of the head and neck is essential for clinical confidence in these highly conformal treatments. High dose gradients are frequently placed very close to critical structures, such as the spinal cord, and good coverage of complex-shaped nodal target volumes is important for long-term local control. A phantom study is presented comparing the performance of standard clinical pencil-beam and collapsed-cone dose algorithms to Monte Carlo calculation and three-dimensional gel dosimetry measurement. All calculations and measurements are normalized to the median dose in the primary planning target volume, making this a purely relative study. The phantom simulates tissue, air and bone for a typical neck section and is treated using an inverse-planned 5-field IMRT treatment, similar in character to clinically used class solutions. Results indicate that the pencil-beam algorithm fails to correctly model the relative dose distribution surrounding the air cavity, leading to an overestimate of the target coverage. The collapsed-cone and Monte Carlo results are very similar, indicating that the clinical collapsed-cone algorithm is perfectly sufficient for routine clinical use. The gel measurement shows generally good agreement with the collapsed-cone and Monte Carlo calculated dose, particularly in the spinal cord dose and nodal target coverage, thus giving greater confidence in the use of this class solution.

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: Microvascular free tissue transfer has become increasingly popular in the reconstruction of head and neck defects, but it also has its disadvantages. Tissue engineering allows the generation of neo-tissue for implantation, but these tissues are often avascular. We propose to combine tissue-engineering techniques with flap prefabrication techniques to generate a prefabricated vascularized soft tissue flap. METHODS: Human dermal fibroblasts (HDFs) labeled with fluorescein diacetate were statically seeded onto polylactic-co-glycolic acid-collagen (PLGA-c) mesh. Controls were plain PLGA-c mesh. The femoral artery and vein of the nude rat were ligated and used as a vascular carrier for the constructs. After 4 weeks of implantation, the constructs were assessed by gross morphology, routine histology, Masson trichrome staining, and cell viability determined by green fluorescence. RESULTS: All the constructs maintained their initial shape and dimensions. Angiogenesis was evident in all the constructs, with neo-capillary formation seen within the PLGA-c mesh. HDFs proliferated and filled the inter-yarn spaces of the PLGA-c mesh, while unseeded PLGA-c mesh remained relatively acellular. The cell tracer study indicated that the seeded HDFs remained viable and closely associated with the remaining PLGA-c fibers. Collagen formation was more abundant in the constructs seeded with HDFs. CONCLUSIONS: PLGA-c, enveloped by a cell sheet composed of fibroblasts, can serve as a suitable scaffold for generation of a soft tissue flap. A ligated arteriovenous pedicle can serve as a vascular carrier for the generation of a tissue-engineered vascularized flap.

Relevance: 10.00%

Publisher:

Abstract:

Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
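Two of the simpler kinds of criteria discussed here, false identifications against a reference set of hot spots and ranking consistency across analysis periods, can be sketched in a few lines. The sites, scores, reference set and cut-off below are invented; this is an illustration of the idea, not the paper's data or its exact test definitions.

def top_sites(scores, n):
    # Return the n highest-ranked site ids for a {site: score} mapping.
    return set(sorted(scores, key=scores.get, reverse=True)[:n])

true_hotspots = {"s2", "s5", "s6"}
period1 = {"s1": 3, "s2": 9, "s3": 1, "s4": 4, "s5": 8, "s6": 2, "s7": 7, "s8": 5}
period2 = {"s1": 2, "s2": 10, "s3": 1, "s4": 9, "s5": 7, "s6": 3, "s7": 3, "s8": 4}

flagged = top_sites(period1, 3)
false_pos = flagged - true_hotspots   # flagged but not truly hazardous
false_neg = true_hotspots - flagged   # truly hazardous but missed
consistency = len(top_sites(period1, 3) & top_sites(period2, 3)) / 3  # overlap of flagged sets

print(false_pos, false_neg, f"rank consistency = {consistency:.2f}")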

Relevance: 10.00%

Publisher:

Abstract:

Gibson and Tarrant discuss the range of interdependent factors needed to manage organisational resilience. Over the last few years there has been considerable interest in the idea of resilience across all areas of society. Like any new area or field, this has produced a vast array of definitions, processes, management systems and measurement tools which together have clouded the concept of resilience. Many of us have forgotten that ultimately resilience is not just about ‘bouncing back from adversity’ but is more broadly concerned with adaptive capacity and how we better understand and address uncertainty in our internal and external environments. The basis of organisational resilience is a fundamental understanding and treatment of risk, particularly non-routine or disruption-related risk. This paper presents a number of conceptual models of organisational resilience that we have developed to demonstrate the range of interdependent factors that need to be considered in the management of such risk. These conceptual models illustrate that effective resilience is built upon a range of different strategies that enhance both ‘hard’ and ‘soft’ organisational capabilities. They emphasise the concept that there is no quick fix, no single process, management system or software application that will create resilience.