398 results for DIFFUSION-CONTROLLED CURRENT
Abstract:
Electrochemical processes in mesoporous TiO2-Nafion thin films deposited on indium tin oxide (ITO) electrodes are inherently complex and affected by capacitance, Ohmic iR-drop, RC time-constant phenomena, and by potential- and pH-dependent conductivity. In this study, large-amplitude sinusoidally modulated voltammetry (LASMV) is employed to provide access to almost purely Faradaic current data from the second harmonic components, as well as capacitance and potential-domain information from the fundamental harmonic, for mesoporous TiO2-Nafion film electrodes. The LASMV response has been investigated with and without an immobilized one-electron redox system, (ferrocenylmethyl)trimethylammonium+. The results clearly demonstrate that electron transfer associated with the immobilized ferrocene derivative follows two independent pathways: (i) electron hopping within the Nafion network and (ii) conduction through the TiO2 backbone. The pH effect on the voltammetric response for the TiO2 reduction pathway (ii) can be clearly identified in the second harmonic LASMV response, with the diffusion-controlled ferrocene response (i) acting as a pH-independent reference. Because capacitance currents contribute minimally to the second harmonic, LASMV data of this kind may enable reference-free pH sensing with systems such as the ferrocene derivative studied here.
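To make the harmonic separation concrete, the following minimal sketch (not the authors' code; the synthetic signal, frequencies and bandwidth are invented for illustration) shows how fundamental and second-harmonic components of a large-amplitude AC current record can be band-selected with an FFT:

```python
import numpy as np

# Hedged sketch of FFT-based harmonic separation: transform the measured
# current, keep a narrow band around n*f0, and inverse-transform. The
# "current" below is a synthetic stand-in, not LASMV data.
fs, f0, T = 1000.0, 9.0, 10.0          # sampling rate (Hz), modulation freq, duration (s)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(1)
current = (1.0 * np.sin(2 * np.pi * f0 * t)          # fundamental (capacitive + Faradaic)
           + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)   # weak second harmonic (mostly Faradaic)
           + 0.01 * rng.standard_normal(t.size))     # noise

def harmonic(signal, n, width=2.0):
    """Return the time-domain component of `signal` within `width` Hz of n*f0."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    keep = np.abs(freqs - n * f0) < width
    return np.fft.irfft(np.where(keep, spectrum, 0.0), n=signal.size)

second = harmonic(current, 2)
print(f"recovered 2nd-harmonic amplitude ~ {np.abs(second).max():.3f}")
```

Because double-layer charging behaves nearly linearly, it contributes mainly to the fundamental band, which is why the higher-harmonic bands isolate the Faradaic response.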
Abstract:
The radiation chemistry and grafting of a fluoropolymer, poly(tetrafluoroethylene-co-perfluoropropyl vinyl ether) (PFA), were investigated with the aim of developing a highly stable grafted support for use in solid phase organic chemistry (SPOC). A radiation-induced grafting method was used whereby the PFA was exposed to ionizing radiation to form free radicals capable of initiating graft copolymerization of styrene. To fully investigate this process, both the radiation chemistry of PFA and the grafting of styrene to PFA were examined. Radiation alone was found to have a detrimental effect on PFA when irradiated at 303 K. This was evident from the loss in mechanical properties due to chain scission reactions, which meant that when radiation was used for the grafting reactions, the total radiation dose needed to be kept as low as possible. The radicals produced when PFA was exposed to radiation were examined using electron spin resonance spectroscopy. Both main-chain (–CF2–C•F–CF2–) and end-chain (–CF2–C•F2) radicals were identified. The stability of the majority of the main-chain radicals when the polymer was heated above the glass transition temperature suggested that they were present mainly in the crystalline regions of the polymer, while the end-chain radicals were predominantly located in the amorphous regions. The radical yield at 77 K was lower than that at 303 K, suggesting that cage recombination at low temperatures prevented free radicals from stabilizing. High-speed MAS 19F NMR was used to identify the non-volatile products after irradiation of PFA over a wide temperature range. The major products observed over the irradiation temperature range 303 to 633 K included new saturated chain ends, short fluoromethyl side chains in both the amorphous and crystalline regions, and long branch points. The proportion of the radiolytic products shifted from mainly chain scission products at low irradiation temperatures to extensive branching at higher irradiation temperatures. Calculations of G values revealed that net crosslinking only occurred when PFA was irradiated in the melt. Minor products after irradiation at elevated temperatures included internal and terminal double bonds and CF3 groups adjacent to double bonds. The volatile products after irradiation at 303 K included tetrafluoromethane (CF4) and oxygen-containing species from loss of the perfluoropropyl ether side chains of PFA, as identified by mass spectrometry and FTIR spectroscopy. The chemical changes induced by radiation exposure were accompanied by changes in the thermal properties of the polymer. Changes in the crystallinity and thermal stability of PFA after irradiation were examined using DSC and TGA techniques. The equilibrium melting temperature of untreated PFA was 599 K, as determined by extrapolation of the melting temperatures of imperfectly formed crystals. After low-temperature irradiation, radiation-induced crystallization was prevalent due to scission of strained tie molecules, loss of perfluoropropyl ether side chains, and lowering of the molecular weight, all of which promoted chain alignment and hence higher crystallinity. After irradiation at high temperatures, the presence of short and long branches hindered crystallization, lowering the overall crystallinity. The thermal stability of PFA decreased with increasing radiation dose and temperature due to the introduction of defect groups.
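For readers outside radiation chemistry, a hedged, textbook-level note on the bookkeeping behind the crosslinking conclusion: the radiation chemical yield (G value) of an event X is

$$ G(X) = \frac{\text{number of events of type } X}{100\ \text{eV of absorbed energy}}, $$

and for simultaneous random scission and crosslinking, a gel (net crosslinked network) can form only if \( G(\text{scission}) < 4\,G(\text{crosslinking}) \), consistent with the finding that net crosslinking required irradiation in the melt.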
Styrene was graft copolymerized to PFA using γ-radiation as the initiation source, with the aim of preparing a graft copolymer suitable as a support for SPOC. Various grafting conditions were studied, such as the total dose, dose rate, solvent effects and the addition of nitroxides to create “living” graft chains. The effect of dose rate was examined when grafting styrene vapour to PFA using the simultaneous grafting method. The initial rate of grafting was found to be independent of the dose rate, which implied that the reaction was diffusion controlled. When the styrene was dissolved in various solvents for the grafting reaction, the graft yield was strongly dependent on the type and concentration of the solvent used. The greatest graft yield was observed when the solvent swelled both the grafted layers and the substrate. Microprobe Raman spectroscopy was used to map the penetration of the graft into the substrate. The grafted layer was found to contain both poly(styrene) (PS) and PFA and became thicker with increasing radiation dose and graft yield, which showed that grafting began at the surface and progressively penetrated the substrate as the grafted layer swelled. The molecular weight of the grafted PS was estimated by measuring the molecular weight of the non-covalently bonded homopolymer formed in the grafted layers using SEC. The molecular weight of the occluded homopolymer was an order of magnitude greater than that of the free homopolymer formed in the surrounding solution, suggesting that the high viscosity in the grafted regions led to long PS grafts. When a nitroxide-mediated free radical polymerization was used, grafting occurred within the substrate rather than on the surface, due to diffusion of styrene into the substrate at the high temperatures needed for the reaction to proceed. Loading tests were used to measure the capacity of the PS graft to be functionalized with aminomethyl groups and then further derivatized. These loading tests showed that samples grafted in a solution of styrene and methanol had superior loading capacity over samples grafted using other solvents, due to the shallow penetration and hence better accessibility of the graft when methanol was used as the solvent.
Abstract:
The microstructures of quenched melts of Y123 and Y123 + 15-20 mol% Y211 samples with PtO2 and CeO2 additives have been examined with optical microscopy, Scanning Electron Microscopy (SEM), Energy Dispersive X-ray Spectrometry (EDS) and X-ray Diffractometry (XRD). Significantly higher temperatures are required for the formation of dendritic or lamellar eutectic patterns throughout the samples with PtO2 and CeO2 additives than in samples without additives. The BaCuO2 (BC1) phase appears first in solid form and, instead of melting rapidly, dissolves or decomposes slowly in the oxygen-depleted melt. PtO2 and CeO2 additives slow the dissolution or decomposition of BC1 or shift it to higher temperatures. A larger fraction of BC1 in solid form explains why samples with additives have higher viscosities, and hence lower diffusivities, than samples without additives. The Y solubility is also reduced to about half its value in samples without additives. The mechanism that limits Ostwald ripening of the Y211 particles is correlated with the morphology of the quenched partial melt: it is diffusion-controlled for a finely mixed morphology and interface-controlled when the melt quenches into dendritic or lamellar eutectic patterns. The change in the morphology of the Y211 particles from blocky to acicular is related to an equivalent undercooling of the Y-Ba-Cu-O partial melt, particularly through the crystallization of BC1.
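As a hedged aside (standard coarsening theory, not derived in the abstract), the two ripening regimes mentioned here are conventionally distinguished by their Lifshitz–Slyozov–Wagner growth laws for the mean particle radius \( \bar r \):

$$ \bar r^{\,3} - \bar r_0^{\,3} = K_d\, t \quad \text{(diffusion-controlled)}, \qquad \bar r^{\,2} - \bar r_0^{\,2} = K_i\, t \quad \text{(interface-controlled)}, $$

where \( K_d \) scales with the solute diffusivity and solubility in the melt. The higher viscosity (lower diffusivity) and roughly halved Y solubility reported for the additive-containing samples therefore act directly to suppress the diffusion-controlled rate constant.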
Abstract:
The formation of Ge quantum dot arrays by deposition from a low-temperature plasma environment is investigated by kinetic Monte Carlo numerical simulation. It is demonstrated that balancing the Ge influx from the plasma against surface diffusion provides effective control of the surface processes and can result in the formation of very small, densely packed quantum dots. In the supply-controlled mode, a continuous layer is formed, followed by the usual Stranski-Krastanow fragmentation with a nanocluster size of 10 nm. In the diffusion-controlled mode, with the Ge supply exceeding the surface diffusion rate, nanoclusters with a characteristic size of 3 nm are formed. Higher temperatures change the mode to supply-controlled and thus encourage formation of the continuous layer that then fragments into an array of large nanoclusters. A high deposition rate, easily accessible using plasma techniques, changes the mode to diffusion-controlled and thus encourages formation of a dense array of small nanoislands.
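The competition described here can be reproduced with a very small kinetic Monte Carlo model. The sketch below is a hedged toy (square lattice, irreversible sticking, one-atom critical nucleus, invented rates), not the paper's simulation: deposition at flux F competes with adatom hopping at rate D, and lowering D/F (oversupply, the diffusion-controlled mode) produces many more, smaller islands.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 50          # lattice edge (periodic boundaries)
F = 1.0         # deposition rate per site
D = 1.0e4       # adatom hop rate; the ratio D/F selects the growth mode
target = 0.12   # stop after this coverage (monolayers)

island = np.zeros((L, L), dtype=bool)   # immobile, aggregated material
adatom = np.zeros((L, L), dtype=bool)   # mobile single adatoms
mobile = []                             # positions of mobile adatoms

def nbrs(x, y):
    return (((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L))

def freeze(x, y):
    """Irreversibly attach the atom at (x, y), chaining to adjacent adatoms."""
    if adatom[x, y]:
        adatom[x, y] = False
        mobile.remove((x, y))
    island[x, y] = True
    for u, v in nbrs(x, y):
        if adatom[u, v]:
            freeze(u, v)

def check_stick(x, y):
    """Freeze the adatom at (x, y) if it touches an island or another adatom."""
    if any(island[u, v] or adatom[u, v] for u, v in nbrs(x, y)):
        freeze(x, y)

t, deposited = 0.0, 0
while deposited < target * L * L:
    r_dep, r_hop = F * L * L, D * len(mobile)
    t += rng.exponential(1.0 / (r_dep + r_hop))
    if rng.random() * (r_dep + r_hop) < r_dep:
        # deposition onto a random empty site
        while True:
            x, y = rng.integers(L), rng.integers(L)
            if not island[x, y] and not adatom[x, y]:
                break
        adatom[x, y] = True
        mobile.append((x, y))
        deposited += 1
        check_stick(x, y)
    else:
        # hop one random adatom to a random neighbour (always empty here,
        # because adatoms stick as soon as they become adjacent to material)
        i = rng.integers(len(mobile))
        x, y = mobile[i]
        u, v = nbrs(x, y)[rng.integers(4)]
        adatom[x, y] = False
        adatom[u, v] = True
        mobile[i] = (u, v)
        check_stick(u, v)

def count_islands():
    """Count connected clusters of aggregated sites (4-connectivity)."""
    seen = np.zeros((L, L), dtype=bool)
    n = 0
    for x in range(L):
        for y in range(L):
            if island[x, y] and not seen[x, y]:
                n += 1
                stack, seen[x, y] = [(x, y)], True
                while stack:
                    a, b = stack.pop()
                    for u, v in nbrs(a, b):
                        if island[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            stack.append((u, v))
    return n

print(f"coverage {island.sum() / L**2:.3f}, islands: {count_islands()}, "
      f"mobile adatoms left: {len(mobile)}")
```

Re-running with a larger D/F (equivalently, a lower deposition rate or a higher temperature) drives the system toward the supply-controlled regime of fewer, larger islands, mirroring the temperature and deposition-rate trends reported.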
Abstract:
The single electron transfer-nitroxide radical coupling (SET-NRC) reaction has been used to produce multiblock polymers with high molecular weights in under 3 min at 50 °C by coupling a difunctional telechelic polystyrene (Br-PSTY-Br) with a dinitroxide. The well-known combination of dimethyl sulfoxide as solvent and Me6TREN as ligand facilitated the in situ disproportionation of Cu(I)Br to the highly active nascent Cu(0) species. This SET reaction allowed polymeric radicals to be rapidly formed from their corresponding halide end-groups. Trapping of these carbon-centred radicals by dinitroxides at close to diffusion-controlled rates resulted in high-molecular-weight multiblock polymers. Our results showed that the disproportionation of Cu(I) was critical in obtaining these ultrafast reactions, and confirmed that activation was primarily through Cu(0). We took advantage of the reversibility of the NRC reaction at elevated temperatures to decouple the multiblock back to the original PSTY building block by capping the chain-ends with mono-functional nitroxides. These alkoxyamine end-groups were further exchanged with an alkyne mono-functional nitroxide (TEMPO–≡) and ‘clicked’ by a Cu(I)-catalyzed azide/alkyne cycloaddition (CuAAC) reaction with N3–PSTY–N3 to reform the multiblocks. This final ‘click’ reaction, even after the consecutive decoupling and nitroxide-exchange reactions, still produced high-molecular-weight multiblocks efficiently. These SET-NRC reactions could find applications in re-usable plastics and possibly in self-healing materials.
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development of a controlled drug delivery device may be facilitated enormously by mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device depend on the device’s physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve solvent penetration into the polymeric device, swelling of the polymer, polymer erosion and drug diffusion out of the device, but all share two common key features. The first is that solvent penetration into the polymer causes the polymer to undergo a transition from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary whose speed is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient that depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is explained thoroughly in Section 3.2. The small-time asymptotic analyses in Sections 5.3.1, 6.3.1 and 7.2.1 show that these models exhibit the non-Fickian behaviour referred to as Case II diffusion, with an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour reported in the literature. The knowledge obtained from investigating these models can help in developing more complex multi-layered drug delivery devices that achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing solvent penetration into the polymer also arises in melting and freezing problems, which have classically been modelled as the one-phase Stefan problem. The classical one-phase Stefan problem has unphysical singularities at the complete-melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, the small-time asymptotic analysis in Section 3.3 shows that the unphysical singularities of the classical problem at the complete-melting time are regularised, and that the small-time behaviour of the problem with kinetic undercooling differs from that of the classical one-phase Stefan problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. We therefore investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. Including kinetic undercooling and surface tension in the melting problem reveals more about how the unphysical singularities of the classical one-phase Stefan problem are regularised. This investigation gives a better understanding of the melting of a particle, and contributes to the current body of knowledge on melting and freezing due to heat conduction.
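For concreteness, a generic non-dimensional statement of the radially symmetric one-phase melting problem with both regularisations discussed here (a hedged sketch; the thesis's precise scalings and sign conventions may differ) is: find the temperature u(r, t) and interface position s(t) such that

$$ \frac{\partial u}{\partial t} = \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial u}{\partial r}\right) \quad \text{in the melted region}, $$

with the interface conditions

$$ u\big(s(t),t\big) = -\frac{\sigma}{s(t)} - \epsilon\,\dot s(t), \qquad \dot s(t) = -\frac{\partial u}{\partial r}\bigg|_{r=s(t)}, $$

where σ is the surface-tension (Gibbs–Thomson) coefficient and ε the kinetic-undercooling coefficient. Setting σ = 0 gives the kinetic-undercooling problem of Section 3.3; setting ε = 0 gives the surface-tension-only problem with finite-time blow-up; and the classical Stefan problem has σ = ε = 0.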
Abstract:
A Positive Buck-Boost (PBB) converter is a known DC-DC converter that can operate in step-up and step-down modes. Unlike the Buck, Boost, and Inverting Buck-Boost converters, the inductor current of a PBB can be controlled independently of its voltage conversion ratio. In other words, the inductor of a PBB can be utilised as an energy storage unit in addition to its main function of energy transfer. In this paper, the capability of the PBB to store energy is exploited to achieve robustness against input voltage fluctuations and output current changes. The control strategy is designed to maintain acceptable accuracy, affordability, and simplicity. To improve the efficiency of the system, a Smart Load Controller (SLC) is proposed: with the SLC, extra current is stored when a sudden load change occurs; otherwise, only a little extra current is stored.
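To make the independence claim concrete, consider the standard averaged continuous-conduction-mode relations for a non-inverting (positive) buck-boost with buck-leg duty cycle d1 and boost-leg duty cycle d2 (hedged textbook relations, not taken from the paper):

$$ \frac{V_o}{V_{in}} = \frac{d_1}{1 - d_2}, \qquad I_L = \frac{I_o}{1 - d_2}, $$

so a given conversion ratio fixes only the ratio d1/(1 − d2): different (d1, d2) pairs realise the same V_o/V_in with different average inductor currents I_L, which is the degree of freedom the proposed controller exploits for energy storage.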
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
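The core Krylov idea is compact enough to sketch. The following is a minimal, hedged illustration of the plain Lanczos approximation to f(A)b with f(t) = t^(-1/2), used here for GMRF-style sampling x = A^(-1/2)z; the toy precision matrix (identity plus a 1D Laplacian), the sizes and the fixed iteration count are invented, and the thesis's refinements (restarting, shift-and-invert, extended Krylov subspaces, stopping criteria) are not shown:

```python
import numpy as np

def lanczos_f_of_A_b(matvec, b, m, f):
    """m-step Lanczos approximation of f(A) b for symmetric A.

    Returns ||b|| * V_m f(T_m) e_1. No reorthogonalisation is performed,
    so m should stay moderate for this simple sketch.
    """
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = matvec(V[:, j])
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)             # f(T) via eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return beta0 * (V @ fT_e1)

# Toy precision matrix: A = I + (1D Laplacian), symmetric positive definite.
n = 400
A = (np.diag(3.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

rng = np.random.default_rng(42)
z = rng.standard_normal(n)                       # z ~ N(0, I)
x = lanczos_f_of_A_b(lambda v: A @ v, z, m=40, f=lambda t: t ** -0.5)

# Dense reference: x_exact = A^(-1/2) z via a full eigendecomposition.
w, Q = np.linalg.eigh(A)
x_exact = Q @ ((w ** -0.5) * (Q.T @ z))
print("relative error:", np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
```

Only matrix-vector products with A are required, which is what makes the approach attractive when A is large and sparse and a Cholesky factorisation is out of reach.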
Abstract:
AIMS: Alcohol use disorders and depression co-occur frequently and are associated with poorer outcomes than when either condition occurs alone. The present study (Depression and Alcohol Integrated and Single-focused Interventions; DAISI) aimed to compare the effectiveness of brief, single-focused and integrated psychological interventions for the treatment of coexisting depression and alcohol use problems. METHODS: Participants (n = 284) with current depressive symptoms and hazardous alcohol use were assessed and randomly allocated to one of four individually delivered interventions: (i) a brief intervention only (a single 90-minute session) with an integrated focus on depression and alcohol, or the same brief intervention followed by a further nine 1-hour sessions with (ii) an alcohol focus; (iii) a depression focus; or (iv) an integrated focus. Follow-up assessments occurred 18 weeks after baseline. RESULTS: Compared with the brief intervention, 10 sessions were associated with greater reductions in average drinks per week, average drinking days per week and maximum consumption on 1 day. No effect of treatment duration was found for depression outcomes. Compared with single-focused interventions, integrated treatment was associated with a greater reduction in drinking days and level of depression. For men, the alcohol-focused rather than depression-focused intervention was associated with a greater reduction in average drinks per day and drinks per week and an increased level of general functioning. Women showed greater improvements on each of these variables when they received depression-focused rather than alcohol-focused treatment. CONCLUSIONS: Integrated treatment may be superior to single-focused treatment for coexisting depression and alcohol problems, at least in the short term. Gender differences between single-focused depression and alcohol treatments warrant further study.
Abstract:
The technological environment in which Australian SMEs operate can best be described as dynamic and vital. The rate of technological change presents the SME owner/manager with a complex and challenging operational context. Wireless applications are being developed that provide mobile devices with Internet content and e-business services. In Australia, the adoption of e-commerce by large organisations has been relatively high; the same cannot be said for SMEs, where adoption has been slower than in other developed countries. In contrast, mobile telephone adoption and diffusion by SMEs is relatively high. This exploratory study identifies the attitudes, perceptions and issues of regional SME owner/managers across a range of industry sectors regarding mobile data technologies. The major issues include the sector the firm belongs to, the firm's current adoption status, the level of mistrust of the IT industry, the cost of the technologies, and the applications and attributes of the technologies.
Abstract:
Stream ciphers are encryption algorithms used for ensuring the privacy of digital telecommunications. They have been widely used for encrypting military communications, satellite communications and pay TV, and for voice encryption on both fixed-line and wireless networks. The current multi-year European project eSTREAM, which aims to select stream ciphers suitable for widespread adoption, reflects the importance of this area of research. Stream ciphers consist of a keystream generator and an output function. Keystream generators produce a sequence that appears to be random, which is combined with the plaintext message using the output function. Most commonly, the output function is binary addition modulo two. Cryptanalysis of these ciphers focuses largely on analysis of the keystream generators and of relationships between the generator and the keystream it produces. Linear feedback shift registers (LFSRs) are widely used components in building keystream generators, as the sequences they produce are well understood. Many types of attack have been proposed for breaking various LFSR-based stream ciphers. A recent attack type is the algebraic attack. Algebraic attacks transform the problem of recovering the key into the problem of solving a multivariate system of equations, whose solution eventually recovers the internal state bits or the key bits. This type of attack has been shown to be effective on a number of regularly clocked LFSR-based stream ciphers. In this thesis, algebraic attacks are extended to a number of well-known stream ciphers in which at least one LFSR is irregularly clocked. Applying algebraic attacks to such ciphers has previously been discussed in the open literature only for LILI-128. In this thesis, algebraic attacks are first applied to keystream generators using stop-and-go clocking. Four ciphers belonging to this group are investigated: the Beth-Piper stop-and-go generator, the alternating step generator, the Gollmann cascade generator and the eSTREAM candidate Pomaranch cipher. It is shown that algebraic attacks are very effective on the first three of these ciphers. Although no effective algebraic attack was found for Pomaranch, the algebraic analysis led to some interesting findings, including weaknesses that may be exploited in future attacks. Algebraic attacks are then applied to keystream generators using (p, q) clocking. Two well-known examples of such ciphers, the step1/step2 generator and the self-decimated generator, are investigated. Algebraic attacks are shown to be very powerful in recovering the internal state of these generators. A more complex clocking mechanism than either stop-and-go or (p, q) clocking is mutual clock control, in which the LFSRs control the clocking of each other. Four well-known stream ciphers belonging to this group are investigated with respect to algebraic attacks: the bilateral stop-and-go generator, the A5/1 stream cipher, the Alpha 1 stream cipher, and the more recent eSTREAM proposal, the MICKEY stream ciphers. Some theoretical results regarding the complexity of algebraic attacks on these ciphers are presented. The algebraic analysis of these ciphers showed that, in general, it is hard to generate the system of equations required for an algebraic attack on these ciphers.
As the algebraic attack could not be applied directly to these ciphers, a different approach was used, namely guessing some bits of the internal state in order to reduce the degree of the equations. Finally, an algebraic attack on Alpha 1 that requires only 128 bits of keystream to recover the 128 internal state bits is presented. An essential process associated with stream cipher proposals is key initialization. Many recently proposed stream ciphers use an algorithm to initialize the large internal state with a smaller key and possibly publicly known initialization vectors. The effect of key initialization on the performance of algebraic attacks is also investigated in this thesis; the relationship between the two has not previously been investigated in the open literature. The investigation is conducted on Trivium and Grain-128, two eSTREAM ciphers. It is shown that the key initialization process affects the success of algebraic attacks, unlike other conventional attacks. In particular, the key initialization process allows an attacker first to generate a small number of equations of low degree and then to perform an algebraic attack using multiple keystreams. The effect of the number of iterations performed during key initialization is investigated, and it is shown that both the number of iterations and the maximum number of initialization vectors to be used with one key should be chosen carefully. Some experimental results on Trivium and Grain-128 are then presented. Finally, the security with respect to algebraic attacks of the well-known LILI family of stream ciphers, including the unbroken LILI-II, is investigated. These are irregularly clock-controlled nonlinear filter generators. While the structure is defined for the LILI family, a particular parameter choice defines a specific instance; two well-known instances are LILI-128 and LILI-II. The security of these and other instances is investigated to identify which instances are vulnerable to algebraic attacks. The feasibility of recovering the key bits using algebraic attacks is then investigated for both LILI-128 and LILI-II. Algebraic attacks that recover the internal state with less effort than exhaustive key search are possible for LILI-128 but not for LILI-II. Given the internal state at some point in time, the feasibility of recovering the key bits is also investigated, showing that the parameters used in the key initialization process, if poorly chosen, can lead to key recovery using algebraic attacks.
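As a hedged illustration of the objects analysed throughout (a toy, not any of the actual ciphers named above), the sketch below implements a small nonlinearly filtered LFSR. Because the state update is linear over GF(2), every keystream bit is a fixed, low-degree multivariate polynomial in the unknown initial state bits, which is exactly the system of equations an algebraic attack collects and solves.

```python
# Toy nonlinearly filtered LFSR over GF(2). The register length, feedback
# taps and filter function are invented for illustration; they do not match
# LILI-128, LILI-II, A5/1 or any other cipher discussed in the thesis.

def lfsr_step(state, taps):
    """One Fibonacci-LFSR step: drop state[0], shift in the XOR of the taps."""
    feedback = 0
    for t in taps:
        feedback ^= state[t]
    state.pop(0)
    state.append(feedback)

def keystream(state, taps, filt, n):
    """Produce n keystream bits; each bit is a nonlinear filter of the state."""
    out = []
    for _ in range(n):
        out.append(filt(state))
        lfsr_step(state, taps)
    return out

# Degree-2 filter: each keystream bit is a quadratic polynomial in the
# initial state bits, so an algebraic attack solves a quadratic system.
filt = lambda s: s[0] ^ (s[2] & s[5]) ^ (s[3] & s[7])

print(keystream([1, 0, 0, 1, 0, 1, 1, 0], taps=[0, 3, 5, 7], filt=filt, n=16))
```

With irregular clocking of the kind studied in the thesis, which state bits feed each output position becomes key-dependent, and this is precisely what frustrates direct equation generation and motivates the guessing strategy described above.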
Abstract:
Background: An estimated 285 million people worldwide have diabetes and its prevalence is predicted to increase to 439 million by 2030. For the year 2010, it is estimated that 3.96 million excess deaths in the age group 20-79 years are attributable to diabetes around the world. Self-management is recognised as an integral part of diabetes care. This paper describes the protocol of a randomised controlled trial of an automated interactive telephone system aiming to improve the uptake and maintenance of essential diabetes self-management behaviours.

Methods/Design: A total of 340 individuals with type 2 diabetes will be randomised, either to the routine care arm, or to the intervention arm in which participants receive the Telephone-Linked Care (TLC) Diabetes program in addition to their routine care. The intervention requires the participants to telephone the TLC Diabetes phone system weekly for 6 months. They receive the study handbook and a glucose meter linked to a data uploading device. The TLC system consists of a computer with software designed to provide monitoring, tailored feedback and education on key aspects of diabetes self-management, based on answers voiced or entered during the current or previous conversations. Data collection is conducted at baseline (Time 1), 6-month follow-up (Time 2), and 12-month follow-up (Time 3). The primary outcomes are glycaemic control (HbA1c) and quality of life (Short Form-36 Health Survey version 2). Secondary outcomes include anthropometric measures, blood pressure, blood lipid profile, psychosocial measures as well as measures of diet, physical activity, blood glucose monitoring, foot care and medication taking. Information on utilisation of healthcare services including hospital admissions, medication use and costs is collected. An economic evaluation is also planned.

Discussion: Outcomes will provide evidence concerning the efficacy of a telephone-linked care intervention for self-management of diabetes. Furthermore, the study will provide insight into the potential for more widespread uptake of automated telehealth interventions, globally.
Abstract:
Increasingly, large amounts of public and private money are being invested in education and, as a result, schools are becoming more accountable to stakeholders for this financial input. In terms of the curriculum, governments worldwide are frequently tying school funding to students' and schools' academic performances, which are monitored through high-stakes testing programs. To accommodate the resultant pressures from these testing initiatives, many principals are re-focussing their school's curriculum on the testing requirements. Such a re-focussing, which was examined critically in this thesis, constituted an externally facilitated rapid approach to curriculum change. In line with previously enacted change theories and recommendations arising from them, curriculum change in schools has tended to be a fairly slow, considered, collaborative process facilitated internally by a deputy principal (curriculum). However, theoretically based research has shown that such a process has often proved difficult and very rarely successful. The present study reports and theorises the experiences of an externally facilitated process that emerged from a practitioner model of change. This case study of the development of the controlled rapid approach to curriculum change began by establishing the reasons three principals initiated curriculum change and why they then engaged an outsider to facilitate the process. It also examined this particular change process from the perspectives of the research participants. The investigation led to the revision of the practitioner model as used in the three schools and challenged current thinking about the process of school curriculum change. The thesis aims to offer principals and the wider education community an alternative model for consideration when undertaking curriculum change. Finally, the thesis warns that, in the longer term, the application of the study's revised model (the Controlled Rapid Approach to Curriculum Change [CRACC] Model) may have less than desirable educational consequences.
Abstract:
In many product categories of durable goods, such as TVs, PCs, and DVD players, the largest component of sales is generated by consumers replacing existing units. Aggregate sales models proposed by diffusion-of-innovation researchers for the replacement component of sales have incorporated several different replacement distributions, such as the Rayleigh, Weibull, Truncated Normal and Gamma. Although these alternative replacement distributions have been tested using both time series sales data and individual-level actuarial “life-tables” of replacement ages, there is no consensus on which distributions are more appropriate for modelling replacement behaviour. In the current study we are motivated to develop a new “modified gamma” distribution for two reasons. First, we recognise that replacements have two fundamentally different drivers – those forced by failure and early, discretionary replacements. The replacement distribution for each of these drivers is expected to be quite different. Second, we observed a poor fit of other distributions to our empirical data. We conducted a survey of 8,077 households to empirically examine models of replacement sales for six electronic consumer durables – TVs, VCRs, DVD players, digital cameras, and personal and notebook computers. These data allow us to construct individual-level “life-tables” for replacement ages. We demonstrate that the new modified gamma model fits the empirical data better than existing models for all six products, using both a primary and a hold-out sample.
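Since the abstract does not give the functional form of the modified gamma distribution, the sketch below only illustrates the two-driver idea that motivates it: replacement ages drawn from a mixture of early discretionary replacements and failure-forced replacements, with all component families and parameters invented for illustration.

```python
import numpy as np
from scipy import stats

# Hedged illustration of a two-driver replacement-age model: a share p_disc
# of replacements are early and discretionary, the rest are forced by
# failure. The distributions and parameters below are invented; the
# thesis's "modified gamma" is a different, specific construction.
p_disc = 0.4
discretionary = stats.gamma(a=2.0, scale=1.5)    # early replacements (years)
forced = stats.weibull_min(c=3.0, scale=8.0)     # failure-driven (years)

ages = np.linspace(0.0, 15.0, 151)
mixture_pdf = p_disc * discretionary.pdf(ages) + (1 - p_disc) * forced.pdf(ages)

mean_age = p_disc * discretionary.mean() + (1 - p_disc) * forced.mean()
print(f"mean replacement age under the toy mixture: {mean_age:.2f} years")
```

Fitting such a mixture to life-table data makes the two drivers visible as separate modes, which is the qualitative feature a single standard distribution struggles to capture.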