956 results for Madelung constant
Abstract:
This paper presents a new multi-output DC/DC converter topology that has step-up and step-down conversion capabilities. In this topology, several output voltages can be generated for use in different applications, such as multilevel converters with diode-clamped topology or power supplies with several voltage levels. Steady-state and dynamic equations of the proposed multi-output converter have been developed, which can be used for steady-state and transient analysis. Two control techniques, based on constant and dynamic hysteresis band height control, have been proposed for this topology to address different applications. Simulations have been performed for different operating modes and load conditions to verify the proposed topology and its control techniques. Additionally, a laboratory prototype has been designed and implemented to verify the simulation results.
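The hysteresis-band idea behind the proposed controllers can be illustrated with a toy simulation. This is a hedged sketch, not the paper's implementation: the first-order plant model, all parameter values, and the function name `simulate_hysteresis` are hypothetical, and only the constant-band case is shown.

```python
# Illustrative sketch (not the paper's converter model): constant
# hysteresis-band control of one output, modelled as a first-order system.
# All names and parameter values are hypothetical.

def simulate_hysteresis(v_ref=5.0, band=0.2, v_in=12.0, steps=5000, dt=1e-5, tau=1e-3):
    """Bang-bang control: switch on below the lower band edge, off above the upper."""
    v, switch_on = 0.0, True
    for _ in range(steps):
        if v < v_ref - band / 2:
            switch_on = True          # output sagged below the band: close the switch
        elif v > v_ref + band / 2:
            switch_on = False         # output rose above the band: open the switch
        drive = v_in if switch_on else 0.0
        v += (drive - v) * dt / tau   # first-order response toward the driven level
    return v

print(simulate_hysteresis())          # settles near v_ref, rippling within the band
```

A dynamic-band variant would adjust `band` on the fly (e.g. with load current), trading switching frequency against ripple.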
Abstract:
The critical impact of innovation on national and global economies has been discussed at length in the literature. Economic development requires the diffusion of innovations into markets. It has long been recognised that economic growth and development depend upon a constant stream of innovations. Governments have been keenly aware of the need to ensure this flow does not dry to a trickle and have introduced many and varied industry policies and interventions to assist in seeding, supporting and diffusing innovations. In Australia, as in many countries, Government support for the transfer of knowledge, especially from publicly funded research, has resulted in the creation of knowledge exchange intermediaries. These intermediaries are themselves service organisations, seeking innovative service offerings for their markets. The choice for most intermediaries is generally a dichotomous one, between market-pull and technology-push knowledge exchange programmes. In this article, we undertake a case analysis of one such innovative intermediary and its flagship programme. We then compare this case with other successful intermediaries in Europe. We put forward a research proposition that the design of intermediary programmes must match the service type they offer. That is, market-pull programmes require market-pull design, in close collaboration with industry, whereas technology programmes can be problem-solving innovations where demand is latent. The discussion reflects the need for an evolution in knowledge transfer policies and programmes beyond the first generation ushered in with the US Bayh-Dole Act (1980) and Stevenson-Wydler Act (1984). The data analysed is a case study comparison of market-pull and technology-push programmes, focusing on primary and secondary socio-economic benefits (using both Australian and international comparisons).
Abstract:
The main contribution of this paper is the decomposition/separation of composite induction motor loads from measurements at a system bus. At power system transmission buses, the load is represented by static and dynamic components. Induction motors are considered the main dynamic loads, and in practice many and various induction motors contribute at major transmission buses. At an industrial bus in particular, most of the load is of the dynamic type. Rather than trying to extract models of many individual machines, this paper seeks to identify three groups of induction motors to represent the dynamic loads. Three groups of induction motors are used to characterize the load: the small group (4 kW to 11 kW), the medium group (15 kW to 180 kW) and the large group (above 630 kW). First, a composite load is formed from these groups with a different percentage contribution from each group. Each group's percentage contribution is then decomposed from the composite model using a least-squares algorithm. At commercial and residential buses in a power system, the static load percentage is higher than the dynamic load percentage. To apply this theory to such buses, it is good practice to represent the total load as a combination of composite motor loads, constant impedance loads and constant power loads. To validate the theory, 24 hours of Sydney West data is decomposed according to the three groups of motor models.
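The decomposition step can be sketched in a few lines, under the assumption (mine, not necessarily the paper's) that the measured composite response is a weighted sum of known group responses; the exponential response curves below are synthetic placeholders for the three motor-group models, not real machine dynamics.

```python
# Hedged sketch of the least-squares decomposition: recover each motor
# group's percentage contribution from a composite bus measurement.
# The response curves are synthetic placeholders, not real motor models.
import numpy as np

t = np.linspace(0.0, 1.0, 200)
# Hypothetical post-disturbance responses of the three motor groups.
A = np.column_stack([
    np.exp(-t / 0.05),   # small group (fast-decaying response)
    np.exp(-t / 0.20),   # medium group
    np.exp(-t / 0.80),   # large group (slow-decaying response)
])
true_w = np.array([0.2, 0.3, 0.5])   # contributions to be recovered
composite = A @ true_w               # what would be measured at the bus

w, *_ = np.linalg.lstsq(A, composite, rcond=None)
print(np.round(w, 3))                # recovered group contributions
```

With noisy field data (such as the Sydney West measurements), the same `lstsq` call returns the best fit in the least-squares sense rather than an exact recovery.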
Abstract:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis is then concluded with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave) and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods.
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple time scale approach to calculating the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
Typically, the theoretical simulations of the effect can be rather time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method that is outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
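The second step described above (evolving the particle probability distribution with the finite volume method once an effective diffusion constant is known) can be sketched as follows; the grid size, diffusion constant and time step are illustrative values, not those used in the thesis.

```python
# Minimal 1-D finite-volume diffusion sketch: once an effective diffusion
# constant D is in hand, the probability density is evolved by exchanging
# fluxes between cells. Values of n, D and dt are illustrative only.
import numpy as np

n, dx, D = 100, 1.0, 0.5
dt = 0.4 * dx * dx / D                 # explicit stability: dt <= dx^2 / (2 D)
p = np.zeros(n)
p[n // 2] = 1.0                        # all probability starts in one cell

for _ in range(200):
    flux = -D * np.diff(p) / dx        # Fick's law at the n-1 interior faces
    p[:-1] -= dt * flux / dx           # each face drains its left cell ...
    p[1:]  += dt * flux / dx           # ... and feeds its right cell
print(p.sum())                         # total probability is conserved
```

Because fluxes are exchanged between adjacent cells, probability is conserved to machine precision, which is the key property that makes the finite-volume form preferable to tracking individual trajectories.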
Abstract:
Cohesion as a term connotes attraction, unity, and commonness amongst discrete entities. Considering cohesion as a concept is timely with the recent rise of network culture, which comes with both subtle and radical changes in how people connect with, position themselves in relation to, and understand other constituents of society (cf. Varnelis; Castells; Jenkins et al.). Such dis- and inter-connections signify an imminent and immanent epistemological challenge we must confront: how can we understand inherently multi-faceted subjects, components of which are in constant transformation? For researchers, disciplinary complexity is one of the main implications of this situation. While disciplinary integration may be an effective or vital component in pursuit of knowledge (cf. Nicolescu) it may also impart significant conceptual and pragmatic conflicts. What are possible ways to coalesce multiple dimensions of reality that can lead to conceptually cohesive and useful knowledge production? This issue of M/C Journal attempts to answer this question by looking at different perspectives on the notion of cohesion across topical and disciplinary boundaries.
Abstract:
Although the "slow" phase of pulmonary oxygen uptake (Vo2) appears to represent energetic processes in contracting muscle, electromyographic evidence tends not to support this. The present study assessed normalized integrated electromyographic (NIEMG) activity in eight muscles that act about the hip, knee and ankle during 8 min of moderate (
Abstract:
We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.
Abstract:
Objective: To investigate the acute effects of isolated eccentric and concentric calf muscle exercise on Achilles tendon sagittal thickness. ---------- Design: Within-subject, counterbalanced, mixed design. ---------- Setting: Institutional. ---------- Participants: 11 healthy, recreationally active male adults. ---------- Interventions: Participants performed an exercise protocol, which involved isolated eccentric loading of the Achilles tendon of a single limb and isolated concentric loading of the contralateral, both with the addition of 20% bodyweight. ---------- Main outcome measurements: Sagittal sonograms were acquired prior to, immediately following and 3, 6, 12 and 24 h after exercise. Tendon thickness was measured 2 cm proximal to the superior aspect of the calcaneus. ---------- Results: Both loading conditions resulted in an immediate decrease in normalised Achilles tendon thickness. Eccentric loading induced a significantly greater decrease than concentric loading despite a similar impulse (−0.21 vs −0.05, p<0.05). Post-exercise, eccentrically loaded tendons recovered exponentially, with a recovery time constant of 2.5 h. The same exponential function did not adequately model changes in tendon thickness resulting from concentric loading. Even so, recovery pathways subsequent to the 3 h time point were comparable. Regardless of the exercise protocol, full tendon thickness recovery was not observed until 24 h. ---------- Conclusions: Eccentric loading invokes a greater reduction in Achilles tendon thickness immediately after exercise but appears to recover fully in a similar time frame to concentric loading.
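The exponential recovery reported for the eccentrically loaded tendons can be written down directly from the two numbers in the abstract (an initial normalised change of -0.21 and a 2.5 h time constant); the functional form below is my reading of "recovered exponentially", not the authors' fitted equation.

```python
# Sketch of the implied recovery curve: normalised thickness deviation
# relaxing exponentially back to baseline. The initial value (-0.21) and
# time constant (2.5 h) come from the abstract; the form is assumed.
import math

def thickness_change(t_hours, initial=-0.21, tau=2.5):
    """Deviation from baseline thickness t hours after eccentric exercise."""
    return initial * math.exp(-t_hours / tau)

for t in (0, 3, 6, 12, 24):
    print(t, round(thickness_change(t), 3))
```

Consistent with the abstract, this curve is essentially back to baseline well before the 24 h measurement, so the reported 24 h full-recovery time likely reflects measurement resolution rather than the time constant itself.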
Abstract:
The call to innovate is ubiquitous across the Australian educational policy context. The claims of innovative practices and environments that occur frequently in university mission statements, strategic plans and marketing literature suggest that this exhortation to innovate appears to have been taken up enthusiastically by the university sector. Throughout the history of universities, a range of reported deficiencies of higher education have worked to produce a notion of crisis. At present, it would seem that innovation is positioned as the solution to the notion of crisis. This thesis is an inquiry into how the insistence on innovation works to both enable and constrain teaching and learning practices in Australian universities. Alongside the interplay between innovation and crisis is the link between resistance and innovation, a link which remains largely unproblematized in the scholarly literature. This thesis works to locate and unsettle understandings of a relationship between innovation and Australian higher education. The aim of this inquiry is to generate new understandings of what counts as innovation within this context and how innovation is enacted. The thesis draws on a number of postmodernist theorists, whose works have informed firstly the research method, and then the analysis and findings. Firstly, there is an assumption that power is capillary and works through discourse to enact power relations which shape certain truths (Foucault, 1990). Secondly, this research scrutinised language practices which frame the capacity for individuals to act, alongside the language practices which encourage an individual to adopt certain attitudes and actions as one’s own (Foucault, 1988). Thirdly, innovation talk is read in this thesis as an example of needs talk, that is, as a medium through which what is considered domestic, political or economic is made and contested (Fraser, 1989). 
Fourthly, relationships between and within discourses were identified and analysed beyond cause and effect descriptions, and more productively considered to be in a constant state of becoming (Deleuze, 1987). Finally, the use of ironic research methods assisted in producing alternate configurations of innovation talk which are useful and new (Rorty, 1989). The theoretical assumptions which underpin this thesis inform a document analysis methodology, used to examine how certain texts work to shape the ways in which innovation is constructed. The data consisted of three Federal higher education funding policies selected on the rationale that these documents, as opposed to state or locally based policy and legislation, represent the only shared policy context for all Australian universities. The analysis first provided a modernist reading of the three documents, and this was followed by postmodernist readings of these same policy documents. The modernist reading worked to locate and describe the current truths about innovation. The historical context in which the policy was produced as well as the textual features of the document itself were important to this reading. In the first modernist reading, the binaries involved in producing proper and improper notions of innovation were described and analysed. In the process of the modernist analysis and the subsequent location of binary organisation, a number of conceptual collisions were identified, and these sites of struggle were revisited, through the application of a postmodernist reading. By applying the theories of Rorty (1989) and Fraser (1989) it became possible to not treat these sites as contradictory and requiring resolution, but rather as spaces in which binary tensions are necessary and productive. This postmodernist reading constructed new spaces for refusing and resisting dominant discourses of innovation which value only certain kinds of teaching and learning practices. 
By exploring a number of ironic language practices found within the policies, this thesis proposes an alternative way of thinking about what counts as innovation and how it happens. The new readings of innovation made possible through the work of this thesis were in response to a suite of enduring, inter-related questions: What counts as innovation? Who or what supports innovation? How does innovation occur? Who are the innovators? The truths presented in response to these questions were treated as the language practices which constitute a dominant discourse of innovation talk. The collisions that occur within these truths were the contested sites which were of most interest for the analysis. The thesis concludes by presenting a theoretical blueprint which works to shift the boundaries of what counts as innovation and how it happens in a manner which is productive, inclusive and powerful. This blueprint forms the foundation upon which a number of recommendations are made for both my own professional practice and broader contexts. In keeping with the conceptual tone of this study, these recommendations are a suite of new questions which focus attention on the boundaries of innovation talk as an attempt to re-configure what is valued about teaching and learning at university.
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of the turbulent diffusivity for the conserved scalar mean to that for the r.m.s. was found to be approximately 1.
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
Abstract:
Hirst and Patching's second edition of Journalism Ethics: Arguments and Cases provides a fully updated exploration of the theory and practice of ethics in journalism. The authors situate modern ethical dilemmas in their social and historical context, which encourages students to think critically about ethics across the study and practice of journalism. Using a unique political economy approach, the text provides students with a theoretical and philosophical understanding of the major ethical dilemmas in journalism today. It commences with a newly recast discussion of theoretical frameworks, which explains the complex concepts of ethics in clear and comprehensive terms. It then examines the 'fault lines' in modern journalism, such as the constant conflict between the public service role of the media, and a journalist's commercial imperative to make a profit. All chapters have been updated with new examples, and many new cases demonstrating the book's theoretical underpinnings have been drawn from 'yesterday's headlines'. These familiar cases encourage student engagement and classroom discussion, and archived cases will still be available to students on an Online Resource Centre. Expanded coverage of the 'War on Terror', issues of deception within journalism, and infotainment and digital technology is included.
Abstract:
On-axis monochromatic higher-order aberrations increase with age. Few studies have been made of peripheral refraction along the horizontal meridian of older eyes, and none of their off-axis higher-order aberrations. We measured wave aberrations over the central 42°x32° visual field for a 5mm pupil in 10 young and 7 older emmetropes. Patterns of peripheral refraction were similar in the two groups. Coma increased linearly with field angle at a significantly higher rate in older than in young emmetropes (−0.018±0.007 versus −0.006±0.002 µm/deg). Spherical aberration was almost constant over the measured field in both age groups and mean values across the field were significantly higher in older than in young emmetropes (+0.08±0.05 versus +0.02±0.04 µm). Total root-mean-square and higher-order aberrations increased more rapidly with field angle in the older emmetropes. However, the limits to monochromatic peripheral retinal image quality are largely determined by the second-order aberrations, which do not change markedly with age, and under normal conditions the relative importance of the increased higher-order aberrations in older eyes is lessened by the reduction in pupil diameter with age. Therefore it is unlikely that peripheral visual performance deficits observed in normal older individuals are primarily attributable to the increased impact of higher-order aberration.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling simultaneously time-varying volatility (conditional variance) and skewness. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions, which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate semi-parametrically the conditional mean and conditional variance. The method is not efficient enough to capture all the dependence structure in the three indices —ASX 200, S&P 500 and FT 30—, however it provides an idea of the data generating process (DGP) underlying the series and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
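As a concrete companion to the percentile view of VaR used for comparison above, historical-simulation VaR is just an empirical quantile of past returns; the heavy-tailed return series below is simulated, not one of the three indices studied in the thesis.

```python
# Hedged sketch of historical-simulation VaR: the loss threshold exceeded
# on only (1 - confidence) of past days. The return series is synthetic
# (Student-t, heavy tails), not data from the thesis.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=10_000) * 0.01   # heavy-tailed daily returns

def historical_var(returns, confidence=0.99):
    """VaR as a negative empirical quantile of the return distribution."""
    return -np.quantile(returns, 1.0 - confidence)

print(round(historical_var(returns), 4))
```

The GLD-based approach in the thesis replaces this raw empirical quantile with a percentile of a fitted four-parameter distribution, which can then be allowed to vary over a moving window.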
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to the stock and bond futures data, we discover that the correlation pattern between them has dramatically changed around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
Abstract:
Dynamic and controlled rate thermal analysis (CRTA) has been used to characterise alunites of formula [MAl3(SO4)2(OH)6], where M+ is one of the cations K+, Na+ or NH4+. Thermal decomposition occurs in a series of steps: (a) dehydration, (b) well-defined dehydroxylation and (c) desulphation. CRTA offers better resolution and a more detailed interpretation of the water formation processes by approaching equilibrium conditions of decomposition through the elimination of the slow transfer of heat to the sample as a controlling parameter on the process of decomposition. Constant-rate decomposition processes of water formation reveal the subtle nature of dehydration and dehydroxylation.