304 results for simple perturbation theory


Relevance: 20.00%

Abstract:

The need to develop effective business curricula that meet the needs of the marketplace has created an increase in the adoption of core competency lists identifying appropriate graduate skills. Many organisations and tertiary institutions have individual graduate capability lists including skills deemed essential for success. Skills recognised as ‘critical thinking’ are popular inclusions on core competency and graduate capability lists. While there is literature outlining ‘critical thinking’ frameworks, methods of teaching it, and calls for its integration into business curricula, few studies actually identify quantifiable improvements achieved in this area. This project sought to address the development of ‘critical thinking’ skills in a management degree program by embedding a process for critical thinking within a theory unit undertaken by students early in the program. Focus groups and a student survey were used to identify issues of both content and implementation and to develop a student perspective on their needs in thinking critically. A process utilising a framework of critical thinking was integrated through a workbook of weekly case studies for group analysis, discussions, and experiential exercises. The experience included formative and summative assessment. Initial results indicate a greater valuation by students of their experience in the organisation theory unit, better marks for mid-semester essay assignments, and higher evaluations on the university-administered survey of student satisfaction.

Relevance: 20.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
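
To illustrate the Poisson-trials argument, the sketch below simulates crash counts as many independent, low-probability events per site and compares the observed share of zeros with what a naive homogeneous Poisson fit implies. All parameter values are illustrative assumptions, not the paper's simulation design.

```python
# A minimal simulation sketch (not the paper's exact experiment): crash counts
# arise as Poisson trials -- many independent vehicle passages per site, each
# with a small, unequal crash probability. No site is ever "perfectly safe",
# yet a naive homogeneous Poisson fit sees "excess" zeros.
import numpy as np

rng = np.random.default_rng(42)

n_sites = 1000            # road segments / intersections
period_days = 365         # observation window

# Heterogeneous exposure: daily vehicle passages per site (assumed lognormal).
exposure = rng.lognormal(mean=3.0, sigma=1.0, size=n_sites)

# Small, unequal per-passage crash probabilities (assumed range).
p_crash = rng.uniform(1e-5, 1e-4, size=n_sites)

# Poisson trials: for large n and small p, Binomial(n, p) ~ Poisson(n * p).
trials = (exposure * period_days).astype(int)
crashes = rng.binomial(trials, p_crash)

obs_zeros = np.mean(crashes == 0)
naive_zeros = np.exp(-crashes.mean())  # zeros implied by one homogeneous Poisson

print(f"observed share of zeros:           {obs_zeros:.2f}")
print(f"homogeneous-Poisson implied share: {naive_zeros:.2f}")
# The gap is the apparent 'excess'; shortening the observation window
# (e.g. period_days = 30) widens it further, with no dual-state process.
```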

Relevance: 20.00%

Abstract:

Statisticians, along with other scientists, have made significant computational advances that enable the estimation of formerly intractable statistical models. The Bayesian inference framework, combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler, enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices, or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of the potentially limiting assumptions of MNL, such as the independence from irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thus relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for many transportation applications. The paper concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
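
As a concrete illustration of Bayesian MNL estimation, the sketch below fits an MNL model to synthetic route-choice-style data with a random-walk Metropolis sampler. The paper discusses Gibbs-style MCMC; this simpler sampler, the normal priors, and the synthetic data are assumptions made for brevity.

```python
# A minimal sketch of Bayesian MNL estimation via random-walk Metropolis.
# Synthetic data stand in for route-choice observations; all names, priors,
# and tuning constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N choices among J alternatives with K attributes each.
N, J, K = 500, 3, 2
X = rng.normal(size=(N, J, K))            # alternative attributes
beta_true = np.array([1.0, -0.5])
util = X @ beta_true
prob = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=p) for p in prob])

def log_posterior(beta):
    """MNL log-likelihood plus an independent N(0, 10^2) prior on each beta."""
    u = X @ beta
    ll = (u[np.arange(N), y] - np.log(np.exp(u).sum(axis=1))).sum()
    log_prior = -0.5 * (beta ** 2 / 100.0).sum()
    return ll + log_prior

# Random-walk Metropolis over the coefficient vector.
draws, beta, lp = [], np.zeros(K), log_posterior(np.zeros(K))
for it in range(5000):
    prop = beta + 0.1 * rng.normal(size=K)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        beta, lp = prop, lp_prop
    draws.append(beta)

post = np.array(draws[1000:])                  # drop burn-in
print("posterior means:", post.mean(axis=0), " true:", beta_true)
```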

Relevance: 20.00%

Abstract:

This paper proposes a simple variation of the Allingham and Sandmo (1972) construct and integrates it into a dynamic general equilibrium framework with heterogeneous agents. We study an overlapping generations framework in which agents must initially decide whether or not to evade taxes. In the event they decide to evade, they then have to decide the extent of income or wealth they wish to under-report. We find that, in comparison with the basic approach, the ‘evade or not’ choice drastically reduces the extent of evasion in the economy. This outcome is the result of an anomaly intrinsic to the basic Allingham and Sandmo version of the model, which makes the evade-or-not extension a more suitable approach to modelling the issue. We also find that the basic model and the model with an ‘evade-or-not’ choice have strikingly different political economy implications, which suggests fruitful avenues of empirical research.
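
To make the two decision structures concrete, the sketch below works through the household-level Allingham-Sandmo evasion choice and the prior ‘evade or not’ decision. The full overlapping-generations general-equilibrium setting is not reproduced; log utility, the fixed cost of choosing to evade, and all parameter values are illustrative assumptions.

```python
# A minimal numerical sketch of the Allingham and Sandmo (1972) evasion choice,
# extended with a prior 'evade or not' decision. All parameters illustrative.
import numpy as np

y, t = 100.0, 0.30        # income, tax rate
p, theta = 0.05, 1.5      # audit probability, penalty multiple on evaded tax
kappa = 0.5               # assumed fixed (stigma/setup) cost of deciding to evade

def expected_utility(declared):
    evaded_tax = t * (y - declared)
    c_no_audit = y - t * declared                 # consumption if not audited
    c_audit = c_no_audit - theta * evaded_tax     # consumption if audited
    return (1 - p) * np.log(c_no_audit) + p * np.log(c_audit)

# Basic model: choose how much income to declare (grid search).
grid = np.linspace(1.0, y, 2000)
eu = expected_utility(grid)
best = grid[np.argmax(eu)]

# 'Evade or not' extension: evading first incurs the fixed cost kappa, so the
# agent compares the best evasion payoff against honest reporting.
decision = "evade" if eu.max() - kappa > expected_utility(y) else "comply"

print(f"basic model: declare {best:.1f} of {y:.0f}")   # near-total evasion
print(f"evade-or-not model: {decision}")
# With p*theta well below 1, the basic model pushes evasion to its corner
# (the anomaly the paper points to); even a modest kappa flips the discrete
# choice to full compliance, shrinking aggregate evasion.
```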

Relevance: 20.00%

Abstract:

This paper argues, somewhat along a Simmelian line, that political theory may produce practical and universal theories like those developed in theoretical physics. The aim of this paper is to show that the theory of ‘basic democracy’ may be true by comparing it to Einstein’s Special Relativity, specifically with respect to the parameters of symmetry, unification, simplicity, and utility. These parameters are what validate a theory in physics, as meeting them not only fits with current knowledge but also produces paths towards testing (application). As the theory of ‘basic democracy’ may meet these same parameters, it could settle the debate concerning the definition of democracy. This is argued firstly by discussing what the theory of ‘basic democracy’ is and why it differs from previous work; secondly by explaining the parameters chosen (that is, why these and not others confirm or scuttle theories); and thirdly by comparing how Special Relativity and the theory of ‘basic democracy’ may match the parameters.

Relevance: 20.00%

Abstract:

The dominant economic paradigm currently guiding industry policy making in Australia and much of the rest of the world is the neoclassical approach. Although neoclassical theories acknowledge that growth is driven by innovation, such innovation is exogenous to their standard models and hence often not explored. Instead, the focus is on the allocation of scarce resources, with innovation perceived as an external shock to the system. Indeed, analysis of innovation is largely undertaken by other disciplines, such as evolutionary economics and institutional economics. As more has become known about innovation processes, linear models based on research and development or market demand have been replaced by more complex interactive models which emphasise the existence of feedback loops between the actors and activities involved in the commercialisation of ideas (Manley 2003). Currently dominant among these approaches is the national or sectoral innovation system model (Breschi and Malerba 2000; Nelson 1993), which is based on the notion of increasingly open innovation systems (Chesbrough, Vanhaverbeke, and West 2008). This chapter reports on the ‘BRITE Survey’, funded by the Cooperative Research Centre for Construction Innovation, which investigated the open sectoral innovation system operating in the Australian construction industry. The BRITE Survey was undertaken in 2004 and is the largest construction innovation survey ever conducted in Australia. The results reported here give an indication of how construction innovation processes operate, as an example that should be of interest to international audiences in construction economics. The questionnaire was based on a broad range of indicators recommended in the OECD’s Community Innovation Survey guidelines (OECD/Eurostat 2005). Although the ABS has recently begun to undertake regular innovation surveys that include the construction industry (ABS 2006), these employ a very narrow definition of the industry and collect only very basic data compared with that provided by the BRITE Survey, which is presented in this chapter. The term ‘innovation’ is defined here as a new or significantly improved technology or organisational practice, based broadly on OECD definitions (OECD/Eurostat 2005). Innovation may be technological or organisational in nature, and it may be new to the world or just new to the industry or the business concerned. The definition thus includes the simple adoption of existing technological and organisational advancements. The survey collected information about respondents’ perceptions of innovation determinants in the industry, comprising various aspects of business strategy and business environment. It builds on a pilot innovation survey undertaken by PricewaterhouseCoopers (PWC) for the Australian Construction Industry Forum, on behalf of the Australian Commonwealth Department of Industry, Tourism and Resources, in 2001 (PWC 2002). The survey responds to an identified need within the Australian construction industry for accurate and timely innovation data upon which to base effective management strategies and public policies (Focus Group 2004).

Relevance: 20.00%

Abstract:

We present a novel modified theory based upon Rayleigh scattering of ultrasound from composite nanoparticles with a liquid core and solid shell. We derive closed-form solutions for the scattering cross-section and apply this model to an ultrasound contrast agent consisting of a liquid-filled core (perfluorooctyl bromide, PFOB) encapsulated by a polymer shell (polycaprolactone, PCL). Sensitivity analysis was performed to predict the dependence of the scattering cross-section upon material and dimensional parameters. A rapid increase in the scattering cross-section was achieved by increasing the compressibility of the core, validating the incorporation of the highly compressible PFOB; the compressibility of the shell had little impact on the overall scattering cross-section, although a more compressible shell is desirable. Changes in the density of the shell and the core result in predicted local minima in the scattering cross-section, approximately corresponding to the PFOB-PCL contrast agent considered; hence, incorporation of a lower-density shell could significantly improve the scattering cross-section. A 50% reduction in shell thickness relative to external radius increased the predicted scattering cross-section by 50%. Although it has often been considered that the shell has a negative effect on the echogenicity due to its low compressibility, we have shown that it can potentially play an important role in the echogenicity of the contrast agent. The challenge for the future is to identify suitable shell and core materials that meet the predicted characteristics in order to achieve optimal echogenicity.
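
For orientation, the sketch below estimates a Rayleigh-regime scattering cross-section for a core-shell particle. The paper's closed-form core-shell solution is not reproduced here; as a stand-in assumption, core and shell properties are volume-averaged and fed into the classic Rayleigh cross-section for a small fluid sphere (monopole plus dipole terms). Material values are rough illustrative figures, not the paper's.

```python
# A minimal Rayleigh-regime sketch for a core-shell nanoparticle, using
# volume-averaged effective properties (an assumption, not the paper's model).
import numpy as np

f = 5e6                        # ultrasound frequency [Hz]
c0, rho0 = 1500.0, 1000.0      # host (water): sound speed [m/s], density [kg/m^3]
kappa0 = 1.0 / (rho0 * c0**2)  # host compressibility [1/Pa]

a = 200e-9                     # outer radius [m]
shell_frac = 0.25              # shell thickness / outer radius
r_core = a * (1 - shell_frac)

# Assumed material properties: PFOB-like core, PCL-like shell (illustrative).
kappa_core, rho_core = 1.4e-9, 1920.0
kappa_shell, rho_shell = 2.0e-10, 1145.0

# Volume-weighted effective properties of the composite particle (assumption).
v_core = (r_core / a) ** 3
kappa_p = v_core * kappa_core + (1 - v_core) * kappa_shell
rho_p = v_core * rho_core + (1 - v_core) * rho_shell

k = 2 * np.pi * f / c0         # wavenumber in the host fluid

# Classic Rayleigh scattering cross-section for a small fluid sphere:
# monopole (compressibility contrast) + dipole (density contrast) terms.
monopole = (kappa_p / kappa0 - 1) ** 2
dipole = (3 * (rho_p - rho0) / (2 * rho_p + rho0)) ** 2 / 3
sigma_s = (4 * np.pi / 9) * k**4 * a**6 * (monopole + dipole)

print(f"scattering cross-section: {sigma_s:.3e} m^2")
# Increasing kappa_core (a more compressible core) grows the monopole term
# rapidly, echoing the finding that core compressibility dominates.
```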

Relevance: 20.00%

Abstract:

Balancing the provision of a high quality of service against running within a tight budget is one of the biggest challenges for most metro railway operators around the world. Conventionally, one possible approach for the operator to adjust the timetable is to alter the stop time at stations, if other system constraints, such as traction equipment characteristics, are not taken into account. Yet this is not an effective, flexible, or economical method, because the run-time of a train simply cannot be extended without limit, and a balance between run-time and energy consumption has to be maintained. Modification or installation of a new signalling system not only increases the capital cost but also affects normal train service. Therefore, in order to procure a more effective, flexible, and economical means of improving the quality of service, optimisation of train performance by coasting point identification has become more attractive and popular. However, identifying the necessary starting points for coasting under the constraints of current service conditions is no simple task, because train movement is governed by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting points and investigates the possible improvements in computation time and fitness of genes.
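
To show the shape of such a GA search, the sketch below evolves a small population of coasting-point chromosomes. The train dynamics are collapsed into a toy fitness function (an assumption), since the paper's full non-linear train-movement simulation is not reproduced here; all constants are illustrative.

```python
# A minimal genetic-algorithm sketch for coasting-point search. The fitness
# function is a toy surrogate, not the paper's train-movement model.
import random

N_POINTS = 2            # coasting points per inter-station run
TRACK_LEN = 2000.0      # metres
TARGET_RUNTIME = 120.0  # seconds

def fitness(genes):
    """Toy trade-off (assumption): coasting earlier saves traction energy
    but lengthens run-time; penalise deviation from the timetable."""
    energy = sum(40.0 * (c / TRACK_LEN) ** 2 for c in genes)
    delay = sum((TRACK_LEN - c) * 0.02 for c in genes)
    return -(energy + 0.5 * delay)          # maximise (less cost is better)

def evolve(pop_size=40, generations=60, mut_rate=0.2):
    pop = [[random.uniform(0, TRACK_LEN) for _ in range(N_POINTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_POINTS) if N_POINTS > 1 else 0
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < mut_rate:   # mutation: re-draw one gene
                i = random.randrange(N_POINTS)
                child[i] = random.uniform(0, TRACK_LEN)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best coasting points (m):", [round(x, 1) for x in sorted(best)])
```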

Relevance: 20.00%

Abstract:

The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles produced by the chamber photo-oxidation of α-pinene, in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured using a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and on the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. The differences in HTDMA data from validated and extensively used instruments mean that the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty. In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality-assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of solute concentration.
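
For reference, the sketch below works through the single-hygroscopicity-parameter (κ-Köhler) calculation underlying the simpler of the two models: given a dry diameter and a κ value of the kind derived from sub-saturated HTDMA growth, it locates the critical supersaturation. The constant pure-water surface tension assumed here is exactly the assumption the study shows to be contentious; all parameter values are illustrative.

```python
# A minimal kappa-Koehler sketch: critical supersaturation from a dry
# diameter and a single hygroscopicity parameter, assuming pure-water
# surface tension throughout.
import numpy as np

T = 298.15                  # K
sigma_w = 0.072             # N/m, pure-water surface tension (assumption)
M_w, rho_w = 0.018, 997.0   # kg/mol, kg/m^3
R = 8.314                   # J/(mol K)

A = 4 * sigma_w * M_w / (R * T * rho_w)   # Kelvin coefficient [m]

def saturation_ratio(D, D_dry, kappa):
    """kappa-Koehler equation: saturation ratio over a droplet of diameter D."""
    water_term = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1 - kappa))
    return water_term * np.exp(A / D)

def critical_supersaturation(D_dry, kappa):
    D = np.geomspace(D_dry * 1.01, 100 * D_dry, 100000)
    return (saturation_ratio(D, D_dry, kappa).max() - 1) * 100  # percent

for kappa in (0.1, 0.3, 0.6):   # organic-dominated to inorganic-like values
    sc = critical_supersaturation(100e-9, kappa)
    print(f"kappa={kappa:.1f}: critical supersaturation = {sc:.2f}%")
```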

Relevance: 20.00%

Abstract:

The Simple Laws of Proportion was shortlisted in the 2010 John Marsden Writing Prize for Young Australian Writers. It was subsequently published online by Express Media in December, 2010.

Relevance: 20.00%

Abstract:

Plenary Session: "New Voices in Children's Literature"

Relevance: 20.00%

Abstract:

The problem of bubble contraction in a Hele-Shaw cell is studied for the case in which the surrounding fluid is of power-law type. A small perturbation of the radially symmetric problem is first considered, focussing on the behaviour just before the bubble vanishes. It is found that for shear-thinning fluids the radially symmetric solution is stable, while for shear-thickening fluids the aspect ratio of the bubble boundary increases. The borderline (Newtonian) case considered previously is neutrally stable, the bubble boundary becoming elliptic in shape with the eccentricity of the ellipse depending on the initial data. Further light is shed on the bubble contraction problem by considering a long thin Hele-Shaw cell: for early times the leading-order behaviour is one-dimensional in this limit; however, as the bubble contracts, its evolution is ultimately determined by the solution of a Wiener-Hopf problem. The transition between the long-thin limit and the extinction limit in which the bubble vanishes is described by what is in effect a similarity solution of the second kind. This same solution describes the generic (slit-like) extinction behaviour for shear-thickening fluids, the interface profiles that generalise the ellipses characterising the Newtonian case being constructed by the Wiener-Hopf calculation.
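
For context, the gap-averaged flow law governing this problem is the power-law generalisation of Darcy's law; the statement below is the standard result, written in our own notation (with C(b, μ, n) a constant depending on the cell gap and consistency index) rather than quoted from the paper:

```latex
% Gap-averaged velocity for a power-law fluid in a Hele-Shaw cell:
\[
  \mathbf{u} \;=\; -\,C(b,\mu,n)\,\lvert \nabla p \rvert^{\frac{1}{n}-1}\,\nabla p,
  \qquad \nabla \cdot \mathbf{u} = 0,
\]
% so the pressure satisfies a p-Laplacian equation,
\[
  \nabla \cdot \left( \lvert \nabla p \rvert^{\frac{1}{n}-1} \nabla p \right) = 0,
\]
% with $n<1$ shear-thinning, $n>1$ shear-thickening, and $n=1$ recovering
% the Newtonian (Laplace) case analysed previously.
```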

Relevance: 20.00%

Abstract:

This study reports on the impact of a "drink driving education program" taught to grade ten high school students. The program, which involves twelve lessons, uses strategies based on the Ajzen and Madden theory of planned behavior. Students were trained to use alternatives to drink driving and to riding with a drinking driver. One thousand seven hundred and seventy-four students from randomly assigned intervention and control schools were followed up three years later. There had been a major reduction in drink driving behaviors in both intervention and control students. In addition to this cohort change, there was a trend toward reduced drink driving in the intervention group and a significant reduction in passenger behavior in this group. Readiness to use alternatives suggested that the major impact of the program was on students who were experimenting with the behavior at the time the program was taught. The program seems to have optimized concurrent social attitude and behavior change.

Relevance: 20.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main limitation relating to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory.

There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data.

The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera.

A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout.

A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium.

It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage.

A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
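
As an indication of how such a beam-propagation model works, the sketch below propagates a readout beam through an assumed stripe-pattern index perturbation. The thesis uses a finite-difference BPM; a split-step Fourier variant is substituted here for brevity, the stored pattern is a fixed assumed stripe grating rather than a dynamically induced one, and all parameters are illustrative.

```python
# A compact paraxial beam-propagation sketch (split-step Fourier variant,
# substituted for the thesis's finite-difference scheme). A weak stripe-like
# index perturbation stands in for the stored phase-mask pattern.
import numpy as np

wl = 633e-9                      # wavelength [m]
n0 = 2.2                         # background index (LiNbO3-like)
k0 = 2 * np.pi / wl

nx, nz = 1024, 400
width, length = 400e-6, 2e-3     # transverse window, propagation distance
x = np.linspace(-width / 2, width / 2, nx)
dz = length / nz
kx = 2 * np.pi * np.fft.fftfreq(nx, d=width / nx)

# Assumed stored pattern: weak index stripes of 30 um period.
dn = 1e-4 * (np.sin(2 * np.pi * x / 30e-6) > 0)

# Input: Gaussian readout beam.
E = np.exp(-(x / 60e-6) ** 2).astype(complex)

# Split-step loop: half-step diffraction, index-perturbation phase screen,
# half-step diffraction.
diff_half = np.exp(-1j * kx**2 * (dz / 2) / (2 * k0 * n0))
for _ in range(nz):
    E = np.fft.ifft(diff_half * np.fft.fft(E))
    E *= np.exp(1j * k0 * dn * dz)          # phase imprinted by the pattern
    E = np.fft.ifft(diff_half * np.fft.fft(E))

intensity = np.abs(E) ** 2
print(f"output peak/mean intensity: {intensity.max() / intensity.mean():.2f}")
# The intensity modulation at the output reflects scattering of the beam from
# the stripe structure, the mechanism the model links to pattern degradation
# during over-long writing.
```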

Relevance: 20.00%

Abstract:

A consistent finding in the literature is that males report greater usage of drugs and subsequently greater amounts of drug driving. Research also suggests that vicarious influences may be more pertinent to males than to females. Utilising Stafford and Warr’s (1993) reconceptualization of deterrence theory, this study sought to determine whether the relative deterrent impact of zero-tolerance drug driving laws differs between genders. A sample of motorists (N = 899) completed a self-report questionnaire assessing participants’ frequency of drug driving and personal and vicarious experiences with punishment and punishment avoidance. Results show that males were significantly more likely to report future intentions of drug driving. Additionally, vicarious experiences of punishment avoidance were a more influential predictor of future drug driving for males, while personal experiences of punishment avoidance were more influential for females. These findings can inform gender-sensitive media campaigns and interventions for convicted drug drivers.