360 results for Pecking order theory


Relevance: 20.00%

Abstract:

In 1990 the European Community was taken by surprise by the urgency of demands from the newly-elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets the year before, which had demanded free elections and a liberal economic system associated with “Europe”. The mass movement had in fact been accompanied by much activity within institutional politics, in Western Europe, the former “satellite” states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centre-piece. This paper draws on the writer’s doctoral dissertation on mass media in the collapse of the Eastern bloc, focused on the Berlin Wall, documenting both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts interventions of the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the “Wall”, and separate negotiations with the French President, François Mitterrand, on reunification and EU monetary union after 1992. Through such processes the “European idea” would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that, as a result of the shock of 1989, a “social” Europe can be seen emerging as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author’s major research, in four parts: (1) Field observation from the strategic vantage point of a news correspondent, including a treatment of evidence at the time of the wishes and intentions of the mass public (such as the unexpected drive to join the European Community) and those of governments (e.g. thoughts of a “Tiananmen Square solution” in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev). (2) A review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process. (3) As a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later. (4) A further review, and test, provided by the journalists responsible for the coverage of the time, as reflection on practice, obtained from semi-structured interviews.

Relevance: 20.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The areal density (AD) of magnetic hard disk drives has increased by a factor of the order of 100 million over the past 50 years, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main one being the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin, 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
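The oscillation-counting temperature technique described above lends itself to a simple numerical treatment. Below is a minimal sketch, assuming a calibration constant (the temperature change per full intensity oscillation, fixed in practice by the crystal length, wavelength, and the temperature derivative of the birefringence) and peak detection via SciPy; the constant's value and all names are illustrative, not taken from the thesis.

```python
"""Minimal sketch: deduce a temperature change by counting the
birefringence-induced intensity oscillations described above.
DELTA_T_PER_OSCILLATION is a hypothetical calibration constant."""
import numpy as np
from scipy.signal import find_peaks

# Hypothetical calibration: temperature change per full intensity oscillation.
DELTA_T_PER_OSCILLATION = 0.9  # degrees C per oscillation, illustrative only

def temperature_change(intensity: np.ndarray) -> float:
    """Estimate temperature change from a transmitted-intensity trace.

    Each detected peak marks one full cycle of the temperature-dependent
    phase retardance, i.e. one intensity oscillation.
    """
    peaks, _ = find_peaks(intensity, prominence=0.1 * np.ptp(intensity))
    return len(peaks) * DELTA_T_PER_OSCILLATION

# Synthetic trace with five oscillations, for illustration.
t = np.linspace(0.0, 1.0, 2000)
trace = 0.5 * (1.0 - np.cos(2 * np.pi * 5 * t))
print(temperature_change(trace))  # 5 peaks -> 4.5 degrees C
```

Applying the same count to separate pixel regions of an expanded beam, as the thesis describes, would map the temperature change across the crystal and hence expose temperature gradients.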

Relevance: 20.00%

Abstract:

A consistent finding in the literature is that males report greater usage of drugs and, correspondingly, greater levels of drug driving. Research also suggests that vicarious influences may be more pertinent to males than to females. Utilising Stafford and Warr’s (1993) reconceptualization of deterrence theory, this study sought to determine whether the relative deterrent impact of zero-tolerance drug driving laws differs between genders. A sample of motorists (N = 899) completed a self-report questionnaire assessing their frequency of drug driving and their personal and vicarious experiences with punishment and punishment avoidance. Results show that males were significantly more likely to report future intentions of drug driving. Additionally, vicarious experience of punishment avoidance was a more influential predictor of future drug driving for males, while personal experience of punishment avoidance was a more influential predictor for females. These findings can inform gender-sensitive media campaigns and interventions for convicted drug drivers.

Relevance: 20.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The shear forces generated across the width of the tube during flow cause the cells to align with their minimal cross-sectional area facing the direction of flow, minimising the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow; in that case the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that of stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes, and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood flowing through rigid tubes. The impedance of flowing blood was calculated theoretically using analytical methods for flow through straight tubes, and the numerical Lattice Boltzmann method for flow through complex geometries such as an aortic valve stenosis. The results of the analytical model were compared with the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected in a pilot clinical trial evaluating the suitability of bioimpedance techniques for assessing the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same flow velocity. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius; this had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal, suggesting that the impedance captures many of the fluctuations of the velocity signal. Application of a theoretical steady-flow model to pulsatile flow, presented here, verified that the steady-flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady-flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
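The time decay constant τ reported above is the kind of parameter that falls out of a single-exponential fit to deceleration-phase impedance data. The sketch below assumes that model form, Z(t) = Z_inf + A·exp(-t/τ), and uses synthetic data; it is illustrative, not the thesis's actual fitting procedure.

```python
"""Minimal sketch: extract a time decay constant tau by fitting a
single-exponential relaxation to deceleration-phase impedance data.
The model form and the synthetic data are assumptions, not the thesis's code."""
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, z_inf, amplitude, tau):
    """Impedance relaxing exponentially toward a baseline z_inf."""
    return z_inf + amplitude * np.exp(-t / tau)

# Synthetic deceleration-phase trace with tau = 30 s plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 120.0, 200)
z = decay_model(t, 25.0, 2.0, 30.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(decay_model, t, z, p0=(z[-1], z[0] - z[-1], 10.0))
print(f"fitted tau = {popt[2]:.1f} s")  # recovers ~30 s, inside the 10-50 s range
```

The same fit applied to clinical traces would separate the reported groups, since the test-subject constants (180-250 s) sit well above the control range (50-130 s).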

Relevance: 20.00%

Abstract:

Over the last three years, in our Early Algebra Thinking Project, we have been studying Years 3 to 5 students’ ability to generalise in a variety of situations, namely, compensation principles in computation, the balance principle in equivalence and equations, change and inverse-change rules with function machines, and pattern rules with growing patterns. In these studies, we have attempted to involve a variety of models and representations and to build students’ abilities to switch between them (in line with the theories of Dreyfus, 1991, and Duval, 1999). The results have shown the negative effect of closure on generalisation in symbolic representations, the predominance of single-variance generalisation over covariant generalisation in tabular representations, and a reduced ability to readily identify commonalities and relationships in enactive and iconic representations. This chapter uses the results to explore the interrelation between generalisation and verbal and visual comprehension of context. The studies evidence the importance of understanding and communicating aspects of representational forms that allow commonalities to be seen across or between representations. Finally, the chapter explores the implications of the studies for a theory describing a growth in integration of models and representations that leads to generalisation.

Relevance: 20.00%

Abstract:

The structures of two polymorphs of the anhydrous cocrystal adduct of bis(quinolinium-2-carboxylate) DL-malic acid, one triclinic, the other monoclinic and disordered, have been determined at 200 K. Crystals of the triclinic polymorph 1 have space group P-1, with Z = 1 in a cell with dimensions a = 4.4854(4), b = 9.8914(7), c = 12.4670(8) Å, α = 79.671(5), β = 83.094(6), γ = 88.745(6) deg. Crystals of the monoclinic polymorph 2 have space group P21/c, with Z = 2 in a cell with dimensions a = 13.3640(4), b = 4.4237(12), c = 18.4182(5) Å, β = 100.782(3) deg. Both structures comprise centrosymmetric cyclic hydrogen-bonded quinaldic acid zwitterion dimers [graph set R2^2(10)] and 50% disordered malic acid molecules which lie across crystallographic inversion centres. However, the oxygen atoms of the malic acid carboxyl groups in 2 are 50% rotationally disordered, whereas in 1 they are ordered. There are similar primary malic acid carboxyl O-H...quinaldic acid hydrogen-bonding chain interactions in each polymorph, extended into two-dimensional structures; in 1 this involves centrosymmetric cyclic head-to-head malic acid hydroxyl-carboxyl O-H...O interactions [graph set R2^2(10)], whereas in 2 the links are through single hydroxyl-carboxyl hydrogen bonds.

Relevance: 20.00%

Abstract:

The traditional search method for model-order selection in linear regression is a nested, full-parameter-set search over the desired orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection search method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed search method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly with the proposed partial-model selection search method.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low, by considering a model-selection search procedure that takes into account not only the full-model order search but also a partial-model order search within each given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
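To make the distinction concrete, here is a minimal sketch of partial-model order selection for linear regression, using an exhaustive best-subset search within each order and AIC as the example criterion; the criterion choice and the least-squares details are illustrative, not the paper's exact procedure.

```python
"""Minimal sketch: partial-model order selection. For each candidate order k,
score the best sub-model with exactly k regressors, instead of scoring only
the nested full model of order k. AIC is used purely as an example criterion."""
from itertools import combinations
import numpy as np

def aic(rss: float, n: int, k: int) -> float:
    # Gaussian log-likelihood up to constants, plus the AIC penalty 2k.
    return n * np.log(rss / n) + 2 * k

def best_submodel(X: np.ndarray, y: np.ndarray, k: int) -> float:
    """Minimum AIC over all sub-models that use exactly k columns of X."""
    n = y.size
    best = np.inf
    for cols in combinations(range(X.shape[1]), k):
        Xk = X[:, list(cols)]
        beta, res, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        rss = float(res[0]) if res.size else float(np.sum((y - Xk @ beta) ** 2))
        best = min(best, aic(rss, n, k))
    return best

def partial_model_order(X: np.ndarray, y: np.ndarray) -> int:
    """Estimated order: the k whose best sub-model minimises the criterion."""
    scores = [best_submodel(X, y, k) for k in range(1, X.shape[1] + 1)]
    return int(np.argmin(scores)) + 1
```

Full-model order selection corresponds to restricting the inner loop to the single nested subset {0, ..., k-1}; the extra search over subsets is what buys accuracy at low SNR, at the combinatorial cost noted in the abstract.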

Relevance: 20.00%

Abstract:

The present study tested the utility of an extended version of the theory of planned behaviour that included a measure of planning, in the prediction of eating foods low in saturated fats among adults diagnosed with Type 2 diabetes and/or cardiovascular disease. Participants (N = 184) completed questionnaires assessing the standard theory of planned behaviour measures (attitude, subjective norm, and perceived behavioural control) and the additional volitional variable of planning, in relation to eating foods low in saturated fats. Self-reported consumption of foods low in saturated fats was assessed 1 month later. In partial support of the theory of planned behaviour, results indicated that attitude and subjective norm predicted intentions to eat foods low in saturated fats, and that intentions and perceived behavioural control predicted the consumption of such foods. As an additional variable, planning predicted consumption directly and also mediated the intention–behaviour and perceived behavioural control–behaviour relationships, suggesting an important role for planning as a post-intentional construct determining healthy eating choices. Suggestions are offered for interventions designed to improve adherence to healthy eating recommendations for people diagnosed with these chronic conditions, with a specific emphasis on the steps and activities required to promote a healthier lifestyle.

Relevance: 20.00%

Abstract:

This paper discusses the content, origin and development of Tendering Theory as a theory of price determination. It demonstrates how tendering theory determines market prices and how it differs from game and decision theories, and shows that a competitive equilibrium develops in a tendering process with non-cooperative, simultaneous, single sealed bids, individual private valuations, extensive public information, a large number of bidders and a long sequence of tendering occasions. The development of a competitive equilibrium means that the concept of the tender as the sum of a valuation and a strategy, which is at the core of tendering theory, cannot be supported, and that there are serious empirical, theoretical and methodological inconsistencies in the theory.

Relevance: 20.00%

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model because of the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative method for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
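For context, the criteria compared here share a common "fit plus penalty" form; the EDC differs from AIC and MDL mainly in how fast its penalty grows with the sample size n. The sketch below assumes the Gaussian-residual form of these criteria and one admissible EDC penalty, C_n = sqrt(n log n); both are illustrative choices, not the paper's exact definitions.

```python
"""Minimal sketch: choosing the number of Zernike terms with fit-plus-penalty
criteria. The EDC shares the AIC/MDL form but uses a penalty C_n that must
grow faster than log log n and slower than n; sqrt(n log n) is one such choice."""
import numpy as np

def criterion(rss: float, n: int, k: int, penalty: float) -> float:
    """Gaussian-residual order-selection criterion: fit term plus k * penalty."""
    return n * np.log(rss / n) + k * penalty

def select_order(rss_by_order: dict, n: int) -> dict:
    """Selected number of Zernike terms under AIC, MDL, and an EDC variant.

    rss_by_order maps each candidate number of terms k to the residual sum
    of squares of the least-squares Zernike fit with k terms.
    """
    penalties = {
        "AIC": 2.0,
        "MDL": np.log(n),
        "EDC": np.sqrt(n * np.log(n)),  # one admissible C_n; illustrative
    }
    choices = {}
    for name, p in penalties.items():
        choices[name] = min(rss_by_order, key=lambda k: criterion(rss_by_order[k], n, k, p))
    return choices
```

Because the EDC penalty grows with n, it resists the over-fitting the abstract attributes to the classical criteria, while avoiding the resampling cost of the bootstrap-based techniques.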

Relevance: 20.00%

Abstract:

The idealised theory for the quasi-static flow of granular materials which satisfy the Coulomb-Mohr hypothesis is considered. This theory arises in the limit that the angle of internal friction approaches $\pi/2$, and accordingly these materials may be referred to as being `highly frictional'. In this limit, the stress field for both two-dimensional and axially symmetric flows may be formulated in terms of a single nonlinear second order partial differential equation for the stress angle. To obtain an accompanying velocity field, a flow rule must be employed. Assuming the non-dilatant double-shearing flow rule, a further partial differential equation may be derived in each case, this time for the streamfunction. Using Lie symmetry methods, a complete set of group-invariant solutions is derived for both systems, and through this process new exact solutions are constructed. Only a limited number of exact solutions for gravity driven granular flows are known, so these results are potentially important in many practical applications. The problem of mass flow through a two-dimensional wedge hopper is examined as an illustration.
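For orientation, a standard plane-strain statement of the Coulomb-Mohr yield condition (a textbook form, not quoted from the paper) in terms of the major and minor principal stresses $\sigma_1 \geq \sigma_3$ (compression taken positive), cohesion $c$ and angle of internal friction $\phi$ is

$$\sigma_1 - \sigma_3 = (\sigma_1 + \sigma_3)\sin\phi + 2c\cos\phi,$$

so in the highly frictional limit $\phi \to \pi/2$ one has $\sin\phi \to 1$ and $\cos\phi \to 0$, and the condition collapses to $\sigma_3 = 0$: the minor principal stress vanishes, which is consistent with the stress field reducing to a single equation for the stress angle as described above.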

Relevance: 20.00%

Abstract:

Under certain circumstances, an industrial hopper which operates in the "funnel-flow" regime can be converted to the "mass-flow" regime by the addition of a flow-corrective insert. This paper is concerned with calculating granular flow patterns near the outlet of hoppers that incorporate a particular type of insert, the cone-in-cone insert. The flow is considered to be quasi-static and governed by the Coulomb-Mohr yield condition together with the non-dilatant double-shearing theory. In two dimensions the hoppers are wedge-shaped, so the formulation for the wedge-in-wedge hopper also includes the case of asymmetrical hoppers. A perturbation approach, valid for high angles of internal friction, is used for both two-dimensional and axially symmetric flows, with analytic results possible for both the leading-order and correction terms. This perturbation scheme is compared with numerical solutions of the governing equations, and is shown to work very well for angles of internal friction in excess of 45 degrees.

Relevance: 20.00%

Abstract:

In this chapter we present a case study set in Beloi, a fishing village on Ataúro Island, 30 km across the sea from Díli, the capital of Timor-Leste (East Timor). We explore the tension between tourism development, food security and marine conservation in a developing-country context. To better understand the relationships between the social, ecological and economic issues that arise in tourism planning, we use an approach and associated methodology based on storytelling, complexity theory and concept mapping. By testing scenarios with this methodology, we hope to evaluate which trade-offs are acceptable to local people in return for the hoped-for economic boost from increased tourist visitation and associated developments.

Relevance: 20.00%

Abstract:

Background: A State-based industry in Australia is in the process of developing a programme to prevent AOD impairment in the workplace. The objective of this study was to determine whether the Theory of Planned Behaviour can help explain the mechanisms by which behaviour change occurs with regard to AOD impairment in the workplace.

Method: A survey of 1165 employees of a State-based industry in Australia was conducted, and a response rate of 98% was achieved. The survey included questions relevant to the Theory of Planned Behaviour: behaviour; behavioural intentions; attitude; perceptions of social pressure; and perceived behavioural control with regard to workplace AOD impairment.

Findings: Less than 3% of participants reported coming to work impaired by AODs. Fewer than 2% of participants reported that they intended to come to work impaired by AODs. The majority of participants (over 80%) reported unfavourable attitudes toward AOD impairment at work. Logistic regression analyses suggest that, consistent with the Theory of Planned Behaviour, attitudes, perceptions of social pressure, and perceived behavioural control with regard to workplace AOD impairment all predict behavioural intentions (P < .001), and behavioural intentions predict (self-reported) behaviour regarding workplace AOD impairment (P < .001).

Conclusions: The Theory of Planned Behaviour appears to assist with understanding the mechanisms by which behaviour change occurs with regard to AOD impairment in the workplace. An occupational AOD programme which targets those mechanisms for change may improve its impact in preventing workplace AOD impairment.
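As a rough illustration of the analysis reported under Findings, a logistic regression of intentions on the three Theory of Planned Behaviour predictors might look as follows; the file name, column names and use of statsmodels are hypothetical, not taken from the study.

```python
"""Rough sketch of the reported analysis: logistic regression of behavioural
intentions on the Theory of Planned Behaviour constructs. The file and
column names are hypothetical."""
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aod_survey.csv")  # hypothetical survey data, one row per employee

# Binary intention flag regressed on attitude, perceived social pressure,
# and perceived behavioural control scale scores.
model = smf.logit(
    "intention ~ attitude + social_pressure + behavioural_control", data=df
).fit()
print(model.summary())  # coefficient signs and p-values for each TPB construct
```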

Relevance: 20.00%

Abstract:

Information behavior models generally focus on one of several aspects of information behavior: information finding (conceptualized as information seeking, information foraging, or information sense-making), information organizing, and information using. This ongoing study is developing an integrated model of information behavior. The research design involves a two-week daily information journal self-maintained by the participants, combined with two interviews, one before and one after the journal-keeping period. The data from the study will be analyzed using grounded theory to identify when the participants engage in the various behaviors that have already been observed, identified, and defined in previous models, in order to generate useful sequential data and an integrated model.