912 results for the parity principle
Abstract:
Reports results from a contingent valuation (CV) survey of willingness to pay (WTP) for the conservation of the Asian elephant, conducted among a sample of urban residents living in three selected housing schemes in Colombo, the capital of Sri Lanka. Face-to-face surveys were conducted using an interview schedule (IS). A non-linear logit regression model is used to analyse respondents' answers to the payment-principle questions and to identify the factors that influence their responses. We investigate whether urban residents' WTP for the conservation of elephants is sufficient to compensate farmers for the damage caused by elephants. We find that the beneficiaries (the urban residents) could compensate the losers (the farmers in the areas affected by human-elephant conflict, HEC) and still be better off than in the absence of elephants in Sri Lanka. Therefore, there is a strong economic case for the conservation of the wild elephant population in Sri Lanka. However, we have insufficient data to determine the optimal level of this elephant population in the Kaldor-Hicks sense. Nevertheless, the current elephant population of Sri Lanka is Kaldor-Hicks preferable to having none.
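As a concrete illustration of the kind of dichotomous-choice logit analysis described above, here is a minimal sketch on simulated data. The variable names (bid, income) and all magnitudes are hypothetical stand-ins, not the paper's model or data; mean WTP is recovered with the standard linear-in-bid formula.

```python
# Minimal sketch of a dichotomous-choice CV logit on simulated data
# (all variable names and magnitudes are hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
bid = rng.choice([50.0, 100.0, 200.0, 400.0], size=n)   # offered payment
income = rng.normal(30_000, 8_000, size=n)              # household income

# Simulate yes/no answers from an assumed latent WTP process.
latent_wtp = 150 + 0.002 * income + rng.logistic(0, 60, size=n)
yes = (latent_wtp > bid).astype(float)

X = sm.add_constant(np.column_stack([bid, income]))
fit = sm.Logit(yes, X).fit(disp=False)
a, b_bid, b_inc = fit.params

# For a linear-in-bid logit, mean WTP = -(a + b_inc * mean(income)) / b_bid.
mean_wtp = -(a + b_inc * income.mean()) / b_bid
print(f"estimated mean WTP: {mean_wtp:.0f} per household")
```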
Abstract:
Photoluminescent emission is observed from surface-passivated PbS nanocrystals following the two-photon excitation of high-energy excitonic states. The emission appears directly at the excitation energy, with no detectable Stokes shift, for a wide range of excitation energies. Because two-photon absorption populates states of the same parity as the ground state while one-photon (dipole) emission requires opposite parity, the observation of direct emission from states excited by two-photon absorption indicates that the parity of the excited states of surface-passivated PbS nanocrystals is partially mixed.
Abstract:
The Great Barrier Reef Marine Park, an area almost the size of Japan, has a new network of no-take areas that significantly improves the protection of biodiversity. The new marine park zoning implements, in a quantitative manner, many of the theoretical design principles discussed in the literature. For example, the new network of no-take areas has at least 20% protection per bioregion, minimum levels of protection for all known habitats and special or unique features, and minimum sizes for no-take areas of at least 10 or 20 km across at the smallest diameter. Overall, more than 33% of the Great Barrier Reef Marine Park is now in no-take areas (previously 4.5%). The steps taken leading to this outcome were to clarify to the interested public why the existing level of protection was inadequate; detail the conservation objectives of establishing new no-take areas; work with relevant and independent experts to define, and contribute to, the best scientific process to deliver on the objectives; describe the biodiversity (e.g., map bioregions); define operational principles needed to achieve the objectives; invite community input on all of the above; gather and layer the data in round-table discussions; report the degree of achievement of the principles for various options of no-take areas; and determine how to address negative impacts. Some of the key success factors in this case have global relevance and include focusing initial communication on the problem to be addressed; applying the precautionary principle; using independent experts; facilitating input to decision making; conducting extensive and participatory consultation; having an existing marine park that encompassed much of the ecosystem; having legislative power under federal law; developing high-level support; ensuring agency priority and ownership; and being able to address the issue of displaced fishers.
Abstract:
Phytochemicals have provided an abundant and effective source of therapeutics for the treatment of cancer. Here we describe the characterization of a novel plant toxin, persin, with in vivo activity in the mammary gland and a p53-, estrogen receptor-, and Bcl-2-independent mode of action. Persin was previously identified from avocado leaves as the toxic principle responsible for mammary gland-specific necrosis and apoptosis in lactating livestock. Here we used a lactating mouse model to confirm that persin has a similar cytotoxicity for the lactating mammary epithelium. Further in vitro studies in a panel of human breast cancer cell lines show that persin selectively induces a G(2)-M cell cycle arrest and caspase-dependent apoptosis in sensitive cells. The latter is dependent on expression of the BH3-only protein Bim. Bim is a sensor of cytoskeletal integrity, and there is evidence that persin acts on the microtubule network. Given the unique structure of the compound, persin could represent a novel class of microtubule-targeting agent with potential specificity for breast cancers.
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the Correspondence Principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) along the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheological inclusion, we obtain interesting results: the bulk strain produced by a hard inclusion changes with time in three stages (α, β, γ) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent-term precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
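For reference, the constitutive law of the standard linear (Zener) rheological model and the generic statement of the elastic-viscoelastic correspondence principle are given below in textbook form; the notation (E_1, E_2, η) is generic and not necessarily the paper's.

```latex
% Standard linear solid: spring E_1 in parallel with a Maxwell element
% (spring E_2 in series with dashpot \eta):
\sigma + \frac{\eta}{E_2}\,\dot{\sigma}
  \;=\; E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon}

% Correspondence principle: solve the associated elastic problem in the
% Laplace domain with the modulus replaced by its Carson transform, then
% invert to obtain the viscoelastic field:
\hat{u}^{\,ve}(r,s) = \hat{u}^{\,el}\!\bigl(r,s;\; E \to s\,\bar{E}(s)\bigr),
\qquad
u^{\,ve}(r,t) = \mathcal{L}^{-1}\!\bigl[\hat{u}^{\,ve}(r,s)\bigr]
```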
Abstract:
The basis of the present authors' edge-to-edge matching model for understanding the crystallography of partially coherent precipitates is the minimization of the energy of the interface between the two phases. For relatively simple crystal structures, this energy minimization occurs when close-packed, or relatively close-packed, rows of atoms match across the interface. Hence, the fundamental principle behind edge-to-edge matching is that the directions in each phase that correspond to the edges of the planes that meet in the interface should be close-packed, or relatively close-packed, rows of atoms. A few of the recently reported examples of what is termed edge-to-edge matching appear to ignore this fundamental principle. By comparing theoretical predictions with available experimental data, this article explores the validity of this critical atom-row coincidence condition, both in situations where the two phases have simple crystal structures and in those where the precipitate has a more complex structure.
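As a toy illustration of the row-matching criterion, the sketch below compares interatomic spacings along candidate matching rows in two phases; the spacings and the few-per-cent tolerance are illustrative assumptions, not values from any specific alloy system in the article.

```python
# Toy misfit check in the spirit of edge-to-edge matching: candidate
# matching directions should be close-packed rows with a small
# interatomic-spacing misfit (values below are illustrative).
r_matrix = 0.2863   # nm, atom spacing along a close-packed row of phase 1
r_precip = 0.2731   # nm, atom spacing along a candidate row of phase 2

misfit = abs(r_matrix - r_precip) / r_matrix
print(f"interatomic spacing misfit: {misfit:.1%}")
# A misfit of a few per cent would make this pair of rows a plausible
# matching direction; large misfits rule the pairing out.
```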
Abstract:
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when noise is low; but as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
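To make the setup concrete, here is a minimal sketch of zero-order Tikhonov regularization via the SVD, with the DP and GCV parameter-choice rules, on a toy ill-posed problem. The forward matrix and noise level are simulated stand-ins, not the heart-torso model; the L-curve rule (picking the corner of the log residual-norm versus log solution-norm curve) is omitted for brevity.

```python
# Zero-order Tikhonov regularization on a toy ill-posed problem, with
# discrepancy-principle (DP) and GCV choices of the parameter lambda.
import numpy as np

rng = np.random.default_rng(1)
m, n = 120, 80
A = rng.normal(size=(m, n)) @ np.diag(0.9 ** np.arange(n))  # ill-conditioned map
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
noise = rng.normal(scale=0.05, size=m)
b = A @ x_true + noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
b_perp = b - U @ beta                        # part of b outside range(A)

def solve(lam):
    return Vt.T @ (s / (s**2 + lam**2) * beta)   # filtered SVD solution

def residual_norm(lam):
    r = (lam**2 / (s**2 + lam**2)) * beta
    return np.sqrt(r @ r + b_perp @ b_perp)

def gcv(lam):
    dof = np.sum(s**2 / (s**2 + lam**2))     # effective degrees of freedom
    return residual_norm(lam)**2 / (m - dof)**2

lams = np.logspace(-4, 1, 200)
res = np.array([residual_norm(l) for l in lams])

# DP: pick lambda whose residual norm matches the noise norm (assumed
# known here; the paper estimates it with a model function).
delta = np.linalg.norm(noise)
lam_dp = lams[np.argmin(np.abs(res - delta))]

# GCV: minimise the generalized cross-validation function.
lam_gcv = lams[np.argmin([gcv(l) for l in lams])]

for name, lam in [("DP", lam_dp), ("GCV", lam_gcv)]:
    err = np.linalg.norm(solve(lam) - x_true) / np.linalg.norm(x_true)
    print(f"{name}: lambda = {lam:.3g}, relative error = {err:.2f}")
```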
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques based on redundancy. Low-density parity-check (LDPC) codes work along the principles of the Hamming code: the parity-check matrix is very sparse, and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
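A minimal worked example of the parity-check principle, using the classic (7,4) Hamming code rather than a full LDPC code: valid codewords satisfy Hc = 0 (mod 2), and a nonzero syndrome both flags and locates a single flipped bit. LDPC codes scale the same constraint to large sparse matrices decoded by probability propagation.

```python
# (7,4) Hamming code: columns of H are the binary numbers 1..7
# (row 0 is the least significant bit), so the syndrome of a
# single-bit error reads off the error position directly.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert not (H @ codeword % 2).any()        # valid codewords satisfy Hc = 0 (mod 2)

received = codeword.copy()
received[4] ^= 1                           # channel flips one bit

syndrome = H @ received % 2                # nonzero syndrome flags the error
pos = int("".join(map(str, syndrome[::-1])), 2) - 1   # binary read-off, 0-based
received[pos] ^= 1                         # correct the flipped bit
assert (received == codeword).all()
print("corrected single-bit error at position", pos)
```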
Abstract:
The paper illustrates the role of world knowledge in comprehending and translating texts. A short news item, which displays world knowledge fairly implicitly in condensed lexical forms, was translated by students from English into German. It is shown that their translation strategies changed from a first draft that stayed rather close to the surface structure of the source text to a final version that took into account situational aspects, text-typological conventions and the different background knowledge of the respective addressees. Decisions on how much world knowledge has to be made explicit in the target text, however, must be based on the relevance principle. Consequences for teaching and for the notions of semantic knowledge and world knowledge are discussed.
Abstract:
This article considers the role of accounting in organisational decision making. It challenges the rational nature of decisions made in organisations through the use of accounting models, and the problems of predicting the future through such models. The use of accounting in this manner is evaluated from an epochal postmodern stance. Issues raised by chaos theory and the uncertainty principle are used to demonstrate problems with the predictive ability of accounting models. The authors argue that any consideration of the predictive value of accounting needs to change to incorporate a recognition of the turbulent external environment if it is to be of use for organisational decision making. Thus it is argued that the role of accounting as a mechanism for knowledge creation regarding the future is fundamentally flawed. We take this as a starting point from which to identify the real purpose of accounting's predictive techniques, using their ritualistic role in myth creation to argue for the cultural benefits of such flawed techniques.
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
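The two-stage filter-and-rectify scheme lends itself to a compact numerical sketch. The toy 1-D reconstruction below builds the 'signature' from two first-derivative stages with a half-wave rectifier between them and then matches it against Gaussian first-derivative templates; it is an illustrative reading of the model, not the authors' implementation, and all scales are arbitrary.

```python
# Toy 1-D signature/template blur estimator: second-derivative signature
# built as two first-derivative stages separated by a half-wave rectifier,
# matched against Gaussian first-derivative templates.
import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def gauss_d1(x, sigma):
    return -x / sigma**2 * gauss(x, sigma)   # Gaussian first derivative

x = np.arange(-128.0, 129.0)
sigma_edge = 6.0
edge = np.cumsum(gauss(x, sigma_edge))
edge /= edge[-1]                             # Gaussian-blurred step edge

def d1(signal):
    return np.gradient(signal)

# Rectifier between the two derivative stages segments the gradient
# profile into regions of common sign before the second differentiation.
signature = d1(np.maximum(d1(edge), 0.0))

# The spatial scale of the best-fitting template codes the edge blur.
scales = np.linspace(1, 20, 96)
def match(sigma):
    t = gauss_d1(x, sigma)
    return abs(signature @ t) / (np.linalg.norm(t) * np.linalg.norm(signature))

best = scales[np.argmax([match(s) for s in scales])]
print(f"best-fitting template scale: {best:.1f} px (edge blur was {sigma_edge})")
```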
Abstract:
In recent years, UK industry has seen an explosive growth in the number of 'Computer Aided Production Management' (CAPM) system installations. Of the many CAPM systems, materials requirement planning/manufacturing resource planning (MRP/MRPII) is the most widely implemented. Despite the huge investments in MRP systems, over 80 percent are said to have failed within 3 to 5 years of installation. Many people now assume that Just-In-Time (JIT) is the best manufacturing technique. However, those who have implemented JIT have found that it also has many problems. The author argues that the success of a manufacturing company will not be due to a system that complies with a single technique, but to the integration of many techniques and the ability to make them complement each other in a specific manufacturing environment. This dissertation examines the potential for integrating MRP with JIT and two-bin systems to reduce the operational costs involved in managing bought-out inventory. Within this framework it shows that controlling MRP is essential to facilitate the integration process. The behaviour of MRP systems depends on the complex interactions between the numerous control parameters used. Methodologies/models are developed to set these parameters. The models are based on the Pareto principle. The idea is to use business targets to set a coherent set of parameters which not only enables those business targets to be realised, but also facilitates JIT implementation. This approach is illustrated in the context of an actual manufacturing plant, IBM Havant (a high-volume electronics assembly plant with the majority of its materials bought out). The parameter-setting models are applicable to the control of bought-out items in a wide range of industries and are not dependent on specific MRP software. The models have produced successful results in several companies and are now being developed as commercial products.
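As a pointer to how a Pareto-based parameter-setting exercise typically begins, here is a sketch of ABC classification of bought-out items by annual usage value. The data and the 80/15/5 cut-offs are hypothetical illustrations, not the thesis's actual models, which go well beyond this ranking step.

```python
# Illustrative ABC (Pareto) classification of bought-out items by annual
# usage value. Data and cut-offs are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
annual_value = rng.pareto(1.2, size=200) * 1000.0   # skewed usage values

order = np.argsort(annual_value)[::-1]              # rank items by value
cum_share = np.cumsum(annual_value[order]) / annual_value.sum()

# Classic cut-offs: A covers the first 80% of value, B the next 15%, C the rest.
klass = np.where(cum_share <= 0.80, "A", np.where(cum_share <= 0.95, "B", "C"))
for k in "ABC":
    n_k = int((klass == k).sum())
    share = annual_value[order][klass == k].sum() / annual_value.sum()
    print(f"class {k}: {n_k:3d} items, {share:.0%} of annual value")
```

Each class can then be given different control parameters, for example tighter MRP control for A items and simple two-bin control for C items, in line with the integration argument above.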
Abstract:
This work has used novel polymer design and fabrication technology to generate bead-form polymer-based systems with variable yet controlled release properties, specifically for the delivery of macromolecules, essentially peptides of therapeutic interest. The work involved investigation of the potential interaction between matrix ultrastructural morphology, in vitro release kinetics, bioactivity and immunoreactivity of selected macromolecules with limited hydrolytic stability, delivered from controlled release vehicles. The underlying principle involved photo-polymerisation of the monomer, hydroxyethyl methacrylate, around frozen ice crystals, leading to the production of a macroporous hydrophilic matrix. Bead-form matrices were fabricated in controllable size ranges in the region of 100 µm - 3 mm in diameter. The initial stages of the project involved the study of how two variables, the delivery speed of the monomer and the stirring speed of the non-solvent, affected the formation of macroporous bead-form matrices. From this, an optimal bench system for bead production was developed. Careful selection of monomer, solvents, crosslinking agent and polymerisation conditions led to a variable but controllable distribution of pore sizes (0.5 - 4 µm). Release of surrogate macromolecules, bovine serum albumin and FITC-linked dextrans, enabled the effects of macromolecule size and solubility on the rate of release to be studied. Incorporation of bioactive macromolecules allowed retained bioactivity to be determined (glucose oxidase and interleukin-2), whilst the release of insulin enabled determination of both bioactivity (using rat epididymal fat pad) and immunoreactivity (RIA). The work carried out has led to the generation of macroporous bead-form matrices, fabricated from a tissue-biocompatible hydrogel, capable of the sustained, controlled release of biologically active peptides, with potential use in the pharmaceutical and agrochemical industries.
Abstract:
Golfers, coaches and researchers alike have keyed in on putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment and the putting action phase), the putting action phase has attracted the most attention from coaches, players and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers' vision strategies with eye-tracking equipment. Unfortunately this research suffers from a number of shortcomings, which limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective was to re-evaluate golfers' putting vision strategies using binocular eye-tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and success. To facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers' binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer's vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers' vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research has been undertaken to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.
Abstract:
This paper proposes an integrative framework for the conduct of a more thorough and robust analysis regarding the linkage between Human Resource Management (HRM) and business performance. In order to provide the required basis for the proposed framework, initially, the core aspects of the main HRM models predicting business performance are analysed. The framework proposes both the principle of mediation (i.e. HRM outcomes mediate the relationship between organisational strategies and business performance) and the perspective of simultaneity of decision-making by firms with regard to the consideration of business strategies and HRM policies. In order to empirically test this framework the methodological approach of 'structural equation models' is employed. The empirical research is based on a sample of 178 organisations operating in the Greek manufacturing sector. The paper concludes that both the mediation principle and the simultaneity perspective are supported, emphasising further the positive role of HRM outcomes towards organisational performance.
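To illustrate the mediation principle described above, here is a minimal sketch using two OLS regressions on simulated data rather than the paper's full structural equation model; all variable names and effect sizes are hypothetical.

```python
# Minimal mediation sketch: strategy -> HRM outcomes -> performance,
# estimated with two OLS regressions on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 178                                    # same sample size as the study
strategy = rng.normal(size=n)              # organisational strategy score
hrm_out = 0.6 * strategy + rng.normal(scale=0.8, size=n)   # mediator
perf = 0.5 * hrm_out + 0.1 * strategy + rng.normal(scale=0.8, size=n)

# Path a: strategy -> HRM outcomes.
fit_a = sm.OLS(hrm_out, sm.add_constant(strategy)).fit()
# Paths b (mediator -> performance) and c' (direct effect), jointly.
X = sm.add_constant(np.column_stack([hrm_out, strategy]))
fit_b = sm.OLS(perf, X).fit()

indirect = fit_a.params[1] * fit_b.params[1]   # a * b indirect effect
print(f"indirect effect via HRM outcomes: {indirect:.2f}; "
      f"direct effect: {fit_b.params[2]:.2f}")
```

A full SEM, as used in the paper, estimates these paths simultaneously and can also accommodate the simultaneity perspective; the two-regression sketch only conveys the mediation logic.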