956 results for Covering law model
Abstract:
This article examines the current transfer pricing regime to consider whether it is a sound model to be applied to modern multinational entities. The arm's length price methodology is examined to enable a discussion of the arguments in favour of such a regime. The article then refutes these arguments, concluding that applying arm's length rules involves a legal fiction of imagining transactions between unrelated parties, contrary to the very reason multinational entities exist: to operate in ways that independent entities would not, which the arm's length rules fail to take into account. As such, there is a clear air of artificiality in applying the arm's length standard. To demonstrate this artificiality with respect to modern multinational entities, multinational banks are used as an example. The article concludes that the separate entity paradigm adopted by the traditional transfer pricing regime is incongruous with the economic theory of modern multinational enterprises.
Abstract:
The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models, and thus a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
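For context on the benchmark figure quoted above: parallel efficiency is simply the achieved speedup divided by the processor count. A minimal sketch with hypothetical timings, not the paper's measurements:

```python
# Parallel efficiency as reported in scaling benchmarks: a minimal sketch.
# The timings below are hypothetical, not the paper's measurements.

def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Efficiency E = speedup / processor count = (T1 / Tp) / p."""
    speedup = t_serial / t_parallel
    return speedup / n_procs

# Example: a run taking 1000 s on 1 processor and 9.8 s on 128 processors
print(parallel_efficiency(1000.0, 9.8, 128))  # ~0.80, i.e. ~80% efficiency
```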
A simulation model of cereal-legume intercropping systems for semi-arid regions I. Model development
Abstract:
Cereal-legume intercropping plays an important role in subsistence food production in developing countries, especially in situations of limited water resources. Crop simulation can be used to assess risk for intercrop productivity over time and space. In this study, a simple model for intercropping was developed for cereal and legume growth and yield under semi-arid conditions. The model is based on radiation interception and use, and incorporates a water stress factor. Total dry matter and yield are functions of photosynthetically active radiation (PAR), the fraction of radiation intercepted and radiation use efficiency (RUE). One of two PAR sub-models was used to estimate PAR from solar radiation: either PAR is 50% of solar radiation, or the ratio of PAR to solar radiation (PAR/SR) is a function of the clearness index (K_T). The fraction of radiation intercepted was calculated based on Beer's Law, with crop extinction coefficients (K) taken either from field experiments or from previous reports. RUE was calculated as a function of available soil water to a depth of 900 mm (ASW), with ASW determined either by the soil water balance method or by the decay curve approach. Thus, two alternatives for each of three factors, i.e., PAR/SR, K and ASW, were considered, giving eight possible models (2^3 = 8 combinations). Model calibration and validation were carried out for maize-bean intercropping systems using data collected in a semi-arid region (Bloemfontein, Free State, South Africa) during seven growing seasons (1996/1997-2002/2003). The combination of PAR estimated from the clearness index, a crop extinction coefficient from the field experiment and the decay curve model gave the most reasonable and acceptable result. The intercrop model developed in this study is simple, so this modelling approach can be employed to develop other cereal-legume intercrop models for semi-arid regions.
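To make the model structure concrete, the sketch below strings together one choice from each pair of sub-models described above (PAR as 50% of solar radiation, Beer's Law interception). The functional forms follow the abstract, but all parameter values are illustrative assumptions rather than the paper's calibrated maize-bean values, and RUE is passed as a constant here rather than computed from ASW.

```python
import math

# Minimal sketch of the radiation-driven growth calculation described above.
# Parameter values are illustrative, not the paper's calibrated values.

def intercepted_fraction(k: float, lai: float) -> float:
    """Beer's Law: fraction of PAR intercepted by a canopy of leaf area index LAI."""
    return 1.0 - math.exp(-k * lai)

def daily_dry_matter(solar_mj: float, k: float, lai: float,
                     rue_g_per_mj: float) -> float:
    """Dry matter gain (g/m^2/day) = PAR * fraction intercepted * RUE.
    In the full model, RUE would itself be a function of available soil water."""
    par = 0.5 * solar_mj            # simple sub-model: PAR is 50% of solar radiation
    return par * intercepted_fraction(k, lai) * rue_g_per_mj

# Example: 20 MJ/m^2/day solar radiation, K = 0.6, LAI = 2.5, RUE = 1.2 g/MJ
print(daily_dry_matter(20.0, 0.6, 2.5, 1.2))  # ~9.3 g/m^2/day
```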
Abstract:
We develop a model for exponential decay of broadband pulses and examine its implications for experiments on optical precursors. One of the signature features of Brillouin precursors is attenuation with a less rapid decay than that predicted by Beer's Law. Depending on the pulse parameters and the model adopted for the dielectric properties of the medium, the limiting z-dependence of the loss has been described as z^(-1/2), z^(-1/3), exponential, or, in more detailed descriptions, some combination of the above. Experimental results in the search for precursors are examined in light of the different models, and a stringent test for sub-exponential decay is applied to data on propagation of 500 femtosecond pulses through 1-5 meters of water.
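For reference, the competing decay behaviours can be written compactly. This is a sketch in standard notation, where I_0 is the input intensity and alpha the medium's absorption coefficient, not the paper's detailed model:

```latex
% Beer's Law: exponential attenuation with propagation depth z
I(z) = I_0 \, e^{-\alpha z}

% Sub-exponential (precursor-like) limiting behaviours discussed above
I(z) \propto z^{-1/2} \qquad \text{or} \qquad I(z) \propto z^{-1/3}
```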
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
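To make the closed-form E and M steps concrete, here is a minimal sketch of EM for the underlying normal mixture model on synthetic profiles. The paper's random-effects extension (correlated profiles, covariates, replication) is omitted, and diagonal covariances and synthetic data are simplifying assumptions for illustration.

```python
import numpy as np

# Minimal EM for a normal mixture over gene profiles: a sketch of the base
# model that the article extends with random effects and covariates.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])  # 100 profiles
K, (n, d) = 2, X.shape

pi = np.full(K, 1.0 / K)                    # mixing proportions
mu = X[rng.choice(n, K, replace=False)]     # initial cluster means
var = np.ones((K, d))                       # per-dimension variances

for _ in range(50):
    # E-step: posterior probability of each cluster for each profile (closed form)
    log_r = np.stack([
        np.log(pi[k])
        - 0.5 * np.sum(np.log(2 * np.pi * var[k]) + (X - mu[k]) ** 2 / var[k], axis=1)
        for k in range(K)
    ], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: closed-form updates of proportions, means and variances
    nk = r.sum(axis=0)
    pi = nk / n
    mu = (r.T @ X) / nk[:, None]
    var = np.stack([(r[:, k, None] * (X - mu[k]) ** 2).sum(axis=0) / nk[k]
                    for k in range(K)])

print("cluster sizes:", np.round(nk, 1))
```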
Abstract:
Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
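Power-law time-to-failure fits of the kind mentioned above commonly take the form E(t) = A + B(t_f - t)^m with B < 0 and 0 < m < 1, so cumulative energy release accelerates as t approaches the failure time t_f. A minimal sketch of such a fit on synthetic data (all values are illustrative, not the simulation's output):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a power-law time-to-failure function to synthetic cumulative energy
# release data. "True" parameters and noise level are illustrative.

def time_to_failure(t, A, B, t_f, m):
    return A + B * np.power(t_f - t, m)

t = np.linspace(0.0, 9.0, 200)
true = (10.0, -2.0, 10.0, 0.3)                     # assumed "true" parameters
E = time_to_failure(t, *true) + np.random.default_rng(1).normal(0, 0.05, t.size)

# Bounds keep t_f above the last observation so (t_f - t) stays positive
params, _ = curve_fit(time_to_failure, t, E, p0=(8.0, -1.0, 9.5, 0.5),
                      bounds=([0.0, -10.0, 9.01, 0.01], [20.0, 0.0, 20.0, 1.0]))
print("fitted (A, B, t_f, m):", np.round(params, 2))
```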
Abstract:
How do signals from the 2 eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of the response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision.
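The 'two-stage' architecture referred to above can be sketched schematically: a nearly linear monocular transducer with interocular suppression, binocular summation, then a second gain-control stage. The exponents and constants below are illustrative assumptions, not the fitted parameter values from the earlier work:

```python
# Schematic sketch of a two-stage binocular gain-control architecture.
# Parameter values are illustrative assumptions only.

def two_stage_response(c_left: float, c_right: float,
                       m=1.3, S=1.0, p=8.0, q=6.5, Z=0.1) -> float:
    # Stage 1: nearly linear monocular transduction (m close to 1)
    # with gain control driven by both eyes' contrasts
    stage1_L = c_left ** m / (S + c_left + c_right)
    stage1_R = c_right ** m / (S + c_right + c_left)
    # Binocular summation of the two monocular signals
    b = stage1_L + stage1_R
    # Stage 2: binocular gain control after summation
    return b ** p / (Z + b ** q)

# Example: equal 10% contrast in both eyes vs. 10% contrast in one eye only
print(two_stage_response(10.0, 10.0), two_stage_response(10.0, 0.0))
```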
Abstract:
This second edition contains many new questions covering recent developments in the field of landlord and tenant law, including Bruton v London and Quadrant Housing Trust, Hemmingway Securities Ltd v Dunraven Ltd, British Telecommunications plc v Sun Life Assurance Society plc and Graysim Holdings Ltd v P&O Property Holdings Ltd. New topics covered include the Landlord and Tenant (Covenants) Act 1995, the Contracts (Rights of Third Parties) Act 1999 and the Agricultural Tenancies Act 1995. In addition, the authors have made substantial revisions to existing questions in order to bring them into line with recent case law and statutory provisions, including the Housing Act 1996 and the Unfair Terms in Consumer Contracts Regulations 1999. The book also contains guidance on examination technique and achieving success in the exam.
Abstract:
The wear rates of sliding surfaces are significantly reduced if mild oxidational wear can be encouraged. It is hence of prime importance, in the interest of component life and material conservation, to understand the factors necessary to promote mild oxidational wear. The present work investigates the fundamental mechanism of the running-in wear of BS EN 31/EN 8 steel couples under various conditions of load, speed and test duration. Unidirectional sliding experiments were carried out on a pin-on-disc wear machine where frictional force, wear rate, temperature and contact resistance were continuously monitored during each test. Physical methods of analysis (X-ray, scanning electron microscopy, etc.) were used to examine the wear debris and worn samples. The wear rate versus load curves revealed mild wear transitions which, under long durations of running, categorized mild wear into four distinct regions. α-Fe2O3, Fe3O4, FeO and an oxide mixture were the predominant oxides in the four regions of oxidational wear identified above the Welsh T2 transition. The wear curves were strongly affected by speed and test duration. A surface model was used to calculate the surface parameters, and the results were found to be comparable with the experimentally observed parameters. Oxidation was responsible for the transition from severe to mild wear at a load corresponding to the Welsh T2 transition. In the running-in period, sufficient energy input and surface hardness enabled the oxide growth rate to increase and eventually exceed the rate of removal, at which point mild wear ensued. A model was developed to predict the wear volume up to the transition, and remarkable agreement was found between the theoretical prediction and the experimentally measured values. The oxidational mechanism responsible for the transition to mild wear under equilibrium conditions was related to the formation of thick homogeneous oxide plateaux on subsurface hardened layers. FeO was the oxide formed initially at the onset of mild wear, but the oxide type changed during the total running period to give an equilibrium oxide whose nature depended on the loads applied.
Abstract:
Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
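The three-stage summation model described above can be caricatured in a few lines: square-law transduction, regional integration of contrast energy, and long-range pooling. Probability summation is approximated here by a Minkowski sum, a common modelling choice; the exponent, region size and inputs are all illustrative assumptions rather than the paper's fitted model.

```python
import numpy as np

# Sketch of the pooling stages described above. Linear filtering within
# receptive fields is omitted; shown are square-law transduction, regional
# contrast-energy integration, and long-range Minkowski pooling.

def regional_energy(responses: np.ndarray, region: int) -> np.ndarray:
    """Sum squared filter responses (contrast energy) over fixed-size regions."""
    squared = responses ** 2                       # square-law transduction
    n = (len(squared) // region) * region
    return squared[:n].reshape(-1, region).sum(axis=1)

def minkowski_pool(energies: np.ndarray, beta: float = 4.0) -> float:
    """Long-range 'probability summation' as a Minkowski sum with exponent beta."""
    return float((energies ** beta).sum() ** (1.0 / beta))

rng = np.random.default_rng(2)
filter_responses = rng.normal(0.0, 1.0, 64)        # hypothetical filter outputs
print(minkowski_pool(regional_energy(filter_responses, region=8)))
```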
Abstract:
The Aston Centre for Human Resources (ACHR) was created at Aston Business School, Aston University, in February 2006. The mission of the Centre is both to inform and influence practice through conducting high quality, challenging research, in order to extend existing theoretical frameworks and to develop new and relevant conceptual models to represent and guide the changing realities facing businesses and the people they employ in the 21st century. The text is aimed at:
* Students studying an Employment Law module on an HR or general business degree, whether undergraduate or postgraduate.
* Students taking the Employment Law elective on the CIPD's Professional Development Scheme (PDS).
* Students studying Employee Relations or Diversity.
This new edition has been thoroughly updated and includes expanded coverage of the impact of EU Law, and of Discrimination Law including ageism, sexual orientation, religious belief, harassment and disability. It is an ideal text for business students on undergraduate and postgraduate courses taking a first module in Employment Law, covering a comprehensive range of topics and enabling students to gain a solid understanding of the key principles of the subject. The engaging, authoritative writing style and range of learning features make this a refreshingly accessible and student-friendly read. Each chapter includes summaries of topical and relevant cases, direction to key sources of legal information and suggestions for further reading, while covering the CIPD's standards for the Employment Law elective on the Professional Development Scheme (PDS). The text includes a range of case studies, tasks and examples to consolidate learning, and a brand new section on Employment Law study skills to help students get to grips with how to access and read law reports, understand the sources of the law, find and use up-to-date legal information (particularly websites), and prepare for exams and written assignments.
Abstract:
Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage, and each was evaluated for its ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to predict this breakage satisfactorily: the dispersions were propelled through a number of different geometries, such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries.
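The abstract does not give the functional form of the "power-law threshold" model, so the sketch below shows one plausible shape under stated assumptions: no breakage below a critical TED, and a size reduction growing as a power law in the excess dissipation rate and exposure time. All constants are invented for illustration.

```python
# Sketch of a 'power-law threshold' breakage model of the kind described
# above. Functional form and constants are illustrative assumptions, not
# the paper's fitted model.

def broken_size(d_initial_um: float, ted: float, exposure_s: float,
                ted_crit: float = 1e3, c: float = 0.05, a: float = 0.4,
                b: float = 0.2) -> float:
    """Mean particle size (um) after exposure to dissipation rate ted (W/kg)."""
    if ted <= ted_crit:
        return d_initial_um                      # below threshold: no breakage
    excess = ted - ted_crit                      # dissipation above the threshold
    reduction = c * excess ** a * exposure_s ** b
    return max(d_initial_um - reduction, 0.0)

# Example: 20 um particles through a high-shear bend (TED = 5e4 W/kg, 0.01 s)
print(broken_size(20.0, 5e4, 0.01))
```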
Abstract:
Cross-country petroleum pipelines are environmentally sensitive because they traverse varied terrain covering crop fields, forests, rivers, populated areas, deserts, hills and offshore stretches. Any malfunction of these pipelines may have a devastating effect on the environment. Hence, pipeline operators plan and design pipeline projects with sufficient consideration of environmental and social aspects alongside the technological alternatives. Traditionally, in project appraisal, the optimum technical alternative is selected using financial analysis. Impact assessments (IA) are then carried out to justify the selection and obtain subsequent statutory approval. However, the IAs often suggest alternative sites and/or alternative technology and implementation methodology, resulting in revision of the entire technical and financial analysis. This study addresses these issues by developing an integrated framework for project feasibility analysis using the analytic hierarchy process (AHP), a multiple attribute decision-making technique. The model considers technical analysis (TA), socioeconomic IA (SEIA) and environmental IA (EIA) in an integrated framework to select the best project from a few alternative feasible projects; subsequent financial analysis then justifies the selection. The entire methodology is illustrated through a case application on a cross-country petroleum pipeline project in India.
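As a concrete illustration of the AHP machinery underlying such a framework, the sketch below derives priority weights for three hypothetical alternatives from a single pairwise comparison matrix via the principal eigenvector, and checks consistency. The matrix entries are invented for illustration, not data from the Indian case study.

```python
import numpy as np

# Core AHP calculation: priority weights from a pairwise comparison matrix.
# A[i, j] says how strongly alternative i is preferred to alternative j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```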
Abstract:
The article proposes an operating model of knowledge quantum engineering for identification and prognostic decision-making under conditions of α-indeterminacy. The synthesized operating model solves three basic tasks: an At-task, to formalize tk-knowledge; a Bt-task, to recognize (identify) objects according to observed results; and a Ct-task, to extrapolate (prognosticate) the observed results. Operating derivation of identification and prognostic decisions using authentic different-level algorithmic knowledge quanta (the tRAKZ-method) assumes the synthesis of an authentic knowledge quantum database (BtkZ) using an induction operator as a system of implicative laws; a deduction operator then derives identification or prognostic decisions from the observed tk-knowledge and BtkZ in the form of new tk-knowledge.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.