995 results for Accounting measurement
Abstract:
Cochin University of Science and Technology
Abstract:
Antennas are necessary and vital components of communication and radar systems, but their inability to adjust to new operating scenarios can sometimes limit system performance. Reconfigurable antennas can adapt to changing system requirements or environmental conditions and provide additional levels of functionality that may result in wider instantaneous frequency bandwidths, more extensive scan volumes, and radiation patterns with more desirable side-lobe distributions. Their agility and diversity have opened new horizons for many applications, especially in cognitive radio, Multiple Input Multiple Output (MIMO) systems, and satellites. Reconfigurable antennas satisfy the requirements for increased functionality, such as direction finding, beam steering, radar, and command and control, within a confined volume. The intelligence associated with reconfigurable antennas revolves around the switching mechanisms utilized. In the present work, we have investigated frequency-reconfigurable polarization-diversity antennas using two methods: (1) using low-loss, high-isolation switches such as PIN diodes, the antenna is structurally reconfigured to keep the elements near their resonant dimensions for different frequency bands and/or polarizations; (2) incorporating variable capacitors (varactors) to overcome many of the problems associated with switches and their biasing. The performance of these designs has been studied using standard simulation tools from industry and academia, and verified experimentally. Antenna design guidelines are also deduced by accounting for the resonances. One of the major contributions of the thesis lies in the analysis of the designed antennas using FDTD-based numerical computation to validate their performance.
Abstract:
In this paper we show that the orthorhombic phase of FeSi2 (stable at room temperature) displays a sizable anisotropy in the infrared spectra, with minor effects in the Raman data as well. This fact is not at all trivial, since the crystal structure corresponds to a moderate distortion of the fluorite symmetry. Our analysis is carried out on small single crystals grown by flux transport, through polarization-resolved far-infrared reflectivity and Raman measurements. Their interpretation has been obtained by means of spectra simulated with tight-binding molecular dynamics.
Abstract:
Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with a measuring instrument using a preset process, the measurement error could be due to the instrument, the process, or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors have been identified. Though these factors are well known, the exact extent of the error and the extent to which different factors affect human errors in measurement are less well reported. Human errors in measurement are characterized through an experimental study using different subjects, in which the factors were changed one at a time and the measurements made by the subjects were recorded. From the pre-experiment survey, it is observed that the respondents could not give correct answers to questions about the actual extent of human-related measurement errors. This confirmed the fears expressed regarding the lack of knowledge about the extent of human-related measurement errors among professionals associated with quality. In the post-experiment phase of the survey, however, the answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
Abstract:
The problem of using information available from one variable X to make inference about another, Y, is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both X and Y are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model, one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random variable.
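As a concrete illustration of the distinction drawn above, the sketch below simulates the Berkson setup Y = µ(X) + ɛ, X = Z + η for a hypothetical linear µ; the linear form, sample size, and error scales are assumptions made only for this illustration and are not taken from the talk. With a linear mean response, naive least squares of Y on the observed Z stays unbiased under Berkson error, in contrast to the classical model W = X + error.

```python
# A minimal simulation sketch of the Berkson measurement error model.
# The linear mu, sample size, and error scales below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
Z = rng.uniform(0.0, 10.0, n)       # observable predictor (e.g., nominal dose)
eta = rng.normal(0.0, 1.0, n)       # Berkson error
X = Z + eta                         # true but unobservable predictor
eps = rng.normal(0.0, 0.5, n)       # regression error with E(eps) = 0

def mu(x, beta0=1.0, beta1=2.0):
    """Hypothetical linear mean response used only for this illustration."""
    return beta0 + beta1 * x

Y = mu(X) + eps                     # responses are generated from the true X

# Naive least squares of Y on Z: for a linear mu the slope is still estimated
# without bias, because Y = beta0 + beta1*Z + (beta1*eta + eps) and eta is
# independent of Z. For a nonlinear mu this no longer holds.
beta1_hat, beta0_hat = np.polyfit(Z, Y, 1)
print(f"intercept {beta0_hat:.3f}, slope {beta1_hat:.3f}")  # close to 1.0 and 2.0
```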
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations consist of trajectories of compositional data, that is, of sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
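The pipeline described above (transform, spline-smooth, define a dissimilarity, cluster) can be sketched roughly as follows. This is a minimal illustration assuming 3-part compositions, one possible isometric log-ratio (ilr) basis, and an L2 distance between smoothed curves as the shape-and-level metric; the simulated data, smoothing parameter, and number of clusters are invented for the example and are not taken from the study.

```python
# Sketch: smooth ilr-transformed compositional trajectories with splines,
# then cluster the smoothed curves (illustrative data and settings only).
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)           # common domain of the trajectories

def ilr(p):
    """Isometric log-ratio transform for 3-part compositions (one possible basis)."""
    return np.column_stack([
        np.log(p[:, 0] / p[:, 1]) / np.sqrt(2.0),
        np.log(p[:, 0] * p[:, 1] / p[:, 2] ** 2) / np.sqrt(6.0),
    ])

def simulate_trajectory(shift):
    """Noisy 3-part compositional trajectory (made-up example data)."""
    a = np.exp(np.sin(2 * np.pi * t) + shift + rng.normal(0, 0.1, t.size))
    b = np.exp(np.cos(2 * np.pi * t) + rng.normal(0, 0.1, t.size))
    c = np.exp(0.5 + rng.normal(0, 0.1, t.size))
    total = a + b + c
    return np.column_stack([a, b, c]) / total[:, None]

trajectories = [simulate_trajectory(s) for s in [0, 0, 0, 2, 2, 2]]

# Step 1: spline-smooth each ilr coordinate to isolate the smooth part.
smoothed = []
for traj in trajectories:
    coords = ilr(traj)
    smoothed.append(np.column_stack(
        [UnivariateSpline(t, coords[:, j], s=0.5)(t) for j in range(coords.shape[1])]
    ))

# Step 2: a shape-and-level dissimilarity -- L2 distance between smoothed curves --
# followed by hierarchical (Ward) clustering into two classes.
X = np.array([s.ravel() for s in smoothed])
clusters = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
print(clusters)
```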
Abstract:
Abstract taken from the publication
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthén, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data to be hierarchical; therefore a multilevel analysis is required. We use Muthén's partial maximum likelihood approach, called the pseudobalanced solution (Muthén, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are carried out in order to compare this multilevel analysis with classic methods of analysis such as those used in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within the ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities, and other related quality measures, are defined, computed and interpreted.
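The within/between decomposition that underlies this kind of multilevel analysis can be illustrated with a small sketch. Everything below is an assumption made for the example (a balanced design, three items, simulated alter-level responses); the paper's pseudobalanced estimator also handles unbalanced ego sizes. The sketch only shows how the pooled within-ego and between-ego covariance matrices, and per-item intraclass correlations, are formed.

```python
# Sketch of the two-level covariance decomposition used in multilevel
# factor analysis of ego-alter data (balanced design assumed for simplicity).
import numpy as np

rng = np.random.default_rng(3)
n_egos, n_alters, n_items = 200, 5, 3     # hypothetical balanced design

ego_means = rng.normal(size=(n_egos, 1, n_items))             # between-ego part
data = ego_means + rng.normal(scale=0.7, size=(n_egos, n_alters, n_items))

grand_mean = data.reshape(-1, n_items).mean(axis=0)
ego_bar = data.mean(axis=1)                                    # ego-level means

# Pooled within-ego covariance: alter deviations from their own ego's mean.
within_dev = (data - ego_bar[:, None, :]).reshape(-1, n_items)
S_pw = within_dev.T @ within_dev / (n_egos * (n_alters - 1))

# Between-ego covariance of ego means around the grand mean.
between_dev = ego_bar - grand_mean
S_b = n_alters * (between_dev.T @ between_dev) / (n_egos - 1)

# Intraclass correlation per item: share of variance located at the ego level.
sigma_b = (np.diag(S_b) - np.diag(S_pw)) / n_alters
icc = sigma_b / (sigma_b + np.diag(S_pw))
print(np.round(icc, 3))
```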
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed. A comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another, and considerably worse than disattenuated regression.
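The core disattenuation idea, extended naively to an interaction term, can be sketched as follows. This is a rough illustration under assumptions not taken from the article: made-up data, assumed reliabilities, and the interaction's reliability approximated by the product of the component reliabilities (reasonable only when the components are roughly uncorrelated with zero means). It is not the article's exact estimator.

```python
# Sketch of disattenuated regression with an interaction term:
# correct the predictor covariance matrix for measurement error,
# then solve the normal equations on the corrected matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 120                                       # a small sample, as in the article's setting

# Illustrative observed scales: innovation (x1), style of budget use (x2),
# their product term, and business performance (y).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.4 * x1 + 0.3 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.8, size=n)
X = np.column_stack([x1, x2, x1 * x2])

# Assumed reliabilities; the interaction's reliability is approximated
# by the product of the component reliabilities.
rel = np.array([0.80, 0.75, 0.80 * 0.75])

# Observed covariances of predictors and outcome.
S = np.cov(np.column_stack([X, y]), rowvar=False)
Sxx, sxy = S[:3, :3].copy(), S[:3, 3].copy()

# Disattenuation: replace observed variances on the diagonal with estimated
# true-score variances (reliability * observed variance), assuming measurement
# errors are uncorrelated across scales and with the errors in y.
Sxx[np.diag_indices(3)] = np.diag(Sxx) * rel

beta = np.linalg.solve(Sxx, sxy)              # error-corrected slope estimates
print(beta)
```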
Abstract:
This is a research paper. The research presented in this paper aimed to investigate how to measure collaborative design performance and, in turn, improve the final design output during a design process, with a clear objective of developing a Design Performance Measurement (DPM) matrix to measure design project team members' design collaboration performance.
Abstract:
Obtaining a competitive advantage, development, growth, and durability, among others, are the aspects organizations seek through the strategies they define. However, it is not enough to design the goals and objectives to be achieved; these purposes must be translated into action plans and all members of the organization must be involved, which is accomplished through strategy implementation. In this sense, the strategy implementation stage in an organization sets in motion the path established in the strategy formulation stage and is therefore directly related to its success or failure. Nevertheless, this process does not depend on a few members of the organization, on executives or on staff, but on the good synchronization and harmony of all those who are part of it. Through a theoretical review and empirical evidence, this research seeks to highlight the influence of two key aspects of the organization on strategy implementation: on the one hand, leaders, through their interpersonal competencies, and on the other, human capital, through its values. The results obtained show that both the leader's competencies and the values of human capital are decisive for the adequate implementation of organizational strategy.
Abstract:
Implementing an MCS is a need that organizations face as they grow in size, but experience shows that this methodology has both success and failure cases, so it is important to identify and take into account the factors that influence implementation so that the system is effective. This project aims to analyze the variables and tools for implementing an MCS in an organization. For this analysis, a broad review of the theoretical and practical literature was carried out. Finally, the result obtained was a definition of the determining factors for the implementation of an effective MCS in a company.
Abstract:
Information technologies have become an important factor to consider in each of the processes carried out along the supply chain. Their implementation and correct use give companies advantages that favor operational performance along the chain. The development and application of software have contributed to the integration of the different members of the chain, so that everyone from suppliers to the end customer perceives benefits in operational performance variables and in the level of satisfaction, respectively. On the other hand, it is important to consider that implementation does not always produce positive results; on the contrary, the implementation process can be seriously affected by barriers that prevent maximizing the benefits that ICT provides.