Abstract:
Photon migration in a turbid medium has been modeled in many different ways. The motivation for such modeling is based on technology that can be used to probe potentially diagnostic optical properties of biological tissue. Surprisingly, one of the more effective models is also one of the simplest. It is based on statistical properties of a nearest-neighbor lattice random walk. Here we develop a theory allowing one to calculate the number of visits by a photon to a given depth, if it is eventually detected at an absorbing surface. This mimics cw measurements made on biological tissue and is directed towards characterizing the depth reached by photons injected at the surface. Our development of the theory uses formalism based on the theory of a continuous-time random walk (CTRW). Formally exact results are given in the Fourier-Laplace domain, which, in turn, are used to generate approximations for parameters of physical interest.
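As an illustrative complement (not the analytic CTRW treatment described in the abstract), the following minimal Monte Carlo sketch simulates a symmetric nearest-neighbor walk in depth and tallies the average number of visits to a chosen depth for walkers that are eventually absorbed at the surface. The injection depth, step probability and step cap are arbitrary assumptions made only for the sketch.

```python
import random

def visits_to_depth(target_depth, n_walkers=100_000, p_up=0.5, max_steps=10_000, seed=1):
    """Monte Carlo sketch: mean number of visits to `target_depth` by nearest-neighbor
    walkers in depth that are eventually absorbed (detected) at the surface (depth 0).
    Illustrative only; the paper's results are analytic, in the Fourier-Laplace domain."""
    rng = random.Random(seed)
    total_visits, detected = 0, 0
    for _ in range(n_walkers):
        depth, visits = 1, 0                    # photon injected just below the surface
        for _ in range(max_steps):
            depth += -1 if rng.random() < p_up else 1
            if depth == target_depth:
                visits += 1
            if depth == 0:                      # absorbed/detected at the surface
                detected += 1
                total_visits += visits
                break
    return total_visits / detected if detected else float("nan")

print(visits_to_depth(target_depth=5))
```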
Abstract:
This paper presents research on converting non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold. First, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones. Second, to propose a data model and a mechanism for publishing evaluation results, making them available to the educational community, who may use them as a quality measure when selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, which makes a correct conversion difficult, if not impossible; and the formulae (whether images or text) have been written without taking into account the standards of mathematical writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be quite useful in other accessibility assessment scenarios.
Abstract:
In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. This study is based on the hypothesis that a linear relationship exists between the number of impacts of the LIDAR laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis, and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in agriculture and, more specifically, in precision fruit growing.
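To illustrate the kind of linear relationship hypothesized above, the sketch below fits leaf area against LIDAR impact counts by ordinary least squares. The numbers are hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical calibration data: LIDAR impact counts and measured leaf areas (m^2).
impacts   = np.array([1200, 2500, 3100, 4800, 6000, 7400])
leaf_area = np.array([0.9,  1.8,  2.3,  3.6,  4.4,  5.5])

# Least-squares fit of the assumed linear relationship: leaf_area = a * impacts + b.
a, b = np.polyfit(impacts, leaf_area, 1)
r = np.corrcoef(impacts, leaf_area)[0, 1]
print(f"leaf_area = {a:.2e} * impacts + {b:.2f},  r^2 = {r**2:.3f}")
```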
Abstract:
Purpose: Several well-known managerial accounting performance measurement models rely on causal assumptions. Whilst users of the models express satisfaction and link them with improved organizational performance, academic research on real-world applications shows few reliable statistical associations. This paper provides a discussion of the "problematic" of causality in a performance measurement setting. Design/methodology/approach: This is a conceptual study based on an analysis and synthesis of the literature from managerial accounting, organizational theory, strategic management and social scientific causal modelling. Findings: The analysis indicates that dynamic, complex and uncertain environments may challenge any reliance upon valid causal models. Due to cognitive limitations and judgmental biases, managers may fail to trace a correct cause-and-effect understanding of the value creation in their organizations. However, even lacking this validity, causal models can support strategic learning and serve as organizational guides if they are able to mobilize managerial action. Research limitations/implications: Future research should highlight the characteristics necessary for the elaboration of convincing and appealing causal models and the social process of their construction. Practical implications: Managers of organizations using causal models should be clear on the purposes of their particular models and their limitations. In particular, difficulties are observed in specifying detailed cause-and-effect relations and their potential for communicating and directing attention. Managers should therefore construct their models to suit the particular purpose envisaged. Originality/value: This paper provides an interdisciplinary and holistic view on the issue of causality in managerial accounting models.
Abstract:
Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices with normally distributed measurements, and describe its utility for evaluating inter- and intra-rater agreement when more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices and, thereafter, to derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results: The approach is illustrated with a real case example in which the agreement between two instruments, a handheld mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice with each device. A simulation study is implemented to evaluate and compare the accuracy of the approach with two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The proposed method is straightforward since the TDI estimate is derived directly from a probability interval of a normally distributed variable on its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, making it simple for any practitioner to apply our proposal to assess agreement.
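A minimal sketch of the idea is given below: a normal tolerance interval is built on the paired differences and its largest absolute bound is taken as a TDI-type limit. It assumes approximately normal differences, uses Howe's standard approximation for the two-sided tolerance factor, and simulated blood-pressure readings; the paper's exact procedure may differ in detail.

```python
import numpy as np
from scipy import stats

def tdi_tolerance_bound(x, y, p=0.90, conf=0.95):
    """Sketch: TDI-style bound from a normal tolerance interval on paired differences.
    Assumes d = x - y is approximately normal; uses Howe's approximation for the
    two-sided tolerance factor covering proportion p with confidence conf."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n, m, s = d.size, d.mean(), d.std(ddof=1)
    z = stats.norm.ppf((1 + p) / 2)
    chi2 = stats.chi2.ppf(1 - conf, n - 1)
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)   # Howe's tolerance factor
    lo, hi = m - k * s, m + k * s                       # tolerance interval for d
    return max(abs(lo), abs(hi))                        # TDI-type upper bound

# Hypothetical paired systolic blood pressure readings from two devices:
rng = np.random.default_rng(0)
sbp_a = rng.normal(130, 12, 384)
sbp_b = sbp_a + rng.normal(2, 4, 384)
print(f"TDI(0.90) upper bound = {tdi_tolerance_bound(sbp_a, sbp_b):.1f} mmHg")
```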
Abstract:
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high resolution of some co-propagating setups, we show that it can be used to perform different single-molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles), down to the limit where the beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need to measure the force versus molecular extension curve. This method is of general interest for dual-trap optical tweezers setups and can be extended to setups which do not directly measure forces.
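To show the flavor of a fluctuation-based stiffness estimate, the sketch below applies the standard equipartition argument for a single harmonic trap: with F = kappa*x and <x^2> = kBT/kappa, the force variance is <dF^2> = kBT*kappa, so kappa follows from the measured force fluctuations. This is an illustrative simplification, not the paper's full two-trap dumbbell analysis, and the force trace is simulated.

```python
import numpy as np

KBT = 4.11  # thermal energy at ~25 degC, in pN*nm

def trap_stiffness_from_force(force_pn):
    """Sketch: harmonic-trap stiffness (pN/nm) from equilibrium force fluctuations,
    using Var(F) = kBT * kappa (equipartition for F = kappa * x)."""
    f = np.asarray(force_pn, float)
    return f.var(ddof=1) / KBT

# Hypothetical equilibrium force trace (pN) for a trap of stiffness 0.08 pN/nm:
rng = np.random.default_rng(0)
fake_trace = rng.normal(10.0, np.sqrt(KBT * 0.08), 50_000)
print(f"estimated stiffness = {trap_stiffness_from_force(fake_trace):.3f} pN/nm")
```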
Abstract:
Extension of shelf life and preservation of products are both very important for the food industry. However, just as with other processes, speed and higher manufacturing performance are also beneficial. Although microwave heating is utilized in a number of industrial processes, there are many unanswered questions about its effects on foods. Here we analyze whether the effects of continuous flow microwave heating are equivalent to those of traditional heat transfer methods. In our study, the effects of heating liquid foods by conventional and continuous flow microwave heating were examined. Among other properties, we compared the stability of the liquid foods between the two heat treatments. Our goal was to determine whether continuous flow microwave heating and conventional heating methods have the same effects on liquid foods and, therefore, whether microwave heat treatment can effectively replace conventional heat treatments. We compared the colour and separation phenomena of the samples treated by the different methods. For milk, we also monitored the total viable cell count; for orange juice, the vitamin C content, in addition to the taste of the product assessed by sensory analysis. The majority of the results indicate that the circulating coil microwave method used here is equivalent to the conventional heating method based on thermal conduction and convection. However, some results from the analysis of the milk samples show clear differences between the heat transfer methods. According to our results, the colour parameters (lightness, red-green and blue-yellow values) of the microwave-treated samples differed not only from the untreated control but also from the traditionally heat-treated samples. The differences are visually undetectable; however, they become evident through analytical measurement with a spectrophotometer. This finding suggests that, besides thermal effects, microwave-based food treatment can alter product properties in other ways as well.
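The lightness, red-green and blue-yellow values mentioned above correspond to CIELAB coordinates, for which an instrumental colour difference can be summarized by the CIE76 Delta E. The sketch below computes it for hypothetical readings; a common rule of thumb places the visual detection threshold at roughly 2-3 units, which is how a difference can be instrumentally measurable yet visually undetectable.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two CIELAB triplets (L*, a*, b*)."""
    return math.dist(lab1, lab2)

# Hypothetical readings: untreated control vs. microwave-treated milk sample.
control   = (84.2, -1.8, 6.5)
microwave = (83.1, -1.5, 7.4)
# ~1.5 units: measurable with a spectrophotometer, below the usual visual threshold.
print(f"Delta E*ab = {delta_e_cie76(control, microwave):.2f}")
```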
Abstract:
A new family of distortion risk measures, GlueVaR, is proposed in Belles-Sampera et al. (2013) to procure a risk assessment lying between those provided by common quantile-based risk measures. GlueVaR risk measures may be expressed as a combination of these standard risk measures. We show here that this relationship may be used to obtain approximations of GlueVaR measures for general skewed distribution functions using the Cornish-Fisher expansion. A subfamily of GlueVaR measures satisfies the tail-subadditivity property. An example of risk measurement based on real insurance claim data is presented, where implications of tail-subadditivity in the aggregation of risks are illustrated.
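The ingredient behind such approximations is the Cornish-Fisher expansion, which corrects the standard normal quantile for skewness and excess kurtosis; a quantile-based measure such as VaR can then be approximated from the first moments of a skewed loss. The sketch below shows only this quantile step, with hypothetical moments; the GlueVaR weights themselves are not reproduced here.

```python
from scipy import stats

def cornish_fisher_var(mu, sigma, skew, ex_kurt, alpha=0.95):
    """Cornish-Fisher approximation of the alpha-quantile (VaR) of a skewed loss:
    adjust the standard normal quantile for skewness and excess kurtosis."""
    z = stats.norm.ppf(alpha)
    w = (z
         + (z**2 - 1) * skew / 6
         + (z**3 - 3 * z) * ex_kurt / 24
         - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mu + sigma * w

# Hypothetical claim-severity moments:
print(f"VaR_95 = {cornish_fisher_var(mu=1.0, sigma=0.8, skew=1.5, ex_kurt=4.0):.2f}")
```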
Abstract:
The most suitable method for the estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
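A minimal sketch of the kernel-based procedure described above: standardize the sizes by the sample geometric mean, fit a Gaussian kernel density estimate, and evaluate the Shannon (differential) entropy numerically. The bandwidth rule, integration grid and simulated sample are assumptions of the sketch, not choices reported by the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def size_diversity(sizes, grid_points=2048):
    """Sketch of a kernel-based size diversity estimate:
    1) divide sizes by the sample geometric mean,
    2) fit a Gaussian KDE to the standardized values,
    3) evaluate -integral p(x) ln p(x) dx on a grid by the trapezoid rule."""
    x = np.asarray(sizes, float)
    x = x / np.exp(np.mean(np.log(x)))          # standardize by the geometric mean
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() * 0.5, x.max() * 1.5, grid_points)
    p = np.clip(kde(grid), 1e-300, None)        # avoid log(0) where the density vanishes
    return -np.trapz(p * np.log(p), grid)

# Hypothetical body-size sample:
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.6, size=500)
print(f"size diversity = {size_diversity(sample):.3f}")
```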
Abstract:
This work proposes a fully digital interface circuit for the measurement of inductive sensors using a low-cost microcontroller (µC) and without any intermediate active circuit. Apart from the µC and the sensor, the circuit requires just an external resistor and a reference inductance, so that two RL circuits with a high-pass filter (HPF) topology are formed. The µC appropriately excites these RL circuits in order to measure the discharging time of the voltage across each inductance (i.e., sensing and reference) and then uses these discharging times to estimate the sensor inductance. Experimental tests using a commercial µC show a non-linearity error (NLE) lower than 0.5% FSS (full-scale span) when measuring inductances from 1 mH to 10 mH, and from 10 mH to 100 mH.
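The principle can be sketched as follows: if the voltage across the inductor decays roughly as v(t) = v0*exp(-t*R/L), the time needed to cross a fixed threshold scales with L, so measuring the sensing and reference discharging times with the same resistor and threshold allows a ratiometric estimate that cancels R and the threshold. This is a sketch of the idea under these assumptions, not the paper's exact estimation algorithm, and the component values are hypothetical.

```python
import math

def discharge_time(L, R, v0=3.3, v_th=1.65):
    """Time for an inductor voltage v(t) = v0*exp(-t*R/L) to fall to v_th."""
    return (L / R) * math.log(v0 / v_th)

def estimate_inductance(t_sensor, t_ref, L_ref):
    """Ratiometric estimate: with the same R and threshold, t is proportional to L,
    so L_x = L_ref * t_x / t_ref."""
    return L_ref * t_sensor / t_ref

# Hypothetical component values:
R, L_ref, L_true = 1_000.0, 10e-3, 33e-3
t_ref = discharge_time(L_ref, R)
t_x   = discharge_time(L_true, R)
print(f"estimated L = {estimate_inductance(t_x, t_ref, L_ref) * 1e3:.1f} mH")
```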