291 results for Hilbert-Mumford criterion
Abstract:
Clinical experience plays an important role in the development of expertise, particularly when coupled with reflection on practice. There is debate, however, regarding the amount of clinical experience required to become an expert: suggested lengths of practice range from five to 15 years. This study aimed to investigate the association between length of experience and therapists' level of expertise in the field of cerebral palsy with upper limb hypertonicity, using an empirical procedure named Cochrane–Weiss–Shanteau (CWS). The methodology involved re-analysis of quantitative data collected in two previous studies. In Study 1, 18 experienced occupational therapists made hypothetical clinical decisions on 110 case vignettes, while in Study 2, 29 therapists considered 60 case vignettes drawn randomly from those used in Study 1. A CWS index was calculated for each participant's case decisions. Then, in each study, Spearman's rho was calculated to assess the correlation between duration of experience and level of expertise. There was no significant association between these two variables in either study. These analyses corroborate previous findings of no association between length of experience and judgemental performance. Length of experience may therefore not be an appropriate criterion for determining level of expertise in cerebral palsy practice.
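The CWS index itself is straightforward to compute: it is the ratio of a judge's discrimination (spread of mean judgements across different cases) to their inconsistency (spread across repeated presentations of the same case). A minimal sketch, with hypothetical therapist ratings (the function name and data are illustrative, not drawn from the studies):

```python
from statistics import mean, pvariance

def cws_index(judgements):
    """Cochrane-Weiss-Shanteau index: ratio of discrimination
    (variance of mean judgements across cases) to inconsistency
    (mean variance across repeated presentations of the same case).
    `judgements` maps a case id to the list of repeated ratings."""
    case_means = [mean(ratings) for ratings in judgements.values()]
    discrimination = pvariance(case_means)
    inconsistency = mean(pvariance(ratings) for ratings in judgements.values())
    return discrimination / inconsistency

# Hypothetical ratings by one therapist, two presentations per vignette:
# nearly identical repeats and well-separated cases give a high index
ratings = {"mild": [7, 7], "severe": [3, 3], "moderate": [5, 5.5]}
print(round(cws_index(ratings), 2))  # -> 128.67
```

A therapist who rates erratically on repeats (high inconsistency) or rates every case alike (low discrimination) gets a low index, regardless of years in practice, which is exactly the property the studies exploit.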
Abstract:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overfit the corneal surface because of the weakness of their penalty functions, while bootstrap-based techniques tend to underfit the surface or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas yields models in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic ones.
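The criteria in this family share one general form: a goodness-of-fit term plus a penalty that grows with the model order k; they differ only in the penalty weight. A hedged sketch of the selection step, with a hypothetical residual-sum-of-squares table and one common EDC-style penalty choice, c_N = sqrt(N ln N) (the paper's exact penalty sequence may differ):

```python
import math

def edc_order(rss_by_order, n_samples):
    """Select the model order k minimising an EDC-style criterion
    N*ln(RSS_k / N) + k * c_N, with c_N = sqrt(N * ln N) as one
    admissible choice (c_N/N -> 0, c_N/ln ln N -> infinity)."""
    c_n = math.sqrt(n_samples * math.log(n_samples))
    scores = {k: n_samples * math.log(rss / n_samples) + k * c_n
              for k, rss in rss_by_order.items()}
    return min(scores, key=scores.get)

# Hypothetical fit: RSS drops sharply up to order 2, then flattens
# at the noise floor, so the criterion should pick order 2
rss = {0: 500.0, 1: 120.0, 2: 10.0, 3: 9.8, 4: 9.7}
print(edc_order(rss, n_samples=100))  # -> 2
```

With a weak penalty (e.g. AIC's 2k) the marginal RSS improvements at orders 3 and 4 can still win, which is the overfitting behaviour the abstract attributes to classical criteria.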
Abstract:
Promoted ignition testing (NASA Test 17) [1] is used to determine the relative flammability of metal rods in oxygen-enriched atmospheres. A promoter is used to ignite a metal sample rod, initiating sample burning. If a predetermined length of the sample burns beyond the promoter, the material is considered flammable at the condition tested. Historically, this burn length has been somewhat arbitrary. Experiments were performed to better understand this test by obtaining insight into the effect a burning promoter has on the preheating of a test sample. Test samples of several metallic materials were prepared and coupled to fast-responding thermocouples along their length. Thermocouple measurements and test video were synchronized to determine temperature increase with respect to time and position along each test sample. A recommended flammability burn length, based on a sample preheat threshold of 500 °F (260 °C), was derived from the preheated zone measured in these tests and determined to be 30 mm (1.18 in.). Validation of this length and its rationale are presented.
Abstract:
We present the findings of a study into the implementation of explicitly criterion-referenced assessment in undergraduate courses in mathematics. We discuss students' concepts of criterion referencing and also the various interpretations that this concept has among mathematics educators. Our primary goal was to move towards a classification of criterion-referencing models in quantitative courses. A secondary goal was to investigate whether explicitly presenting assessment criteria to students was useful to them and guided them in responding to assessment tasks. The data and feedback from students indicate that while students found the criteria easy to understand and useful in informing them as to how they would be graded, the criteria did not alter the way they actually approached the assessment activity.
Abstract:
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs. We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
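For discounted BMDPs, both notions of optimality can be computed by interval value iteration: each backup picks, within the transition-probability intervals, the distribution most (optimistic) or least (pessimistic) favourable to the current value estimate. A minimal sketch on a hypothetical two-state BMDP; this follows the standard discounted construction, not the paper's average-reward extension:

```python
def extreme_dist(lo, hi, values, optimistic=True):
    """Choose the distribution within per-successor bounds [lo_i, hi_i]
    that maximises (or minimises) sum_i p_i * values[i]: start every
    p_i at its lower bound, then pour the remaining mass into states
    in order of value (descending if optimistic, ascending if not)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=optimistic)
    p, slack = list(lo), 1.0 - sum(lo)
    for i in order:
        give = min(hi[i] - lo[i], slack)
        p[i] += give
        slack -= give
    return p

def interval_value_iteration(actions, gamma=0.9, optimistic=True, iters=200):
    """actions[s] = list of (reward, lo, hi) tuples, one per action."""
    v = [0.0] * len(actions)
    for _ in range(iters):
        v = [max(r + gamma * sum(p * vj for p, vj in
                                 zip(extreme_dist(lo, hi, v, optimistic), v))
                 for r, lo, hi in acts)
             for acts in actions]
    return v

# Hypothetical BMDP: state 1 is absorbing with zero reward; state 0
# earns 1 and moves to either state with probability in [0.4, 0.6]
bmdp = [
    [(1.0, [0.4, 0.4], [0.6, 0.6])],   # state 0: one action
    [(0.0, [0.0, 1.0], [0.0, 1.0])],   # state 1: absorbing
]
print(interval_value_iteration(bmdp, optimistic=True)[0])   # 1/(1 - 0.9*0.6)
print(interval_value_iteration(bmdp, optimistic=False)[0])  # 1/(1 - 0.9*0.4)
```

The optimistic backup keeps as much mass as the intervals allow on the rewarding state, the pessimistic one does the opposite, so the two fixed points bracket the value of every MDP consistent with the bounds.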
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
Abstract:
Fibrous scaffolds of engineered structures can be chosen as promising porous environments when an approved criterion validates their applicability for a specific medical purpose. For such biomaterials, this paper investigated various structural characteristics to determine whether they are appropriate descriptors. A number of poly(3-hydroxybutyrate) scaffolds were electrospun, each possessing a distinct architecture obtained by altering material and processing conditions. Mouse fibroblast cells (L929) were then cultured to evaluate cell viability on each scaffold after attachment for 24 h and proliferation for 48 and 72 h. The scaffolds' porosity, pore number, pore size and pore distribution were quantified, and none could establish a relationship with the viability results. Virtual reconstruction of the mats introduced a new criterion, “Scaffold Percolative Efficiency” (SPE), which addresses the above descriptors collectively. It was hypothesized to quantify the efficacy of fibrous scaffolds by integrating the porosity and interconnectivity of the pores. A correlation of 80% indicated good agreement between the SPE values and the spectrophotometric absorbance of viable cells, with viability exceeding 350% of that of the controls.
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved by the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, this work proposes a measure, the sequential error ratio (SER), that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.
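Under the simplifying assumption of statistically independent decisions (the abstract notes favourable dependence can do better), the error rates of an AND-of-OR fusion of n instances with m attempts each have a closed form, and the 'best combination performance' rule is a brute-force search over subsets; that exponential search is what SER is meant to avoid. A sketch with a hypothetical classifier pool and an assumed AND-over-instances, OR-over-samples layout (SER itself is not reproduced here):

```python
from itertools import combinations

def fused_error_rates(classifiers, m):
    """False-accept / false-reject rates when every classifier gets m
    attempts (accept if any attempt accepts) and all classifiers must
    accept. Assumes statistically independent decisions -- a
    simplification. `classifiers` is a list of (far, frr) pairs."""
    far_all, accept_all = 1.0, 1.0
    for fa, fr in classifiers:
        far_all *= 1.0 - (1.0 - fa) ** m   # impostor accepted at least once
        accept_all *= 1.0 - fr ** m        # genuine user accepted at least once
    return far_all, 1.0 - accept_all

# 'Best combination performance' rule: exhaustive search over 2-subsets
# of a hypothetical pool, scored by total error
pool = [(0.02, 0.05), (0.04, 0.03), (0.01, 0.10)]
best = min(combinations(pool, 2), key=lambda c: sum(fused_error_rates(c, m=2)))
print(best)
```

Even for this toy pool the search visits every subset; with tens of classifiers the subset count explodes, which motivates a stage-wise selection measure such as SER.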
Abstract:
Complex numbers are a fundamental aspect of the mathematical formalism of quantum physics. Quantum-like models developed outside physics have often overlooked the role of complex numbers; in particular, previous models in Information Retrieval (IR) ignored them. We argue that to advance the use of quantum models of IR, one has to lift the constraint of real-valued representations of the information space and package more information within the representation by means of complex numbers. As a first attempt, we propose a complex-valued representation for IR, which explicitly uses complex-valued Hilbert spaces, so that terms, documents and queries are represented as complex-valued vectors. The proposal consists of integrating distributional semantics evidence within the real component of a term vector, whereas ontological information is encoded in the imaginary component. Our proposal has the merit of lifting the role of complex numbers from a computational byproduct of the model to the very mathematical texture that unifies different levels of semantic information. An empirical instantiation of our proposal is tested in the TREC Medical Record task of retrieving cohorts for clinical studies.
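The scoring step in such a representation can stay close to the familiar vector-space model: similarity becomes the magnitude of a Hermitian inner product between complex query and document vectors. A minimal sketch using Python's built-in complex type; the three-term vocabulary and the weights are invented for illustration, and the paper's actual weighting schemes are not reproduced:

```python
def score(query, doc):
    """Magnitude of the Hermitian inner product <q|d> between
    complex-valued term vectors."""
    return abs(sum(q.conjugate() * d for q, d in zip(query, doc)))

# Per term: real part = distributional (corpus) weight,
# imaginary part = ontological weight, as in the proposed encoding
doc   = [0.5 + 0.2j, 0.1 + 0.7j, 0.0 + 0.0j]
query = [0.6 + 0.1j, 0.0 + 0.5j, 0.3 + 0.0j]
print(round(score(query, doc), 3))  # -> 0.67
```

Because the inner product is taken before the magnitude, real and imaginary components interact in the score rather than being ranked as two separate signals, which is the point of packaging both kinds of evidence into one vector.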
Abstract:
The authors must be congratulated for their original and important study. The flooding of urbanised areas constitutes a hazard to the population and infrastructure. Floods through inundated urban environments have been studied only recently, and few studies have considered the potential impact of flowing waters on pedestrians...
Abstract:
Background Heatwaves can cause excess deaths numbering from tens to thousands within a couple of weeks in a local area. The excess mortality due to a special event (e.g., a heatwave or an epidemic outbreak) is estimated by subtracting the mortality expected under ‘normal’ conditions from the historical daily mortality records. This calculation is a scientific challenge because of the stochastic temporal pattern of daily mortality data, which is characterised by (a) long-term changes in mean level (i.e., non-stationarity) and (b) the non-linear temperature-mortality association. The Hilbert-Huang Transform (HHT) algorithm is a novel method originally developed for analysing non-linear and non-stationary time series in the field of signal processing; however, it has not been applied in public health research. This paper aimed to demonstrate the applicability and strength of the HHT algorithm in analysing health data. Methods Special R functions were developed to implement the HHT algorithm and decompose the daily mortality time series into trend and non-trend components in terms of the underlying physical mechanism. The excess mortality is calculated directly from the resulting non-trend component series. Results The Brisbane (Queensland, Australia) and Chicago (United States) daily mortality time series were used to calculate the excess mortality associated with heatwaves. The HHT algorithm estimated 62 excess deaths related to the February 2004 Brisbane heatwave. To calculate the excess mortality associated with the July 1995 Chicago heatwave, the algorithm needed to handle the mode-mixing issue; it estimated 510 excess deaths for that event.
To exemplify potential applications, the HHT decomposition results for the Brisbane data were used as input to a subsequent regression analysis investigating the association between excess mortality and different risk factors. Conclusions The HHT algorithm is a novel and powerful analytical tool for time series analysis. It has real potential for a wide range of applications in public health research because of its ability to decompose a non-linear and non-stationary time series into trend and non-trend components consistently and efficiently.
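Once the series is split into trend and non-trend components, the excess-mortality arithmetic itself is a simple sum of observed-minus-baseline over the event window. A sketch with toy numbers; the HHT/EMD decomposition is not implemented here, so the trend component is supplied directly (a constant baseline standing in for the paper's decomposed trend):

```python
def excess_mortality(daily_deaths, trend, event_days):
    """Excess deaths over an event window: observed counts minus the
    trend (baseline) component, summed over the event days. In the
    paper the trend comes from an HHT decomposition; here it is
    passed in directly as a stand-in."""
    return sum(daily_deaths[d] - trend[d] for d in event_days)

# Toy series: a baseline of ~50 deaths/day with a 3-day heatwave spike
deaths = [50, 51, 49, 50, 80, 85, 78, 50, 49]
trend  = [50.0] * len(deaths)
print(excess_mortality(deaths, trend, event_days=range(4, 7)))  # -> 93.0
```

The method's real difficulty, as the abstract notes, is producing a trend that is valid for a non-stationary series (and handling mode mixing); the subtraction step above is deliberately trivial once that decomposition exists.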