970 results for Statistical performance indexes
Abstract:
In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software uses historical data to compute passing/failing probabilities and simulates future student academic performance with stochastic (Monte Carlo) methods, in accordance with the specific university regulations. This makes it possible to compute the academic performance rates for the individual subjects of the curriculum in each semester, as well as the overall rates for the set of subjects in the semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or with one extra year, and the efficiency rate, measured as the ratio of the credits of the curriculum to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (Design of Experiments). The simulator has been adapted to the curriculum of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
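As a rough illustration of the Monte Carlo approach described above, the following Python sketch simulates a cohort through a toy curriculum and reports a graduation rate and an efficiency rate. All subject names, probabilities and credit counts are invented; the paper estimates passing/failing probabilities from historical records and applies the actual UPM regulations, none of which are reproduced here.

```python
import random

# Hypothetical curriculum: (passing probability, credits) per subject.
# Names and numbers are invented for illustration only.
CURRICULUM = {
    "Calculus I": (0.65, 6),
    "Physics I": (0.60, 6),
    "Programming": (0.75, 6),
    "Algebra": (0.70, 6),
}
MAX_SEMESTERS = 4  # scheduled duration plus one extra year

def simulate_student():
    """Return (graduated within MAX_SEMESTERS, credits actually taken)."""
    pending = dict(CURRICULUM)
    credits_taken = 0
    for _ in range(MAX_SEMESTERS):
        for subject, (p_pass, credits) in list(pending.items()):
            credits_taken += credits        # every attempt consumes credits
            if random.random() < p_pass:    # Monte Carlo pass/fail draw
                del pending[subject]
        if not pending:
            return True, credits_taken
    return False, credits_taken

def run(n_students=100_000):
    curriculum_credits = sum(c for _, c in CURRICULUM.values())
    graduated = total_taken = 0
    for _ in range(n_students):
        ok, taken = simulate_student()
        graduated += ok
        total_taken += taken
    # Graduation rate: share finishing as scheduled or with one extra year.
    print(f"graduation rate: {graduated / n_students:.3f}")
    # Efficiency rate: curriculum credits over credits actually taken.
    print(f"efficiency rate: {n_students * curriculum_credits / total_taken:.3f}")

run()
```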
Abstract:
Evolutionary algorithms are well suited to solving damage identification problems in a multi-objective context. However, the performance of these methods can deteriorate quickly with increasing noise intensity, which introduces numerous uncertainties. In this paper, a statistical structural damage detection method formulated in a multi-objective context is proposed. The statistical analysis accounts for the uncertainties in the structural model and in the measured structural modal parameters. The presented method is verified on a number of simulated damage scenarios, and the effects of noise and damage levels on damage detection are investigated.
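The multi-objective formulation can be illustrated with a Pareto-dominance filter over candidate damage estimates scored by two noisy residual objectives. This is a toy sketch, not the paper's method: the objective functions, the assumed true damage level and the noise model are all invented surrogates.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def noisy_objectives(damage, true_damage=0.30, noise=0.02, repeats=20):
    """Two invented residual objectives corrupted by measurement noise.
    Averaging repeated noisy evaluations stands in, very loosely, for the
    paper's statistical treatment of uncertainty."""
    f_res = abs(damage - true_damage)    # frequency-residual surrogate
    m_res = (damage - true_damage) ** 2  # mode-shape-residual surrogate
    def avg(base):
        return sum(base + random.gauss(0.0, noise) for _ in range(repeats)) / repeats
    return (avg(f_res), avg(m_res))

candidates = [i / 100 for i in range(60)]  # candidate damage ratios 0.00-0.59
scores = {d: noisy_objectives(d) for d in candidates}
front = [d for d in candidates
         if not any(dominates(scores[e], scores[d])
                    for e in candidates if e != d)]
print("Pareto-optimal damage estimates:", front)
```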
Abstract:
The purpose of this thesis is to study the approximation to phenomena of heat transfer in glazed buildings through their scale replicas. The central task of this thesis is, therefore, the comparison of the thermal performance of scale models without distortion with the corresponding thermal performance of their full-scale prototypes.
Indoor air temperatures of the scale model and of the corresponding prototype are the data to be compared. The first chapter on the State of the Art offers a broad view, consisting of a historical review of the uses of scale models from antiquity to the present day. The section on the State of the Technique presents the benefits and difficulties associated with their use. The section on the State of the Research then reviews current scientific papers and theses on scale models, focusing on functional scale models, i.e., scale models that additionally replicate one or more functions of their prototypes. Scale models may or may not be distorted. Distorted scale models carry intentional changes, either in dimensions scaled unevenly or in constructive characteristics or materials, in order to obtain a specific performance, for instance a specific thermal performance. Undistorted scale models, scaled evenly, replicate as far as possible the dimensional proportions and constructive configurations of their prototypes of reference. These undistorted, functional scale models are especially useful for architects because they can serve simultaneously as functional elements of analysis and as decision-making elements during design. Despite this versatility, such models have in general been used very little for the study of the thermal performance of buildings. Subsequently, the theories for analyzing the experimental thermal data collected from the scale models, and their applicability to the corresponding full-scale prototypes, are explained. The experiments, in the laboratory and outdoors, are then detailed. First, experiments with simple cube models at different scales are described; in every test, the larger prototype and the corresponding undistorted scale model were subjected to the same environmental conditions. Second, a step forward is taken with simultaneous outdoor tests of an undistorted scale model of a relatively simple, lightweight, glazed building: the prototype workshop located in the School of Architecture (ETSAM) of the Technical University of Madrid (UPM). For the analysis of the experimental data, known theories and resources are applied: direct comparisons, statistical analyses, dimensional analysis and, not least, simulations. Simulations in particular allow flexible comparisons with the experimental data; apart from the simulation software EnergyPlus, a simulation algorithm was developed ad hoc for this research. Finally, the discussion and conclusions of this research are presented.
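One way dimensional analysis connects model and prototype data is through matching dimensionless groups. A minimal sketch, assuming conduction-dominated heat transfer so that equal Fourier numbers (Fo = αt/L²) relate the two time axes; this particular similarity criterion is an illustrative assumption, not a result stated in the abstract.

```python
def prototype_time(t_model_s, scale_ratio, alpha_model=1.0, alpha_proto=1.0):
    """Map a scale-model time to prototype time by matching Fourier numbers.

    Fo = alpha * t / L**2; equal Fo in model and prototype gives
    t_proto = t_model * (alpha_model / alpha_proto) * (L_proto / L_model)**2.
    scale_ratio = L_model / L_proto (e.g. 0.1 for a 1:10 model).
    """
    return t_model_s * (alpha_model / alpha_proto) / scale_ratio**2

# A 1:10 undistorted model of the same materials: 1 h of model data
# corresponds to ~100 h at the prototype under conduction similarity.
print(prototype_time(3600, 0.10) / 3600, "hours")
```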
Abstract:
In this study, we estimate the statistical significance of structure prediction by threading. We introduce a single parameter ε that serves as a universal measure of the probability that the best alignment is indeed a native-like analog. The parameter ε takes into account both the length and the composition of the query sequence and the number of decoys in the threading simulation. It can be computed directly from the query sequence and the potential of interactions, eliminating the need for sequence reshuffling and realignment. Although our theoretical analysis is general, here we compare its predictions with the results of gapless threading. Finally, we estimate the number of decoys from which the native structure can be found by existing potentials of interactions. We discuss how this analysis can be extended to determine the optimal gap penalties for any sequence-structure alignment (threading) method, thus optimizing it to its maximum possible performance.
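The exact closed form of ε, derived from the query composition and the potential of interactions, is not reproduced in the abstract. As a generic stand-in, the sketch below scores the significance of the best threading energy among N decoys under a Gaussian random-energy assumption; the function name and the distributional assumption are illustrative, not the paper's.

```python
import math
import random

def significance_of_best(scores):
    """P-value for the lowest of N alignment energies under a Gaussian
    random-energy assumption: the probability that the minimum of N
    decoy-like scores reaches the observed best score by chance. A generic
    stand-in for the paper's epsilon, whose exact form is not given here."""
    n = len(scores)
    best = min(scores)
    rest = [s for s in scores if s != best] or scores
    mu = sum(rest) / len(rest)
    sigma = (sum((s - mu) ** 2 for s in rest) / len(rest)) ** 0.5 or 1e-12
    z = (best - mu) / sigma
    p_single = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # P(one score <= best)
    return 1.0 - (1.0 - p_single) ** n                   # P(min of n <= best)

decoys = [random.gauss(0.0, 1.0) for _ in range(1000)]
print(significance_of_best(decoys + [-6.0]))  # clear outlier: tiny p-value
print(significance_of_best(decoys))           # best decoy alone: p near 0.5
```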
Abstract:
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble “miscodes” of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated “strength” of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was “weak,” indicating that the two types of errors were “linked.” It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding “strategy” that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved “strongly” encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the “carried over” information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the “dynamic” nature of the role hippocampus plays in delay type memory tasks.
Abstract:
The field of natural language processing (NLP) has seen a dramatic shift in both research direction and methodology in the past several years. In the past, most work in computational linguistics tended to focus on purely symbolic methods. Recently, more and more work has been shifting toward hybrid methods that combine new empirical, corpus-based techniques, including probabilistic and information-theoretic methods, with traditional symbolic approaches. This work is made possible by the recent availability of linguistic databases that add rich linguistic annotation to corpora of natural language text. These methods have already led to dramatic improvements in the performance of a variety of NLP systems, with similar improvements likely in the coming years. This paper surveys these trends in three areas of recent progress: part-of-speech tagging, stochastic parsing, and lexical semantics.
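As a small illustration of the probabilistic techniques surveyed, the sketch below decodes part-of-speech tags with the Viterbi algorithm over a toy bigram HMM. All tags, words and probabilities are hand-set for illustration and are not drawn from the paper.

```python
# A toy bigram-HMM part-of-speech tagger: Viterbi decoding over hand-set
# transition and emission probabilities. Unseen events get a tiny floor.
TAGS = ["DET", "NOUN", "VERB"]
TRANS = {("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.3, ("<s>", "VERB"): 0.1,
         ("DET", "NOUN"): 0.9, ("DET", "VERB"): 0.05, ("DET", "DET"): 0.05,
         ("NOUN", "VERB"): 0.6, ("NOUN", "NOUN"): 0.3, ("NOUN", "DET"): 0.1,
         ("VERB", "DET"): 0.5, ("VERB", "NOUN"): 0.4, ("VERB", "VERB"): 0.1}
EMIT = {("DET", "the"): 0.9, ("NOUN", "dog"): 0.4, ("NOUN", "barks"): 0.1,
        ("VERB", "barks"): 0.5, ("VERB", "dog"): 0.05}

def viterbi(words):
    """Return the most probable tag sequence under the toy HMM."""
    trellis = [{t: (TRANS.get(("<s>", t), 1e-6) * EMIT.get((t, words[0]), 1e-6),
                    [t]) for t in TAGS}]
    for w in words[1:]:
        layer = {}
        for t in TAGS:
            p, path = max(
                (prev_p * TRANS.get((pt, t), 1e-6) * EMIT.get((t, w), 1e-6), path)
                for pt, (prev_p, path) in trellis[-1].items())
            layer[t] = (p, path + [t])
        trellis.append(layer)
    return max(trellis[-1].values())[1]

print(viterbi(["the", "dog", "barks"]))  # expected: ['DET', 'NOUN', 'VERB']
```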
Abstract:
This paper describes JANUS, a modular, massively parallel, reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links; the processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation instructions and modest database size. We discuss the architecture of this configurable machine and focus on its use for Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms high-end PCs by a factor of ≈1000. We also discuss the role of JANUS in other classes of scientific applications.
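A minimal single-CPU sketch of the kind of workload JANUS targets: a Metropolis Monte Carlo sweep for the 2D Ising model. The nearest-neighbor, regular structure of the update is what maps naturally onto a grid of FPGA processing elements; this is illustrative Python, not JANUS code.

```python
import math
import random

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over an L x L Ising lattice with periodic
    boundaries. Updates touch only nearest neighbors, which is why kernels
    like this map well onto a grid of processing elements."""
    L = len(spins)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1      # accept the flip

L, beta = 32, 0.44  # beta near the 2D Ising critical point
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(100):
    metropolis_sweep(spins, beta)
print("magnetization per spin:", sum(map(sum, spins)) / L**2)
```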
Abstract:
Hearing loss in the elderly leads to difficulty in speech perception. The test commonly used in speech audiometry is the determination of the maximum speech recognition score (IR-Max) at a single speech presentation level. However, a more adequate procedure would be to carry out the test at several intensities, since the score depends on the speech level at the time of the test and is related to the degree and configuration of the hearing loss. Imprecision in obtaining the IR-Max may lead to an erroneous diagnostic hypothesis and to failure of the hearing-loss intervention process. Objective: To verify the influence of the speech presentation level on the speech recognition test in elderly people with sensorineural hearing loss of different audiometric configurations. Methods: Participants were 64 elderly subjects, 120 ears (61 female and 59 male), aged between 60 and 88 years, divided into groups: G1, 23 ears with flat configuration; G2, 55 ears with sloping configuration; G3, 42 ears with steeply sloping configuration. The inclusion criteria were: sensorineural hearing loss of mild to severe degree; no use of a hearing aid (AASI), or use for less than two months; and absence of cognitive alterations. The following procedures were performed: determination of the speech recognition threshold (LRF), of the speech recognition score (IRF) at several intensities, and of the most comfortable level (MCL) and uncomfortable level (UCL) for speech. Lists of 11 monosyllables were used to shorten the test. The statistical analysis comprised analysis of variance (ANOVA) and Tukey's test. Results: The sloping configuration was the most frequent. Individuals with flat configuration showed the highest mean speech recognition score. Considering the total evaluated, 27.27% of the individuals with flat configuration reached the IR-Max at the MCL, as did 38.18% with sloping and 26.19% with steeply sloping configuration. The IR-Max was found at the UCL in 40.90% of the individuals with flat configuration, 45.45% with sloping and 28.20% with steeply sloping configuration. The highest and lowest mean scores were found, respectively, at: G1, 30 and 40 dB SL; G2, 50 and 10 dB SL; G3, 45 and 10 dB SL. There is no single speech level suitable for all audiometric configurations; the sensation levels that yielded the highest mean scores were: G1, 20 to 30 dB SL; G2, 20 to 50 dB SL; G3, 45 dB SL. The MCL and the UCL minus 5 dB for speech were not effective for determining the IR-Max. Conclusions: The presentation level influenced speech recognition performance for monosyllables in elderly people with sensorineural hearing loss in all audiometric configurations. Moderate hearing loss and the sloping audiometric configuration were the most frequent in this population, followed by the steeply sloping and flat configurations.
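The ANOVA-plus-Tukey analysis mentioned above can be sketched as follows with SciPy and statsmodels; the score values are invented and the group labels merely mirror G1, G2 and G3.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical speech-recognition scores (%) for the three audiometric
# configurations; values are invented for illustration only.
g1 = np.array([84, 80, 88, 76, 82])  # flat configuration
g2 = np.array([70, 66, 74, 61, 68])  # sloping configuration
g3 = np.array([58, 64, 55, 60, 62])  # steeply sloping configuration

# One-way ANOVA across the three groups.
F, p = f_oneway(g1, g2, g3)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey's HSD post-hoc test for pairwise group differences.
scores = np.concatenate([g1, g2, g3])
labels = ["G1"] * len(g1) + ["G2"] * len(g2) + ["G3"] * len(g3)
print(pairwise_tukeyhsd(scores, labels))
```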
Abstract:
Purpose. We aimed to characterize the distribution of the vector parameters ocular residual astigmatism (ORA) and topography disparity (TD) in a sample of clinical and subclinical keratoconus eyes, and to evaluate their diagnostic value for discriminating between these conditions and healthy corneas. Methods. This study comprised a total of 43 keratoconic eyes (27 patients, 17–73 years) (keratoconus group), 11 subclinical keratoconus eyes (eight patients, 11–54 years) (subclinical keratoconus group) and 101 healthy eyes (101 patients, 15–64 years) (control group). In all cases, a complete corneal analysis was performed using a Scheimpflug photography-based topography system. Anterior corneal topographic data were imported into the iASSORT software (ASSORT Pty. Ltd), which allowed the calculation of ORA and TD. Results. The mean magnitude of the ORA was 3.23 ± 2.38, 1.16 ± 0.50 and 0.79 ± 0.43 D in the keratoconus, subclinical keratoconus and control groups, respectively (p < 0.001). The mean magnitude of the TD was 9.04 ± 8.08, 2.69 ± 2.42 and 0.89 ± 0.50 D in the same groups, respectively (p < 0.001). Good diagnostic performance of ORA (cutoff 1.21 D, sensitivity 83.7%, specificity 87.1%) and TD (cutoff 1.64 D, sensitivity 93.3%, specificity 92.1%) was found for the detection of keratoconus. The diagnostic ability of these parameters for the detection of subclinical keratoconus was more limited (ORA: cutoff 1.17 D, sensitivity 60.0%, specificity 84.2%; TD: cutoff 1.29 D, sensitivity 80.0%, specificity 80.2%). Conclusion. The vector parameters ORA and TD discriminate with good precision between keratoconus and healthy corneas. For the detection of subclinical keratoconus, only TD appears to be valid.
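The reported cutoffs with paired sensitivity and specificity are characteristic of ROC analysis. A minimal sketch, choosing a cutoff by the Youden index on invented ORA magnitudes; the study's actual cutoff-selection procedure is not stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical ORA magnitudes (D): 1 = keratoconus, 0 = healthy control.
# Values are invented; the abstract reports only the resulting cutoffs.
ora   = np.array([3.1, 2.4, 4.0, 1.4, 2.8, 0.6, 0.9, 1.6, 0.7, 0.8, 1.0, 0.5])
is_kc = np.array([1,   1,   1,   1,   1,   0,   0,   0,   0,   0,   0,   0  ])

fpr, tpr, thresholds = roc_curve(is_kc, ora)
youden = tpr - fpr  # Youden's J = sensitivity + specificity - 1
best = youden.argmax()
print(f"cutoff      : {thresholds[best]:.2f} D")
print(f"sensitivity : {tpr[best]:.2f}")
print(f"specificity : {1 - fpr[best]:.2f}")
```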
Abstract:
Vol. 31 bound with its Proceedings, v. 3 (310.6 In745 v.3).
Abstract:
Includes indexes.
Abstract:
Includes indexes.
Abstract:
Each volume bound in 2 parts, paged continuously. Part 2 of each has an added title page and contents for the entire volume.
Abstract:
Includes bibliographical references and indexes.
Abstract:
Jakovljevic considers the concept of theatricality as central to understanding the events that took place in Yugoslavia. He examines the country’s trials, state ceremonies and festivals, army maneuvers, propaganda, and pop culture as “rehearsals and temporary enactments of an ideologically formulated future.”