869 results for Computer Experiment
Abstract:
Road accidents are a highly relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied both in academic work and by public administrations to better understand this complex phenomenon. This thesis is devoted to the analysis of macro models for road accidents, with application to the Spanish case. The objectives of the thesis may be divided into two blocks:

a. To achieve a better understanding of the road accident phenomenon by applying and comparing two of the most frequently used macro models: DRAG (Demand for Road use, Accidents and their Gravity) and UCM (Unobserved Components Model). The application was made to van-involved accident data in Spain for the period 2000-2009. The analysis was carried out within the frequentist framework, using state-of-the-art software: TRIO, SAS and TRAMO/SEATS.

b. Concern about the application of the models and about which input variables to include has driven the research to improve, by theoretical and practical means, the understanding of methodological choice and model selection procedures. The theoretical developments have been applied to fatal accidents during the period 2000-2011 and to van-involved road accidents in 2000-2009.

This has resulted in the following contributions:

a. Insight into the models has been gained through interpretation of the effects of the input variables on the response and through the prediction accuracy of both models. The behavior of van-involved road accidents has been explained during this process.

b1. Development of an input variable selection procedure, which is crucial for an efficient choice of inputs. Following the results of a), the procedure uses a DRAG-like model whose parameters are estimated within the Bayesian framework. The procedure has been applied to total fatal road accident data in Spain for the period 2000-2011. The results of the model selection procedure are compared and validated against a dynamic regression model, given that the original data have a stochastic trend. The results are comparable, and the new proposal optimizes the model selection process at low computational cost.

b2. A methodology for theoretical comparison between the two competing models through Monte Carlo simulation, computer experiment design and ANOVA. The models have different structures, which affects the estimation of the effects of the input variables; the comparison is therefore carried out in terms of those effects on the response, which in general differ but should be related. Building on the study in b1), this analysis aims to find out how a stochastic time trend is captured by a DRAG model, since DRAG has no specific trend component, whereas the UCM models it explicitly. The findings are crucial for deciding whether estimating data with a stochastic component through DRAG is valid, or whether the data need a certain adjustment (typically differencing) prior to estimation.

b3. New algorithms were developed to carry out the methodological exercises, implemented in R, WinBUGS and MATLAB.

These objectives and contributions have led to the following findings: 1. The road accident phenomenon has been analyzed by means of two macro models. The effects of the influential input variables may be estimated through either model, but the estimates vary from one model to the other, although prediction accuracy is similar, with a slight advantage for the DRAG methodology. 2. The variable selection methodology provides very practical results as far as the explanation of road accidents is concerned; prediction accuracy and interpretability have also been improved by the more efficient input variable and model selection procedure. 3. Insight has been gained into the relationship between the effect estimates of the two competing models. A very relevant issue here is the role of the trend in both models, from which useful recommendations for the analyst have been derived. The results provide a very satisfactory insight into both the modeling process and the understanding of van-involved and total fatal accident behavior in Spain.
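To make the b2 methodology concrete, a minimal Python sketch is given below. It is an illustration only, not the thesis's TRIO/WinBUGS/MATLAB code: the data-generating model, the factor levels and the replication count are assumptions. It crosses three design factors in a full factorial computer experiment, simulates series with a stochastic (random-walk) trend, estimates the effect of one input variable through a static DRAG-like regression with and without prior differencing, and runs an ANOVA on the estimation error.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
TRUE_BETA = 0.5  # assumed true effect of the input variable

def effect_error(n, trend_sd, differenced):
    """Simulate one series with a random-walk trend plus one input variable,
    fit a static (DRAG-like) regression, and return the error in the
    estimated effect of the input variable."""
    x = rng.normal(size=n)
    trend = np.cumsum(rng.normal(scale=trend_sd, size=n))  # stochastic trend
    y = trend + TRUE_BETA * x + rng.normal(scale=0.3, size=n)
    if differenced:                       # remove the stochastic trend first
        y, x = np.diff(y), np.diff(x)
    X = np.column_stack([np.ones_like(x), x])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return beta_hat - TRUE_BETA

rows = [dict(n=n, trend_sd=s, differenced=d, err=effect_error(n, s, d))
        for n in (60, 120)                # design factor: series length
        for s in (0.05, 0.3)              # design factor: trend volatility
        for d in (False, True)            # design factor: differencing
        for _ in range(200)]              # Monte Carlo replications per cell
df = pd.DataFrame(rows)

# ANOVA of the estimation error over the full factorial design
fit = smf.ols('err ~ C(n) * C(trend_sd) * C(differenced)', data=df).fit()
print(anova_lm(fit))
```

A large main effect of differencing (or a trend-by-differencing interaction) on the estimation error would indicate, in the terms used above, that the series needs to be differenced before a DRAG-type model is fitted.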
Abstract:
The problem of sequential two-block decomposition of a Boolean function is considered for the case in which a good solution exists. The problem consists mainly in finding an appropriate weak partition on the set of arguments of the considered Boolean function, such that the function is decomposable at that partition. A new fast heuristic combinatorial algorithm is offered for this task. First, a randomized search for traces of such a partition is performed. The recognized traces are represented by "triads", the simplest weak partitions corresponding to non-trivial decompositions. The whole sought-for partition is then restored from the discovered trace by building a track that is initialized by the trace and leads to the solution. The results of computer experiments testify to the high practical efficiency of the algorithm.
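For orientation, a minimal Python sketch of the underlying decomposability test follows. It is not the paper's algorithm: it brute-forces a disjoint two-block partition instead of searching for weak partitions via triads, and the example function is an assumption. It implements the classical chart test: F(x) = H(G(x_A), x_B) exists for a single-output G exactly when the decomposition chart has at most two distinct columns.

```python
from itertools import product

def decomposable(f, n, bound_set):
    """Test whether F(x) = H(G(x_A), x_B) for the disjoint partition
    A = bound_set, B = the remaining arguments: the decomposition chart
    must have at most two distinct columns (a single-output G suffices)."""
    free_set = [i for i in range(n) if i not in bound_set]
    columns = set()
    for a in product((0, 1), repeat=len(bound_set)):     # one column per A-assignment
        col = []
        for b in product((0, 1), repeat=len(free_set)):  # rows: B-assignments
            x = [0] * n
            for i, v in zip(bound_set, a):
                x[i] = v
            for i, v in zip(free_set, b):
                x[i] = v
            col.append(f(tuple(x)))
        columns.add(tuple(col))
    return len(columns) <= 2

# Example: f = (x0 XOR x1) AND x2 decomposes with A = {0, 1} but not with A = {0, 2}
f = lambda x: (x[0] ^ x[1]) & x[2]
print(decomposable(f, 3, [0, 1]))  # True
print(decomposable(f, 3, [0, 2]))  # False
```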
Abstract:
Objectives To evaluate the presence of false flow three-dimensional (3D) power Doppler signals in 'flow-free' models. Methods 3D power Doppler datasets were acquired from three different flow-free phantoms (muscle, air and water) with two different transducers, and Virtual Organ Computer-aided AnaLysis (VOCAL) was used to generate a sphere that was serially applied through the 3D dataset. The vascularization flow index was used to compare artifactual signals at different depths (from 0 to 6 cm) within the different phantoms and at different gain and pulse repetition frequency (PRF) settings. Results Artifactual Doppler signals were seen in all phantoms despite these being flow-free. The pattern was very similar, and the degree of artifact appeared to be dependent on the gain and the distance from the transducer. False signals were more evident in the far field and increased as the gain was increased, first appearing at a gain of 1 dB in the air and muscle phantoms. False signals were seen at a lower gain (-15 dB) with the water phantom; these were associated with vertical lines of Doppler artifact that were related to the PRF and disappeared when reflections were attenuated. Conclusions Artifactual Doppler signals are seen in flow-free phantoms and are related to the gain settings and the distance from the transducer. In the in-vivo situation, the lowest gain settings that allow the detection of blood flow and adequate definition of vessel architecture should be used, which invariably means using a setting near or below the middle of the available range. Additionally, observers should be aware of vertical lines when evaluating cystic or liquid-containing structures. Copyright (C) 2010 ISUOG. Published by John Wiley & Sons, Ltd.
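For reference, a small Python sketch of the quantity being compared is given below, under the usual VOCAL-style definitions (the array layout, the amplitude scale and the sphere radius are illustrative assumptions): the vascularization flow index is the power Doppler amplitude averaged over all voxels in the spherical region, here evaluated at increasing depths as in the serial application through the 3D dataset.

```python
import numpy as np

def vascular_indices(power, center, radius):
    """power: 3D array of power Doppler amplitudes (0 = no Doppler signal).
    Returns VOCAL-style indices for a spherical region of interest."""
    zz, yy, xx = np.indices(power.shape)
    inside = ((zz - center[0])**2 + (yy - center[1])**2
              + (xx - center[2])**2) <= radius**2
    vox = power[inside]
    colored = vox > 0
    vi = 100.0 * colored.mean()                         # vascularization index (%)
    fi = vox[colored].mean() if colored.any() else 0.0  # flow index
    vfi = vox.mean()                                    # vascularization flow index
    return vi, fi, vfi

# Apply the sphere serially at increasing depth, as in the study design
# (synthetic sparse-signal volume, purely for illustration):
rng = np.random.default_rng(0)
vol = 80.0 * (rng.random((64, 64, 64)) < 0.02) * rng.random((64, 64, 64))
for depth in range(8, 57, 12):
    print(depth, vascular_indices(vol, (depth, 32, 32), 6))
```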
Abstract:
Dental implant recognition in patients without available records is a time-consuming and not straightforward task. The traditional method is a completely user-dependent process, in which an expert compares a 2D X-ray image of the dental implant with a generic database. Given the large number of implant models available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is proposed. The method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
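As a concrete reading of the evaluation in experiment 1, the following Python sketch gives the standard definitions of the three reported contour-agreement metrics; the paper's exact implementation is not specified in the abstract, so treat this as a plausible reconstruction.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(mask_a, mask_b):
    """Dice overlap between two boolean segmentation masks (1 = perfect agreement)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def contour_distances(pts_a, pts_b):
    """pts_a, pts_b: (N, 2) arrays of contour point coordinates, in pixels.
    Returns the symmetric mean absolute distance and the Hausdorff distance."""
    d = cdist(pts_a, pts_b)                  # all pairwise point-to-point distances
    d_ab = d.min(axis=1)                     # nearest contour-B point for each A point
    d_ba = d.min(axis=0)                     # nearest contour-A point for each B point
    mad = (d_ab.mean() + d_ba.mean()) / 2.0  # mean absolute distance
    hd = max(d_ab.max(), d_ba.max())         # Hausdorff distance
    return mad, hd
```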
Abstract:
Currently, the teaching-learning process in domains such as computer programming is characterized by extensive curricula and high student enrolment. This poses a great workload for the faculty and teaching assistants responsible for the creation, delivery, and assessment of student exercises. The main goal of this chapter is to foster practice-based learning in complex domains. This objective is attained with an e-learning framework, called Ensemble, as a conceptual tool to organize and facilitate technical interoperability among services. The Ensemble framework is applied to a specific domain: computer programming. Content issues are tackled with a standard format that describes programming exercises as learning objects. Communication is achieved by extending existing specifications for interoperation with several systems typically found in an e-learning environment. In order to evaluate the acceptability of the proposed solution, an Ensemble instance was validated in a classroom experiment, with encouraging results.
Abstract:
Carotenoid-based yellowish to red plumage colors are widespread visual signals used in sexual and social communication. To understand their ultimate signaling functions, it is important to identify the proximate mechanism promoting variation in coloration. Carotenoid-based colors combine structural and pigmentary components, but the importance of the contribution of structural components to variation in pigment-based colors (i.e., carotenoid-based colors) has been undervalued. In a field experiment with great tits (Parus major), we combined a brood size manipulation with a simultaneous carotenoid supplementation in order to disentangle the effects of carotenoid availability and early growth condition on different components of the yellow breast feathers. By defining independent measures of feather carotenoid content (absolute carotenoid chroma) and background structure (background reflectance), we demonstrate that environmental factors experienced during the nestling period, namely, early growth conditions and carotenoid availability, contribute independently to variation in yellow plumage coloration. While early growth conditions affected the background reflectance of the plumage, the availability of carotenoids affected the absolute carotenoid chroma, the peak of maximum ultraviolet reflectance, and the overall shape, that is, chromatic information of the reflectance curves. These findings demonstrate that environment-induced variation in background structure contributes significantly to intraspecific variation in yellow carotenoid-based plumage coloration.
Abstract:
In recent years, Business Model Canvas design has evolved from a paper-based activity to one that involves dedicated computer-aided business model design tools. We propose a set of guidelines to help design more coherent business models. When combined with the functionalities offered by CAD tools, these guidelines show great potential to improve business model design as an ongoing activity. However, to assess this potential, it is necessary to compare how basic business model design tasks are performed with a CAD system versus its paper-based counterpart. To this end, we carried out an experiment to measure user perceptions of both solutions. Performance was evaluated by applying our guidelines to both solutions and then comparing the resulting business model designs. Although CAD did not outperform paper-based design, the results are very encouraging for the future of computer-aided business model design.
Abstract:
This thesis concentrates on the validation of a generic thermal hydraulic computer code, TRACE, against the challenges of the VVER-440 reactor type. The code's capability to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major difficulty lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, generally challenges system codes functionally, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of experiments, and the validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that safety analysis inevitably involves significant uncertainties that are not statistically quantifiable; these need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This approach essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
Atrial fibrillation, the most frequent arrhythmia in clinical practice, affects 2.3 million patients in North America. Animal models of atrial fibrillation have been developed to study its mechanisms and potential therapies. High-density epicardial electrical mapping is a well-established experimental technique for monitoring in vivo the activity of the atria in response to electrical stimulation, remodeling, arrhythmias or autonomic nervous system modulation. In regions that are not accessible by epicardial mapping, non-contact endocardial mapping performed with a balloon-shaped catheter could provide a more complete description of atrial activity. In this study, a canine experiment was designed and analyzed. Electro-anatomical reconstruction, epicardial mapping (103 electrodes), non-contact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter) and direct-contact endocardial recordings were performed simultaneously. The recording systems were also simulated in a mathematical model of a canine right atrium. In both the simulations and the experiments (after ablation of the atrio-ventricular node), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of the repolarization gradient. The results show an epicardial-endocardial correlation coefficient of 0.80 (experiment) and 0.96 (simulation) between the activation maps, and a correlation coefficient of 0.57 (experiment) and 0.92 (simulation) between the ATa values. Non-contact endocardial mapping appears to be a useful experimental tool for extracting information outside the regions covered by the epicardial recording plaques.
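The two quantitative comparisons reported above reduce to simple computations on matched measurement sites. A minimal Python sketch follows; the site matching and the repolarization window are assumptions, not details taken from the study.

```python
import numpy as np

def map_correlation(epi, endo):
    """Pearson correlation between activation times (or ATa values)
    measured at matched epicardial/endocardial sites."""
    return np.corrcoef(epi, endo)[0, 1]

def area_under_ta(egm, t, t_start, t_end):
    """Area under the atrial T wave (ATa), the repolarization-gradient
    marker used above: trapezoid-rule integral of the electrogram over
    an assumed repolarization window [t_start, t_end] (ms)."""
    sel = (t >= t_start) & (t <= t_end)
    v, dt = egm[sel], np.diff(t[sel])
    return np.sum(0.5 * (v[1:] + v[:-1]) * dt)
```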
Abstract:
This research project seeks to use an agent-based computational system to measure the brand perception of an organization in a heterogeneous population. It is expected to provide information that allows an organization to address the behavior of its consumers and the associated brand perception. The purpose of this system is to model the perception-reasoning-action process, simulating reasoning as the result of an accumulation of perceptions that lead to consumer actions. This outcome defines the consumer's acceptance or rejection of the company's brand. Information about a specific organization was collected in the field of marketing. After compiling and processing the information obtained from the company, the brand perception analysis is applied through simulation processes. The results of the experiment are delivered to the organization in a report with marketing-level conclusions and recommendations for improving consumers' brand perception.
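A minimal Python sketch of the perception-reasoning-action loop described above might look as follows; the stimulus distribution, the decision thresholds and the population size are illustrative assumptions, not the project's calibrated model.

```python
import random
from collections import Counter

class Consumer:
    """Agent that accumulates brand perceptions and acts once a threshold is crossed."""
    def __init__(self, sensitivity):
        self.attitude = 0.0             # accumulated perception
        self.sensitivity = sensitivity  # heterogeneity across the population

    def perceive(self, stimulus):
        self.attitude += self.sensitivity * stimulus  # perception step

    def act(self):
        # reasoning/action step: accumulated perceptions drive the action
        if self.attitude > 1.0:
            return "accept"
        if self.attitude < -1.0:
            return "reject"
        return "undecided"

random.seed(0)
population = [Consumer(random.uniform(0.5, 1.5)) for _ in range(1000)]
for _ in range(20):                       # 20 simulated marketing stimuli
    stimulus = random.uniform(-0.4, 0.5)  # mixed, mostly positive messages
    for c in population:
        c.perceive(stimulus)

print(Counter(c.act() for c in population))  # aggregate brand acceptance/rejection
```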
Abstract:
Myoglobin has been studied in considerable detail using different experimental and computational techniques over the past decades. Recent developments in time-resolved spectroscopy have provided experimental data amenable to detailed atomistic simulations. The main theme of the present review is results on the structures, energetics and dynamics of ligands (CO, NO) interacting with myoglobin, obtained from computer simulations. Modern computational methods, including free energy simulations, mixed quantum mechanics/molecular mechanics simulations, and reactive molecular dynamics simulations, provide insight into ligand dynamics in confined spaces that is complementary to experiment. Applications of these methods to calculate and understand experimental observations for myoglobin interacting with CO and NO are presented and discussed.
Abstract:
Purpose – Investors are now able to analyse more noise-free news with which to inform their trading decisions than ever before. Their expectation that more information means better performance is not supported by previous psychological experiments, which argue that too much information actually impairs performance. The purpose of this paper is to examine whether the degree of information explicitness improves stock market performance. Design/methodology/approach – An experiment was conducted in a computer laboratory using a trading simulation based on a real market shock. Participants' performance efficiency and effectiveness were measured separately. Findings – The results indicate that the explicitness of information neither improves nor impairs participants' performance effectiveness in terms of returns, share and cash positions, and trading volumes. However, participants' performance efficiency is significantly affected by information explicitness. Originality/value – The novel approach and findings of this research add to knowledge of the impact of information explicitness on the quality of decision making in a financial market environment.