884 results for Prediction error method
Abstract:
We present and describe a catalog of galaxy photometric redshifts (photo-z) for the Sloan Digital Sky Survey (SDSS) Co-add Data. We use the artificial neural network (ANN) technique to calculate the photo-z and the nearest neighbor error method to estimate photo-z errors for ~13 million objects classified as galaxies in the co-add with r < 24.5. The photo-z and photo-z error estimators are trained and validated on a sample of ~83,000 galaxies that have SDSS photometry and spectroscopic redshifts measured by the SDSS Data Release 7 (DR7), the Canadian Network for Observational Cosmology Field Galaxy Survey, the Deep Extragalactic Evolutionary Probe Data Release 3, the VIsible imaging Multi-Object Spectrograph-Very Large Telescope Deep Survey, and the WiggleZ Dark Energy Survey. For the best ANN methods we have tried, we find that 68% of the galaxies in the validation set have a photo-z error smaller than σ68 = 0.031. After presenting our results and quality tests, we provide a short guide for users accessing the public data.
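A minimal sketch of how a σ68 statistic of the kind quoted above can be computed, i.e. the 68th percentile of the absolute photo-z error over a validation sample (the catalog's exact error convention, e.g. whether errors are scaled by 1 + z, is not restated here, and the arrays below are hypothetical):

```python
import numpy as np

def sigma_68(z_spec, z_photo):
    """68th percentile of the absolute photo-z error |z_photo - z_spec|.

    Some photo-z studies scale this error by (1 + z_spec); the exact
    convention used by the catalog above is not restated here, so this
    is only an illustrative definition.
    """
    err = np.abs(np.asarray(z_photo) - np.asarray(z_spec))
    return np.percentile(err, 68.0)

# Hypothetical validation sample: true redshifts plus noisy photo-z estimates.
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.0, 1.0, size=10_000)
z_photo = z_spec + rng.normal(0.0, 0.03, size=z_spec.size)
print(f"sigma_68 = {sigma_68(z_spec, z_photo):.3f}")
```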
Abstract:
INTRODUCTION: The accurate evaluation of the error of measurement (EM) is extremely important both in growth studies and in clinical research, since the changes involved are usually quantitatively small. In any study it is important to evaluate the EM in order to validate the results and, consequently, the conclusions. Because of its extreme simplicity, the Dahlberg formula is widely used worldwide, mainly in cephalometric studies. OBJECTIVES: (I) To elucidate the formula proposed by Dahlberg in 1940, evaluating it by comparison with linear regression analysis; (II) To propose a simple methodology for analyzing the results, which provides statistical elements to assist researchers in obtaining a consistent evaluation of the EM. METHODS: We applied linear regression analysis, hypothesis tests on its parameters, and a formula involving the standard deviation of the error of measurement and the measured values. RESULTS AND CONCLUSION: We introduced an error coefficient, which is a proportion related to the scale of the observed values. This provides new parameters that facilitate the evaluation of the impact of random errors on the final results of the research.
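For reference, the Dahlberg formula discussed above estimates the error of measurement from duplicate determinations; with d_i the difference between the first and second measurement of the i-th case and n the number of double determinations, it reads:

```latex
\mathrm{EM}_{\mathrm{Dahlberg}} = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}
```

The error coefficient proposed by the authors relates an estimate of this kind to the scale of the observed values; its exact form is given in the paper itself.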
Abstract:
To derive tests for randomness, nonlinear-independence, and stationarity, we combine surrogates with a nonlinear prediction error, a nonlinear interdependence measure, and linear variability measures, respectively. We apply these tests to intracranial electroencephalographic recordings (EEG) from patients suffering from pharmacoresistant focal-onset epilepsy. These recordings had been performed prior to and independent from our study as part of the epilepsy diagnostics. The clinical purpose of these recordings was to delineate the brain areas to be surgically removed in each individual patient in order to achieve seizure control. This allowed us to define two distinct sets of signals: one set of signals recorded from brain areas where the first ictal EEG signal changes were detected as judged by expert visual inspection ("focal signals") and one set of signals recorded from brain areas that were not involved at seizure onset ("nonfocal signals"). We find more rejections for both the randomness and the nonlinear-independence test for focal versus nonfocal signals. In contrast, more rejections of the stationarity test are found for nonfocal signals. Furthermore, while for nonfocal signals the rejection of the stationarity test increases the rejection probability of the randomness and nonlinear-independence tests substantially, we find a much weaker influence for the focal signals. In consequence, the contrast between the focal and nonfocal signals obtained from the randomness and nonlinear-independence tests is further enhanced when we exclude signals for which the stationarity test is rejected. To study the dependence between the randomness and nonlinear-independence tests, we include only focal signals for which the stationarity test is not rejected. We show that the rejection of these two tests correlates across signals. The rejection of either test is, however, neither necessary nor sufficient for the rejection of the other test. Thus, our results suggest that EEG signals from epileptogenic brain areas are less random, more nonlinear-dependent, and more stationary compared to signals recorded from nonepileptogenic brain areas. We provide the data, source code, and detailed results in the public domain.
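For orientation, a locally constant nonlinear prediction error of the kind referred to above can be computed along the following lines; this is a simplified, illustrative sketch, and the embedding dimension, delay, prediction horizon, neighbor count, and Theiler window used here are placeholder values, not those of the study:

```python
import numpy as np

def nonlinear_prediction_error(x, dim=3, delay=1, horizon=1, n_neighbors=5, theiler=10):
    """Zeroth-order (locally constant) nonlinear prediction error of a time series.

    The signal is delay-embedded, each state is predicted `horizon` steps ahead
    from the futures of its nearest embedded neighbors, and the root mean squared
    prediction error is normalized by the standard deviation of the signal.
    Values well below 1 indicate predictable (deterministic) structure.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay - horizon
    # Delay-embedding matrix: row i holds (x[i], x[i+delay], ..., x[i+(dim-1)*delay]).
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    targets = x[(dim - 1) * delay + horizon:(dim - 1) * delay + horizon + n]
    errors = []
    for i in range(n):
        dists = np.linalg.norm(emb - emb[i], axis=1)
        # Exclude temporally close points (Theiler window) and the point itself.
        dists[max(0, i - theiler):i + theiler + 1] = np.inf
        neighbors = np.argsort(dists)[:n_neighbors]
        errors.append(targets[i] - targets[neighbors].mean())
    return np.sqrt(np.mean(np.square(errors))) / x.std()

# Example: a noisy sine wave is far more predictable than white noise.
t = np.linspace(0, 40 * np.pi, 2000)
print(nonlinear_prediction_error(np.sin(t) + 0.1 * np.random.randn(t.size)))
print(nonlinear_prediction_error(np.random.randn(t.size)))
```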
Abstract:
BACKGROUND: The Anesthetic Conserving Device (AnaConDa) uncouples delivery of a volatile anesthetic (VA) from fresh gas flow (FGF) using a continuous infusion of liquid volatile into a modified heat-moisture exchanger capable of adsorbing VA during expiration and releasing adsorbed VA during inspiration. It combines the simplicity and responsiveness of high FGF with low agent expenditures. We performed in vitro characterization of the device before developing a population pharmacokinetic model for sevoflurane administration with the AnaConDa, and retrospectively testing its performance (internal validation). MATERIALS AND METHODS: Eighteen females and 20 males, aged 31-87, BMI 20-38, were included. The end-tidal concentrations were varied and recorded together with the VA infusion rates into the device, ventilation and demographic data. The concentration-time course of sevoflurane was described using linear differential equations, and the most suitable structural model and typical parameter values were identified. The individual pharmacokinetic parameters were obtained and tested for covariate relationships. Prediction errors were calculated. RESULTS: In vitro studies assessed the contribution of the device to the pharmacokinetic model. In vivo, the sevoflurane concentration-time courses on the patient side of the AnaConDa were adequately described with a two-compartment model. The population median absolute prediction error was 27% (interquartile range 13-45%). CONCLUSION: The predictive performance of the two-compartment model was similar to that of models accepted for TCI administration of intravenous anesthetics, supporting open-loop administration of sevoflurane with the AnaConDa. Further studies will focus on prospective testing and external validation of the model implemented in a target-controlled infusion device.
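The prediction errors reported above are typically computed following the standard performance-error convention for anesthetic drug-delivery models, i.e. the median of the absolute percentage differences between measured and predicted concentrations; a minimal sketch with hypothetical data:

```python
import numpy as np

def performance_errors(measured, predicted):
    """Percentage performance error PE = 100 * (measured - predicted) / predicted."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * (measured - predicted) / predicted

def mdpe(measured, predicted):
    """Median performance error (bias)."""
    return np.median(performance_errors(measured, predicted))

def mdape(measured, predicted):
    """Median absolute performance error (inaccuracy)."""
    return np.median(np.abs(performance_errors(measured, predicted)))

# Hypothetical end-tidal sevoflurane concentrations (vol%) vs. model predictions.
measured = np.array([1.1, 1.4, 2.0, 1.8, 0.9])
predicted = np.array([1.0, 1.6, 1.7, 2.1, 1.1])
print(f"MDPE = {mdpe(measured, predicted):.1f}%, MDAPE = {mdape(measured, predicted):.1f}%")
```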
Abstract:
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. Here we adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here, too, the nonlinear predictability score shows a higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) and signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
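A simple way to illustrate the rank-based idea is to replace signal amplitudes by their ranks before computing a nonlinear prediction error; this is only a generic stand-in for rank-based analysis, not necessarily the exact predictability score introduced by the authors:

```python
import numpy as np

def rank_transform(x):
    """Map a signal to its normalized ranks in (0, 1), discarding amplitude information.

    Heavy-tailed outliers are compressed by the transform, which is the intuition
    behind rank-based measures being more robust than amplitude-based ones under
    non-Gaussian noise.
    """
    x = np.asarray(x, dtype=float)
    ranks = np.empty_like(x)
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    return ranks / (len(x) + 1.0)

# The rank-transformed series can then be fed to an amplitude-based nonlinear
# prediction error (e.g. the sketch shown earlier) to obtain a rank-based score.
noisy = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * np.random.standard_cauchy(1000)
print(rank_transform(noisy)[:5])
```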
Abstract:
Individual risk preferences have a large influence on decisions, such as financial investments, career and health choices, or gambling. Decision making under risk has been studied both behaviorally and on a neural level. It remains unclear, however, how risk attitudes are encoded and integrated with choice. Here, we investigate how risk preferences are reflected in neural regions known to process risk. We collected functional magnetic resonance images of 56 human subjects during a gambling task (Preuschoff et al., 2006). Subjects were grouped into risk averters and risk seekers according to the risk preferences they revealed in a separate lottery task. We found that during the anticipation of high-risk gambles, risk averters show stronger responses in ventral striatum and anterior insula compared to risk seekers. In addition, risk prediction error signals in anterior insula, inferior frontal gyrus, and anterior cingulate indicate that risk averters do not properly dissociate between gambles that are more or less risky than expected. We suggest this may result in a general overestimation of prospective risk and lead to risk avoidance behavior. This is the first study to show that behavioral risk preferences are reflected in the passive evaluation of risky situations. The results have implications for public policy in the financial and health domains.
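For context, in the computational framework of Preuschoff and colleagues risk is commonly formalized as the expected squared reward prediction error, and the risk prediction error as the deviation of the realized squared prediction error from that expectation; schematically (this is the usual formalization, not equations quoted from the study above):

```latex
\delta_t = r_t - \mathbb{E}[r_t]                                % reward prediction error
\mathrm{Risk}_t = \mathbb{E}\!\left[\delta_t^{2}\right]          % anticipated risk (reward variance)
\xi_t = \delta_t^{2} - \mathbb{E}\!\left[\delta_t^{2}\right]     % risk prediction error
```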
Abstract:
The Jesuit missions in the Río de la Plata basin are well known in the national historiography. From Azara's geographic account to the most recent contributions of Di Stefano and Zanatta in their "Historia de la Iglesia Argentina", the history of the "30 pueblos" in the Argentine province of Misiones has given rise to much discussion, and much writing, throughout national history. This is evident both in the idyllic images of communion between Jesuits and Guaraní and in the works that reveal the complex systems of alliances that made the apparent Guaraní submission to Spanish rule possible. But all these images must be understood as the final state of a process which, with its advances and setbacks, began at the start of the seventeenth century, when the Jesuits took up their evangelizing work among the indigenous peoples. Our intention is to examine the first two decades of the foundation and operation of the Jesuit missions in Guayrá (1609-1629), the period in which the strategies and structures of domination were taking shape, when trial and error was the usual method, and when confrontations between indigenous peoples, Jesuits, Spaniards, and Portuguese took place along a frontier as unstable as the Guayrá region was at that time.
Abstract:
We report down-core sedimentary Nd isotope (εNd) records from two South Atlantic sediment cores, MD02-2594 and GeoB3603-2, located on the western South African continental margin. The core sites are positioned downstream of the present-day flow path of North Atlantic Deep Water (NADW) and close to the Southern Ocean, which makes them suitable for reconstructing past variability in NADW circulation over the last glacial cycle. The Fe-Mn leachate εNd records show a coherent decreasing trend from radiogenic glacial values towards less radiogenic values during the Holocene. This trend is confirmed by εNd in fish debris and mixed planktonic foraminifera, albeit with an offset during the Holocene to lower values relative to the leachates, matching the present-day composition of NADW in the Cape Basin. We interpret the εNd changes as reflecting the glacial shoaling of Southern Ocean waters to shallower depths combined with the admixing of southward-flowing Northern Component Water (NCW). A compilation of Atlantic εNd records reveals increasingly radiogenic isotope signatures towards the south and with increasing depth. This signal is most prominent during the Last Glacial Maximum (LGM) and of similar amplitude across the Atlantic basin, suggesting continuous deep water production in the North Atlantic and export to the South Atlantic and the Southern Ocean. The amplitude of the εNd change from the LGM to the Holocene is largest in the southernmost cores, implying a greater sensitivity to the deglacial strengthening of NADW at these sites. This signal most prominently affected the South Atlantic deep and bottom water layers that were particularly deprived of NCW during the LGM. The εNd variations correlate with changes in 231Pa/230Th ratios and benthic δ13C across the deglacial transition. Together with the contrasting 231Pa/230Th and εNd patterns of the North and South Atlantic, this indicates a progressive reorganization of the Atlantic Meridional Overturning Circulation (AMOC) to full strength during the Holocene.
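For readers unfamiliar with the notation, εNd conventionally expresses the deviation of a sample's 143Nd/144Nd ratio from the chondritic uniform reservoir (CHUR), in parts per ten thousand:

```latex
\varepsilon_{\mathrm{Nd}} = \left( \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}} - 1 \right) \times 10^{4}
```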
Abstract:
Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a “signature” from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared to the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, so its solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved by using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer. If the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeated searches of the same structures. Its parallelization produces good results even without requiring the gathering of the full population, so it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
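A minimal sketch of the kind of hybrid (memetic) loop described above, i.e. an evolutionary search whose offspring are polished by a trust-region local minimizer and whose evaluations are cached to avoid re-exploring known structures; the objective function, encoding, and parameters below are placeholders rather than the authors' actual surface-structure problem:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Placeholder fitness: disagreement between a simulated and an 'experimental' signature."""
    return float(np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2))

def hybrid_search(dim=3, pop_size=12, generations=30, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    cache = {}  # reuse of previously evaluated candidates
    def evaluate(x):
        key = tuple(np.round(x, 3))
        if key not in cache:
            cache[key] = objective(x)
        return cache[key]
    pop = rng.normal(0.0, 2.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([evaluate(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]           # selection
        children = parents + rng.normal(0.0, sigma, parents.shape)  # mutation
        # Local refinement of each child with a trust-region minimizer.
        children = np.array([minimize(objective, c, method="trust-constr").x
                             for c in children])
        pop = np.vstack([parents, children])
    best = min(pop, key=evaluate)
    return best, evaluate(best)

best, score = hybrid_search()
print(best, score)
```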
Abstract:
This project is part of a line of work whose ultimate goal is to optimize the energy consumed by a handheld multimedia device through feedback control techniques, based on dynamic modification of the processor's operating frequency and supply voltage. The frequency and voltage changes are driven by feedback information about the power consumed by the device, which poses a problem because power consumption usually cannot be monitored directly on devices of this kind. For this reason, power consumption is estimated instead using a prediction model. From the number of times certain events occur in the device's processor, the prediction model produces an estimate of the power consumed by the device. The work carried out in this project focuses on implementing a power estimation model in the Linux kernel. The estimation is implemented in the operating system, first, to gain direct access to the processor's counters; second, to facilitate the frequency and voltage changes once the power estimate is available, since these changes are also performed from the operating system; and third, because the estimation must be independent of user applications. Moreover, the estimation process runs periodically, which would be difficult to achieve outside the operating system. Periodic estimation is essential because the intended frequency and voltage modification is dynamic, so the power consumption of the device must be known at all times; the control algorithms must also be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single application, in this case a video decoder. Nevertheless, it must work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible, because the successive power estimates are intended to drive the dynamic frequency modification and the model must therefore keep producing estimates regardless of the frequency at which the device is running. To assess the accuracy of the estimation model, the power consumed by the device is measured at the different operating frequencies while the video decoder runs. These measurements are compared with the power estimates obtained during the same executions, yielding the prediction error committed by the model and guiding the corresponding adjustments to it.
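A common form for such a counter-based power model is a linear combination of per-interval event counts fitted against measured power. The sketch below illustrates the idea in user-space Python; the event names, readings, and measurements are hypothetical, and the kernel implementation described above would evaluate a model of this form online rather than fit it:

```python
import numpy as np

# Hypothetical per-interval performance-counter readings (columns: instructions
# retired, L2 cache misses, memory accesses) and measured power in milliwatts.
counters = np.array([
    [1.2e8, 3.0e5, 9.0e5],
    [2.5e8, 8.0e5, 2.1e6],
    [1.9e8, 5.5e5, 1.4e6],
    [3.1e8, 1.1e6, 2.9e6],
    [0.9e8, 2.2e5, 6.0e5],
])
measured_mw = np.array([310.0, 540.0, 430.0, 640.0, 270.0])

# Fit P ≈ beta0 + beta1*instr + beta2*l2miss + beta3*mem by least squares.
X = np.column_stack([np.ones(len(counters)), counters])
beta, *_ = np.linalg.lstsq(X, measured_mw, rcond=None)

predicted_mw = X @ beta
error_pct = 100.0 * (predicted_mw - measured_mw) / measured_mw
print("coefficients:", beta)
print("per-sample prediction error (%):", np.round(error_pct, 1))
```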
Abstract:
The “trial and error” method is fundamental for Master Mind decision algorithms. On the basis of Master Mind games and strategies we consider some data mining methods for tests using students as teachers. Voting, twins, opposite, simulate and observer methods are investigated. For a pure data base these combinatorial algorithms are faster than many AI and Master Mind methods. The complexities of these algorithms are compared with basic combinatorial methods in AI. ACM Computing Classification System (1998): F.3.2, G.2.1, H.2.1, H.2.8, I.2.6.
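As background for the trial-and-error strategies discussed above, a minimal Master Mind solver that keeps only the codes consistent with the feedback of past guesses looks as follows; this is a generic illustration, not one of the voting, twins, opposite, simulate, or observer methods investigated in the paper:

```python
from itertools import product
from collections import Counter
import random

COLORS, PEGS = "ABCDEF", 4

def feedback(secret, guess):
    """Return (black, white): exact position matches and color-only matches."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def solve(secret):
    """Trial and error by consistency filtering: keep only codes that would
    have produced the same feedback for every guess made so far."""
    candidates = [''.join(p) for p in product(COLORS, repeat=PEGS)]
    guesses = 0
    while True:
        guess = random.choice(candidates)
        guesses += 1
        fb = feedback(secret, guess)
        if fb == (PEGS, 0):
            return guess, guesses
        candidates = [c for c in candidates if feedback(c, guess) == fb]

print(solve("BADC"))
```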
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the tradeoff between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
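To make the target bug class concrete, the sketch below shows a deliberately racy check-then-act on a single shared variable between a pair of threads, the pattern that atomicity-violation detectors of the kind described above analyze; it is a generic illustration, not code from the dissertation:

```python
import threading
import time

balance = 100    # the single shared variable the two threads race on
withdrawn = []

def withdraw(amount):
    global balance
    # Non-atomic check-then-act on `balance`: another thread can interleave
    # between the read in the condition and the write below, so both threads
    # can pass the check against the same stale value.
    if balance >= amount:
        time.sleep(0.001)            # widen the window so the race shows up
        balance = balance - amount   # read-modify-write based on a possibly stale read
        withdrawn.append(amount)

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Under the racy interleaving, both withdrawals succeed and 160 is taken from
# an account that held only 100 (final balance 20 with a lost update, or -60).
print("balance =", balance, "total withdrawn =", sum(withdrawn))
```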