Abstract:
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, and then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson’s Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson’s patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, on the Yeast data set, and on a real-world Parkinson’s disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
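As an illustration of the two-step idea in this abstract (find an approximate Markov blanket per class variable, then orient the MBC subgraphs), here is a minimal Python sketch. HITON is replaced by a crude mutual-information filter as a stand-in, and the toy data, names, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Markov blanket-based MBC learning pipeline described
# above. HITON is replaced here by a crude mutual-information filter; the
# data, threshold, and function names are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))       # binary feature variables
Y = np.column_stack([X[:, 0] & X[:, 1],      # class 0 depends on f0, f1
                     X[:, 2] | X[:, 3]])     # class 1 depends on f2, f3

def approximate_markov_blanket(X, y, threshold=0.05):
    """Stand-in for HITON: keep features whose estimated mutual
    information with the class variable exceeds a threshold."""
    mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
    return set(map(int, np.flatnonzero(mi > threshold)))

# One (approximate) Markov blanket per class variable; the MBC subgraphs
# would then be oriented over these blankets.
blankets = {c: approximate_markov_blanket(X, Y[:, c]) for c in range(Y.shape[1])}
print(blankets)   # e.g. {0: {0, 1}, 1: {2, 3}}
```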
Abstract:
Many diseases have a genetic origin, and a great effort is being made to detect the genes responsible for their onset. One of the most promising techniques is the analysis of genetic information through the use of complex networks theory. Yet a practical problem of this approach is its computational cost, which scales as the square of the number of features included in the initial dataset. In this paper, we propose the use of an iterative feature selection strategy to identify reduced subsets of relevant features, and show an application to the analysis of congenital obstructive nephropathy. The results demonstrate that, besides achieving a drastic reduction of the computational cost, the topologies of the resulting networks still hold all the relevant information and are thus able to fully characterize the severity of the disease.
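The cost argument can be made concrete with a small sketch: building a network over all n features costs O(n^2), so iteratively shrinking the feature set to k relevant features before network construction cuts the cost to O(k^2). The toy data, scoring rule, and schedule below are assumptions, not the study's code.

```python
# Illustrative sketch: iteratively shrink the feature set, then build the
# correlation network only over the survivors (k x k instead of n x n).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 500))                         # 100 samples, 500 features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=100)   # 5 truly relevant features

def iterative_selection(X, y, keep=50, rounds=3):
    """Crude stand-in for an iterative feature selection strategy:
    repeatedly keep the features most correlated with the outcome."""
    idx = np.arange(X.shape[1])
    for _ in range(rounds):
        scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in idx])
        idx = idx[np.argsort(scores)[-keep:]]
        keep = max(keep // 2, 10)
    return np.sort(idx)

selected = iterative_selection(X, y)
network = np.corrcoef(X[:, selected].T)   # small network instead of 500 x 500
print(selected[:10], network.shape)
```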
Abstract:
E-learning systems output a huge quantity of data on a learning process. However, processing these data manually and generating an assessment report takes a lot of specialist human resources. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and to implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra’s algorithm learning using a visual simulation-based graph algorithm learning environment called GRAPHs.
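The core idea, mapping objective response data to a linguistic proficiency label via fuzzy membership functions and linguistic rules, can be sketched as follows. This is a minimal illustration under assumed labels, breakpoints, and report wording, not the authors' GLMP implementation.

```python
# Minimal sketch of fuzzy linguistic assessment: map a score computed from
# correct/incorrect responses to a linguistic label via fuzzy sets, then
# render a short natural language report. All labels and breakpoints are
# illustrative assumptions.
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def proficiency_report(correct, total):
    score = correct / total
    labels = {                      # one fuzzy set per linguistic label
        "low":    triangular(score, -0.01, 0.0, 0.5),
        "medium": triangular(score, 0.25, 0.5, 0.75),
        "high":   triangular(score, 0.5, 1.0, 1.01),
    }
    best = max(labels, key=labels.get)
    return f"The achieved proficiency level is {best} ({correct}/{total} correct)."

print(proficiency_report(17, 20))   # -> "... high (17/20 correct)."
```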
Abstract:
The stepped and excessively slow execution of pseudo-dynamic tests has been found to be a source of errors arising from strain-rate effects and stress relaxation. In order to control those errors, a new continuous test method that allows the selection of a more suitable time scale factor for the response is proposed in this work. By dimensional analysis, such a scaled-time response is obtained theoretically by augmenting the inertial and damping properties of the structure, for which we propose the use of servo-controlled hydraulic pistons that produce active mass and damping, while using equipment similar to that required in a pseudo-dynamic test. The results of a successful implementation of this technique on a simple specimen are shown here.
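The dimensional argument can be restated compactly, assuming the usual linear equation of motion with mass M, damping C, and stiffness K, and a time scale factor λ (notation assumed here, not taken from the paper):

```latex
% Prototype equation of motion and slowed time scale \tau = \lambda t:
\[
  M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = f(t), \qquad \tau = \lambda t .
\]
% Writing u^{*}(\tau) = u(\tau/\lambda) and applying the chain rule gives
\[
  (\lambda^{2}M)\,\frac{d^{2}u^{*}}{d\tau^{2}}
  + (\lambda C)\,\frac{du^{*}}{d\tau}
  + K u^{*} = f(\tau/\lambda).
\]
```

Augmenting the mass by λ² and the damping by λ thus reproduces the prototype response on the slowed time scale, which is what the servo-controlled pistons are proposed to emulate.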
Abstract:
Application of Monte Carlo simulation and analysis of variance (ANOVA) techniques to the comparison of dynamic stochastic models for traffic accidents.
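The workflow named in this abstract can be illustrated in a few lines: generate outputs from two hypothetical stochastic accident models via Monte Carlo, then compare them with a one-way ANOVA. The model forms and parameters below are assumptions chosen for the example.

```python
# Illustrative sketch: Monte Carlo simulation of two candidate stochastic
# models for accident counts, compared with a one-way ANOVA.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
runs = 500

model_a = rng.poisson(lam=12.0, size=runs)   # weekly accident counts, model A
model_b = rng.poisson(lam=13.5, size=runs)   # a competing model

stat, p_value = f_oneway(model_a, model_b)
print(f"F = {stat:.2f}, p = {p_value:.4f}")  # small p -> mean outputs differ
```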
Abstract:
This paper is on homonymous distributed systems where processes are prone to crash failures and have no initial knowledge of the system membership ("homonymous" means that several processes may have the same identifier). New classes of failure detectors suited to these systems are first defined. Among them, the classes HΩ and HΣ are introduced as the homonymous counterparts of the classes Ω and Σ, respectively. (Recall that the pair ⟨Ω, Σ⟩ defines the weakest failure detector to solve consensus.) Then, the paper shows how HΩ and HΣ can be implemented in homonymous systems without membership knowledge (under different synchrony requirements). Finally, two algorithms are presented that use these failure detectors to solve consensus in homonymous asynchronous systems where there is no initial knowledge of the membership. One algorithm solves consensus with ⟨HΩ, HΣ⟩, while the other uses only HΩ, but needs a majority of correct processes. Observe that systems with unique identifiers and anonymous systems are extreme cases of homonymous systems, from which it follows that all these results also apply to those systems. Interestingly, the new failure detector class HΩ can be implemented with partial synchrony, while the analogous class AΩ defined for anonymous systems cannot be implemented (even in synchronous systems). Hence, the paper provides us with the first proof showing that consensus can be solved in anonymous systems with only partial synchrony (and a majority of correct processes).
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, that are considered to be plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. Here is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA with accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of the ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through the simulation of nuclear accident sequences and operating procedures; furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As that complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work.
This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced, so some simplifying assumptions were made to work in scenarios best suited to an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process. A description of the simulation framework requirements and design is then given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
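As a generic illustration of the stated goal (fewer simulation runs for an adequate estimate of a small damage probability), the sketch below contrasts crude Monte Carlo with importance sampling on a toy scalar "simulator". It is an assumption-laden stand-in for the idea of a variance reduction technique, not the TSD/ISA machinery itself.

```python
# Generic illustration: estimate a small damage probability with fewer runs
# than crude Monte Carlo by importance sampling. The toy "simulator" and the
# shifted sampling density are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
THRESHOLD = 3.5                       # damage if the toy response exceeds this

def simulator(x):
    return x                          # stand-in for an expensive accident code

# Crude Monte Carlo: most samples never reach the damage region.
n = 10_000
x = rng.normal(size=n)
p_crude = np.mean(simulator(x) > THRESHOLD)

# Importance sampling: draw from a density shifted toward the damage region
# and reweight by the likelihood ratio; far fewer runs for similar accuracy.
m = 1_000
y = rng.normal(loc=THRESHOLD, size=m)
weights = norm.pdf(y) / norm.pdf(y, loc=THRESHOLD)
p_is = np.mean((simulator(y) > THRESHOLD) * weights)

print(p_crude, p_is, 1 - norm.cdf(THRESHOLD))  # both approximate the true tail
```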
Abstract:
Several works have been published in recent years concerning the modelling and implementation of the operation of the visual cortex. Most of them present simple neurons with just two different responses, namely inhibitory and excitatory. Some of the different types of visual cortex cells are simulated with these configurations.
Abstract:
One of the most challenging problems to be solved by any theoretical model purporting to explain the competence of the human brain for relational tasks is the analysis and representation of the internal structure of an extended spatial layout of multiple objects. Some of these problems concern specific aims, such as how to extract and represent spatial relationships among objects, or how to represent the movement of a selected object. The main objective of this paper is the study of some plausible brain structures that can provide answers to these problems. Moreover, in order to achieve more concrete knowledge, our study focuses on the response of the retinal layers in optical information processing and on how this information can be processed in the first cortical layers. The model reported here is just a first attempt, and some major additions are needed to complete the whole vision process.
Abstract:
The need to decarbonize urban mobility is one of the main motivations for all countries to achieve reduction targets for greenhouse gas (GHG) emissions. In general, the transport modes that have experienced the most growth in recent years tend to be the most polluting. Most efforts have focused on improvements in vehicle efficiency and on the renewal of vehicle fleets; more emphasis should be placed on strategies related to the management of urban mobility and modal share. Research on individual travel that analyzes carbon dioxide (CO2) emissions and the shares of car and public transport in daily mobility will enable better assessments of the potential of urban mobility measures introduced to limit GHG emissions produced by transport in cities. The climate change impacts of daily mobility in Spain are explored with data from two national travel surveys, in 2000 and 2006, and a method for estimating the CO2 emissions associated with each journey and each surveyed individual is provided. The results demonstrate that from 2000 to 2006 daily mobility increased, leading to a 17% increase in CO2 emissions. When these results are separated by transport mode, cars prove to be the main contributor to that increase, followed by public transport. More focus should be directed toward modal shift strategies, which take into account not only the number of journeys but also the distance traveled. These contributions have potential applications in the assessment of current and future urban transport policies related to low-carbon urban transportation.
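The per-journey estimation idea can be sketched in a few lines: CO2 for a journey is its distance multiplied by an emission factor for its mode, aggregated per surveyed individual. The emission factors and journey records below are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch: journey CO2 = distance x mode emission factor, aggregated
# per surveyed individual. All factors and records are assumed for the example.
EMISSION_FACTOR_G_PER_KM = {   # grams CO2 per passenger-km (assumed values)
    "car": 180.0,
    "bus": 80.0,
    "metro": 40.0,
    "walk": 0.0,
}

journeys = [                   # (person_id, mode, distance_km)
    (1, "car", 12.4),
    (1, "walk", 1.0),
    (2, "bus", 6.3),
    (2, "metro", 9.8),
]

totals = {}
for person, mode, km in journeys:
    totals[person] = totals.get(person, 0.0) + km * EMISSION_FACTOR_G_PER_KM[mode]

for person, grams in sorted(totals.items()):
    print(f"person {person}: {grams / 1000:.2f} kg CO2")
```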
Abstract:
We introduce a trivial but puzzling solar cell structure. It consists of a high-bandgap pn junction (top cell) grown on a substrate of lower bandgap. Let us assume, for example, that the bandgap of the top cell is 1.85 eV (Al0.3Ga0.7As) and the bandgap of the substrate is 1.42 eV (GaAs). Is the open-circuit voltage of the top cell limited to 1.42 V or to 1.85 V? If the answer is “1.85 V”, we could then perform the thought experiment in which we illuminate the cell with 1.5 eV photons (notice that these photons would only be absorbed in the substrate). If we admit that these photons can generate photocurrent, then, because we have also admitted that the voltage is limited to 1.85 V, it might be possible for the electron-hole pairs generated by these photons to be extracted at 1.6 V, for example. However, if we did so, the principles of thermodynamics would be violated, because we would be extracting more energy from each photon than the energy it initially had. How can we then solve this puzzle?
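The constraint invoked in the abstract can be restated loosely as follows (a full treatment would include the entropic terms of detailed balance; the notation here is our assumption):

```latex
% Loose restatement of the energy constraint behind the puzzle: the
% electrical energy extracted per photogenerated carrier pair cannot
% exceed the energy of the absorbed photon,
\[
  qV_{\text{ext}} \le E_{\text{photon}} = h\nu ,
\]
% so extracting the pair created by a 1.5 eV photon at 1.6 V would
% deliver more energy than the photon supplied.
```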
Abstract:
Swarm robotics is a field of multi-robotics in which a large number of robots is coordinated in a distributed and decentralised way. It is based on the use of local rules and of robots that are simple relative to the complexity of the task to be achieved, and it is inspired by social insects. A large number of simple robots can perform complex tasks more efficiently than a single robot, giving robustness and flexibility to the group. In this article, an overview of swarm robotics is given, describing its main properties and characteristics and comparing it to general multi-robotic systems. A review of different research works and experimental results, together with a discussion of the future of swarm robotics in real-world applications, completes this work.
Abstract:
This study suggests a theoretical framework for improving the teaching/learning process of the English employed in aeronautical discourse, bringing together cognitive learning strategies, Genre Analysis, and the Contemporary Theory of Metaphor (Lakoff and Johnson 1980; Lakoff 1993). It maintains that cognitive strategies such as imagery, deduction, inference, and grouping can be enhanced by means of metaphor and genre awareness in the context of a content-based approach to language learning. A list of image metaphors and conceptual metaphors drawn from the terminological database METACITEC is provided. The metaphorical terms from the area of Aeronautics have been taken from specialised dictionaries and categorised according to the conceptual metaphors they respond to, establishing the source domains, the target domains, and the semantic networks found. This information reflects the internal mappings underlying the discourse of aeronautics in five aviation accident case studies related to accident reports from the National Transportation Safety Board (NTSB), and provides an important source for designing language teaching tasks.
Abstract:
Whole brain resting state connectivity is a promising biomarker that might help to obtain an early diagnosis in many neurological diseases, such as dementia. Inferring resting-state connectivity is often based on correlations, which are sensitive to indirect connections, leading to an inaccurate representation of the real backbone of the network. The precision matrix is a better representation for whole brain connectivity, as it considers only direct connections. The network structure can be estimated using the graphical lasso (GL), which achieves sparsity through l1-regularization on the precision matrix. In this paper, we propose a structural connectivity adaptive version of the GL, where weaker anatomical connections are represented as stronger penalties on the corresponding functional connections. We applied beamformer source reconstruction to the resting state MEG recordings of 81 subjects: 29 healthy controls, 22 with single-domain amnestic Mild Cognitive Impairment (MCI), and 30 with multiple-domain amnestic MCI. An atlas-based anatomical parcellation of 66 regions was obtained for each subject, and time series were assigned to each of the regions. The fiber densities between the regions, obtained with deterministic tractography from diffusion-weighted MRI, were used to define the anatomical connectivity. Precision matrices were obtained from the region-specific time series in five different frequency bands. We compared our method with the traditional GL and a functional adaptive version of the GL in terms of log-likelihood and classification accuracy between the three groups. We conclude that introducing an anatomical prior improves the expressivity of the model and, in most cases, leads to a better classification between groups.
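In the notation usually used for the graphical lasso (our assumption, not taken from the paper), the anatomically adaptive variant described above corresponds to a weighted l1 penalty:

```latex
% Weighted graphical-lasso objective for the anatomically adaptive variant
% sketched above (notation assumed: S = sample covariance, \Theta = precision
% matrix, d_{ij} = fiber density, \Lambda_{ij} decreasing in d_{ij}).
\[
  \hat{\Theta} = \arg\max_{\Theta \succ 0}\;
  \log\det\Theta \;-\; \operatorname{tr}(S\Theta)
  \;-\; \sum_{i \ne j} \Lambda_{ij}\,\lvert\Theta_{ij}\rvert ,
  \qquad \Lambda_{ij} \propto \frac{1}{d_{ij} + \varepsilon},
\]
% so weak anatomical connections receive stronger penalties on the
% corresponding functional (precision) entries.
```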
Abstract:
Solar radiation estimates with clear-sky models require estimates of aerosol data. The low spatial resolution of current aerosol datasets, with their remarkable drift from measured data, poses a problem for solar resource estimation. This paper proposes a new downscaling methodology that combines support vector machines for regression (SVR) and kriging with external drift, using data from the MACC reanalysis datasets and temperature and rainfall measurements from 213 meteorological stations in continental Spain. The SVR technique proved efficient in modeling aerosol variables. The Linke turbidity factor (TL) and the aerosol optical depth at 550 nm (AOD 550) estimated with SVR produced significantly lower errors at AERONET positions than the MACC reanalysis estimates. The TL was estimated with a relative mean absolute error (rMAE) of 10.2% (compared with AERONET), against the MACC rMAE of 18.5%. A similar behavior was seen with AOD 550, estimated with an rMAE of 8.6% (compared with AERONET), against the MACC rMAE of 65.6%. Kriging using MACC data as an external drift was found useful for generating high resolution maps (0.05° × 0.05°) of both aerosol variables, and we created such maps for continental Spain for the year 2008. The proposed methodology proved to be a valuable tool for creating high resolution maps of aerosol variables (TL and AOD 550); it shows meaningful improvements when compared with the available estimated databases and therefore leads to more accurate solar resource estimations. The methodology could also be applied to the prediction of other atmospheric variables whose datasets are of low resolution.
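The SVR step of the methodology can be sketched as follows: learn a mapping from the coarse MACC estimate plus station temperature and rainfall to the target aerosol variable (here AOD 550). The synthetic data and hyperparameters are assumptions, and the kriging-with-external-drift step is omitted.

```python
# Minimal sketch of the SVR downscaling step under assumed synthetic data;
# the subsequent kriging with external drift is not shown.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 213                                   # one row per meteorological station
macc_aod = rng.uniform(0.05, 0.4, n)      # coarse reanalysis estimate
temperature = rng.uniform(5, 30, n)
rainfall = rng.uniform(0, 80, n)
aod550 = (0.8 * macc_aod + 0.002 * temperature - 0.0005 * rainfall
          + rng.normal(0, 0.01, n))       # synthetic "ground truth"

X = np.column_stack([macc_aod, temperature, rainfall])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, aod550)

rmae = np.mean(np.abs(model.predict(X) - aod550)) / np.mean(aod550)
print(f"relative MAE: {100 * rmae:.1f}%")
```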