878 results for Data-Information-Knowledge Chain
Abstract:
This paper describes a system for the computer understanding of English. The system answers questions, executes commands, and accepts information in normal English dialog. It uses semantic information and context to understand discourse and to disambiguate sentences. It combines a complete syntactic analysis of each sentence with a "heuristic understander" which uses different kinds of information about a sentence, other parts of the discourse, and general information about the world in deciding what the sentence means. It is based on the belief that a computer cannot deal reasonably with language unless it can "understand" the subject it is discussing. The program is given a detailed model of the knowledge needed by a simple robot having only a hand and an eye. We can give it instructions to manipulate toy objects, interrogate it about the scene, and give it information it will use in deduction. In addition to knowing the properties of toy objects, the program has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carry them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, and asking for clarification when its heuristic programs cannot understand a sentence through use of context and physical knowledge.
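The architecture described above couples syntactic analysis with a world model so that commands and questions are interpreted against what the system knows about its toy scene. As a rough, hedged illustration of that idea only (not Winograd's actual program), the following minimal Python sketch grounds a couple of invented English patterns in a tiny blocks-world model:

```python
# Toy sketch of grounding English commands in a world model; the command
# patterns and object names are invented for illustration.
world = {"red block": {"on": "table"}, "blue pyramid": {"on": "red block"}}

def understand(sentence):
    """Map a tiny subset of English onto actions and queries over the world model."""
    s = sentence.lower().rstrip("?.!")
    if s.startswith("what is on "):
        target = s[len("what is on "):]
        found = [name for name, props in world.items() if props["on"] == target]
        return ", ".join(found) if found else "nothing"
    if s.startswith("put ") and " on " in s:
        obj, dest = s[len("put "):].split(" on ", 1)
        if obj in world:
            world[obj]["on"] = dest
            return "ok"
    return "I don't understand; please rephrase."

print(understand("What is on red block?"))    # blue pyramid
print(understand("Put blue pyramid on table"))
print(understand("What is on table?"))        # red block, blue pyramid
```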
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
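The multiplicative replacement mentioned above rescales the non-zero parts so that the imputed composition still honours the closure constraint, which is what preserves the covariance structure of zero-free subcompositions. A minimal sketch, assuming compositions closed to a constant kappa and a single common imputation value delta (both simplifications for illustration; the cited papers allow part-specific values):

```python
import numpy as np

def multiplicative_replacement(x, delta, kappa=1.0):
    """Impute rounded zeros with delta and rescale the non-zero parts
    multiplicatively so the composition still sums to kappa."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    return np.where(zeros, delta, x * (1.0 - delta * zeros.sum() / kappa))

comp = np.array([0.0, 0.25, 0.30, 0.45])           # composition closed to kappa = 1
imputed = multiplicative_replacement(comp, delta=0.005)
print(imputed, imputed.sum())                       # the sum is still 1.0
```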
Abstract:
This article presents recent WMR (wheeled mobile robot) navigation experiences using local perception knowledge provided by monocular and odometer systems. A narrow local perception horizon is used to plan safe trajectories towards the objective. Monocular data are therefore used to obtain real-time local information by building two-dimensional occupancy grids through time integration of the frames. Path planning is accomplished using attraction potential fields, while trajectory tracking is performed using model predictive control techniques. The results address indoor situations using the available lab platform, a differential-drive mobile robot.
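As a rough illustration of the attraction-potential-field planning mentioned in the abstract (not the authors' implementation), the sketch below combines a quadratic attractive potential towards the goal with a standard repulsive term from occupied cells and follows the negative gradient; the gains, influence radius, and toy obstacle are assumed values:

```python
import numpy as np

def attractive_force(q, goal, k_att=1.0):
    """Negative gradient of the quadratic attraction potential 0.5*k*||q - goal||^2."""
    return -k_att * (q - goal)

def repulsive_force(q, occupied_cells, k_rep=0.5, d0=2.0):
    """Repulsion from occupied grid cells closer than the influence radius d0."""
    f = np.zeros(2)
    for c in occupied_cells:
        d = np.linalg.norm(q - c)
        if 1e-6 < d < d0:
            f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (q - c) / d
    return f

q, goal = np.array([0.0, 0.0]), np.array([8.0, 6.0])
occupied = [np.array([4.0, 2.5])]                  # one occupied cell near the path
for _ in range(200):                               # gradient descent on the combined field
    q = q + 0.05 * (attractive_force(q, goal) + repulsive_force(q, occupied))
print(np.round(q, 2))                              # the robot ends near the goal
```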
Abstract:
I test for the presence of hidden information and hidden action in the automobile insurance market using a data set from several Colombian insurers. To identify hidden information I use a common-knowledge variable that provides information on the policyholder's risk type, is related to both experienced risk and insurance demand, and was excluded from the pricing mechanism. This unused variable is the policyholder's record of traffic offenses. I find evidence of adverse selection in six of the nine insurance companies for which the test is performed. On the hidden-action side, I develop a dynamic model of effort in accident prevention under an insurance contract with a bonus experience-rating scheme and show that individual accident probability decreases with previous accidents. This result yields a testable implication for the empirical identification of hidden action, and based on it I estimate an econometric model of the time spans between the purchase of the insurance and the first claim, between the first claim and the second one, and so on. I find strong evidence of unobserved heterogeneity that confounds the testable implication. Once unobserved heterogeneity is controlled for, I find conclusive statistical grounds supporting the presence of moral hazard in the Colombian insurance market.
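The hidden-information test described above amounts to checking whether a variable excluded from pricing (the traffic-offense record) predicts both insurance demand and experienced risk. A hedged sketch of that logic on simulated data, using generic column names that are assumptions rather than the paper's actual variables:

```python
# If a variable known to both parties but excluded from pricing predicts both
# coverage choice and claims, the pattern is consistent with hidden information.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
offenses = rng.poisson(0.5, n)                     # unused observable
age = rng.integers(18, 70, n)                      # pricing control
coverage = (0.3 * offenses + rng.normal(size=n) > 0.5).astype(int)
claim = (0.4 * offenses + rng.normal(size=n) > 1.0).astype(int)
df = pd.DataFrame(dict(coverage=coverage, claim=claim, offenses=offenses, age=age))

m_demand = smf.logit("coverage ~ offenses + age", data=df).fit(disp=0)
m_risk = smf.logit("claim ~ offenses + age", data=df).fit(disp=0)
print(m_demand.params["offenses"], m_risk.params["offenses"])
# Both coefficients positive in this simulated example, i.e. the pattern the
# test looks for when diagnosing adverse selection.
```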
Abstract:
List of topics and slides summarising legal perspectives, with suggested methods on how to revise for the exam.
Abstract:
Title: Data-Driven Text Generation using Neural Networks Speaker: Pavlos Vougiouklis, University of Southampton Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models could be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed. Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton and is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing.

Title: Provenance is Complicated and Boring — Is there a solution? Speaker: Darren Richardson, University of Southampton Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse-meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of then communicating that provenance to casual users is not trivial: users should not need a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance, without having to build a bespoke system for each and every different provenance installation? Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
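As a generic illustration of the kind of neural language model the first talk refers to (not the speaker's architecture), here is a minimal PyTorch sketch of a recurrent model trained to predict the next token, which is the core ingredient of data-driven text generation:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A small recurrent language model: embed tokens, run an LSTM,
    and predict a distribution over the next token at each position."""
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))     # (batch, seq_len, hidden)
        return self.out(h)                      # logits over the next token

# Toy usage with random token ids standing in for real training text.
vocab_size = 100
model = TinyLM(vocab_size)
tokens = torch.randint(0, vocab_size, (2, 10))
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
print(float(loss))
```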
Abstract:
This research arises from the worldwide problem of student failure. It analyzes, in particular, the metacognitive competences involved in the writing process of this population. Based on Flavell's (1992) view of metacognition and the socio-cognitive approach to self-regulation, two variables were measured: metacognitive knowledge and self-regulation strategies. A qualitative study was conducted on a sample of 12 first-year French university students, using a specific interview technique known as the "explicitation interview". The data analysis included the categorization, codification and quantification of the information obtained from the interviews. In conclusion, even though the students had metacognitive knowledge related to the written tasks, they did not show strategies that would help them go beyond the descriptive mode of written discourse by taking the readers' expectations into account. Their writing processes focused on the transcription of ideas, with little control over the planning and revision phases.
Abstract:
Introduction: the statistical record used in the Field Academic Programs (PAC, for its initials in Spanish) of Rehabilitation captures only generalities in its data conceptualization, which hampers reliable decision-making and provides little support for research in rehabilitation and disability. In response, the Research Group in Rehabilitation and Social Integration of Persons with Disabilities has worked on the creation of a registry to characterize the population seen by the Rehabilitation PAC. This registry includes the use of the WHO's International Classification of Functioning, Disability and Health (ICF). Methodology: the proposed methodology comprises two phases: the first is a descriptive study and the second applies the Methontology methodology, which integrates the identification and development of ontological knowledge. This article describes the progress made in the second phase. Results: the development of the registry in 2008, as an information system, included a documentary review and the analysis of possible use scenarios to help guide the design and development of the SIDUR system. The system uses the ICF because it is a standardized terminology that reduces ambiguity and makes it easier to transform health facts into data translatable into information systems. The record comprises three categories and a total of 129 variables. Conclusions: SIDUR facilitates access to accurate and up-to-date information, useful for decision-making and research.
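To make the idea of an ICF-backed registry record concrete, here is a minimal sketch of how local variables could be grouped and paired with ICF codes and qualifiers; the field names and example codes are illustrative assumptions, not the actual SIDUR schema of 129 variables:

```python
# Illustrative record structure pairing local registry fields with ICF codes.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RehabilitationRecord:
    person_id: str
    sociodemographic: Dict[str, str] = field(default_factory=dict)
    # ICF category -> qualifier, e.g. "b280" (sensation of pain) with qualifier "2" (moderate)
    icf_codes: Dict[str, str] = field(default_factory=dict)
    interventions: Dict[str, str] = field(default_factory=dict)

record = RehabilitationRecord(
    person_id="0001",
    sociodemographic={"age": "34", "sex": "F"},
    icf_codes={"b280": "2", "d450": "1"},   # body-function and activity examples
    interventions={"physiotherapy": "weekly"},
)
print(record.icf_codes)
```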
Abstract:
Aim: To review the current knowledge about suicide in cancer patients. Method: We searched specialized databases using keywords for articles published in the last two decades (1990-2010), and compiled and reviewed them in order to: indicate the prevalence of suicide in cancer patients worldwide and in Colombia, differentiating the data by sex and age; establish the types of cancer that are associated with suicide; identify risk factors for committing or considering suicide; and present the strategies of professional and psychological intervention directed at cancer patients with suicidal ideation and suicide attempts. The present article is a review of the information on the subject. Results: We found that: in cancer patients, the suicide rate is two times higher than in the general population; depression, suicidal ideation and location of cancer are some of the risk factors for suicide; and there is a lack of published guidelines for professional management of the suicidal patient with cancer. Conclusion: The need to carry out research on the topic of suicide in cancer patients was established.
Abstract:
This paper compiles relevant academic literature on entry strategies and decision-making methodologies for contracting outsourcing services in the case of companies planning to expand into foreign markets. The way a company plans its entry into a foreign market, considers and evaluates the relevant information, and designs its strategy determines whether or not that entry succeeds. The methodologies considered focus on the strategic level of the organizational pyramid, moving from simple methods to those based on Multicriteria Decision Theory, both individual and hybrid. Finally, System Dynamics is presented as a valuable tool in the process, since it can be combined with multicriteria methods.
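Among the simple methods that precede full multicriteria models, a weighted-sum score already captures the basic trade-off between entry modes. A minimal sketch with invented criteria, weights, and scores:

```python
# Weighted-sum multicriteria scoring of market-entry modes (illustrative data).
criteria_weights = {"cost": 0.4, "control": 0.35, "speed": 0.25}

entry_modes = {
    "exporting":     {"cost": 0.8, "control": 0.3, "speed": 0.9},
    "joint venture": {"cost": 0.5, "control": 0.6, "speed": 0.6},
    "wholly owned":  {"cost": 0.2, "control": 0.9, "speed": 0.3},
}

def weighted_score(scores, weights):
    """Aggregate normalized criterion scores with the given weights."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(entry_modes,
                 key=lambda m: weighted_score(entry_modes[m], criteria_weights),
                 reverse=True)
for mode in ranking:
    print(mode, round(weighted_score(entry_modes[mode], criteria_weights), 3))
```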
Abstract:
Information technologies have become an important factor to consider in each of the processes carried out along the supply chain. Their implementation and proper use give companies advantages that improve operational performance throughout the chain. The development and application of software have contributed to integrating the different members of the chain, so that everyone from suppliers to the end customer perceives benefits in operational performance variables and in the level of satisfaction, respectively. On the other hand, it is important to note that implementation does not always yield positive results; on the contrary, the implementation process can be seriously affected by barriers that prevent companies from maximizing the benefits that ICT provides.
Abstract:
With the growing popularity of IT solutions as a key factor for increasing competitiveness and creating value for companies, the need to invest in IT projects has risen considerably. Limited resources as an obstacle to investment have forced companies to look for methodologies to select and prioritize projects, ensuring that the decisions taken are those aligned with corporate strategies so as to guarantee value creation and benefit maximization. This thesis provides the foundations for implementing IT Project Portfolio Management (IT PPM) as an effective methodology for managing IT-based projects and as a tool that gives executives clear decision-making criteria. The document explains how to implement IT PPM in seven steps, analyzing the processes and functions required for its successful execution. It also provides different methods and criteria for project selection and prioritization. After the theoretical part describing IT PPM, the thesis presents a case-study analysis of a pharmaceutical company. The company already has a project management department, but the need to implement IT PPM was identified because of its broad end-to-end process coverage of IT projects and as a way to ensure benefit maximization. Combining the theoretical research and the case-study analysis, the thesis concludes with a practical definition of an approximate IT PPM model as a recommendation for its implementation in the Project Management Department.
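One common ingredient of IT PPM prioritization is scoring candidate projects against strategic criteria and selecting a portfolio under a budget constraint. The sketch below uses an assumed two-criterion score and a simple greedy rule purely for illustration; it is not the model proposed in the thesis:

```python
# Score projects on strategic alignment and expected benefit, then greedily
# pick the best score-per-cost projects that fit the budget (illustrative data).
projects = [
    {"name": "ERP upgrade",      "cost": 300, "alignment": 0.9, "benefit": 0.8},
    {"name": "CRM rollout",      "cost": 200, "alignment": 0.6, "benefit": 0.7},
    {"name": "Data warehouse",   "cost": 250, "alignment": 0.8, "benefit": 0.6},
    {"name": "Legacy migration", "cost": 150, "alignment": 0.4, "benefit": 0.5},
]
budget = 500

def score(p, w_align=0.6, w_benefit=0.4):
    return w_align * p["alignment"] + w_benefit * p["benefit"]

portfolio, spent = [], 0
for p in sorted(projects, key=lambda p: score(p) / p["cost"], reverse=True):
    if spent + p["cost"] <= budget:
        portfolio.append(p["name"])
        spent += p["cost"]
print(portfolio, spent)
```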
Abstract:
The objective of this work is to study the supply chain in business organizations from the perspective of System Dynamics and how it can contribute to the performance and control of supply chains. It addresses knowledge of three Supply Chain perspectives and their relation to system dynamics, identifies the types of integration in supply chain management activities and their planning horizons, and analyzes the Supply Chain Management applications that have been based on the system dynamics methodology. To this end, the research begins by defining the problem of joining these two areas and lays out the theoretical framework underlying both disciplines. It then covers the methodology used by System Dynamics and the different aspects of the supply chain, and goes deeper into how the two disciplines converge and how SD supports SCM (Supply Chain Management), describing the work carried out under the different approaches based on system dynamics. Finally, we present conclusions and comments on this field of research and its relevance to the Supply Chain field. This research spans two major schools of thought: a systemic one, through the system dynamics methodology, and an analytic one, used in Supply Chain. A literature review was carried out on applications of system dynamics (SD) in the Supply Chain area and their common ground, documenting important uses of this methodology in supply chain management.
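As a minimal illustration of the stock-and-flow view that system dynamics brings to supply chains (not a model from the reviewed literature), the sketch below integrates a single inventory stock whose replenishment rule closes the feedback loop; all parameter values are invented:

```python
# One-stock inventory model integrated with Euler steps (illustrative parameters).
dt, horizon = 0.25, 40.0
inventory, target = 60.0, 100.0        # stock and its desired level (units)
adjust_time, demand = 4.0, 10.0        # time to correct a gap; constant demand rate

for _ in range(int(horizon / dt)):
    orders = demand + (target - inventory) / adjust_time   # replenishment decision rule
    shipments = demand                                      # outgoing flow
    inventory += dt * (orders - shipments)                  # Euler integration of the stock
print(round(inventory, 2))             # the stock settles close to the 100-unit target
```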