904 results for 100602 Input Output and Data Devices
Abstract:
Advancements in information technology have made it possible for organizations to gather and store vast amounts of data about their customers. Information stored in databases can be highly valuable for organizations; however, analyzing large databases has proven difficult in practice. For companies in the retail industry, customer intelligence can be used to identify profitable customers, their characteristics, and their behavior. By clustering customers into homogeneous groups, companies can more effectively manage their customer base and target profitable customer segments. This thesis studies the use of the self-organizing map (SOM) as a method for analyzing large customer datasets, clustering customers, and discovering information about customer behavior. The aim of the thesis is to determine whether the SOM could be a practical tool for retail companies to analyze their customer data.
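As a rough illustration of the clustering idea this thesis pursues, the sketch below trains a tiny self-organizing map on made-up two-feature customer vectors. The features, grid size, and learning schedule are all illustrative assumptions, not taken from the thesis:

```python
import math
import random

def train_som(data, grid_w, grid_h, epochs=50, lr0=0.5, sigma0=None, seed=0):
    """Train a small SOM on vectors in `data`; returns a grid_h x grid_w
    grid of weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    if sigma0 is None:
        sigma0 = max(grid_w, grid_h) / 2.0
    # Initialise weights randomly in [0, 1), matching the normalised inputs.
    grid = [[[rng.random() for _ in range(dim)] for _ in range(grid_w)]
            for _ in range(grid_h)]
    total = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            frac = t / total
            lr = lr0 * (1 - frac)              # learning rate decays to 0
            sigma = sigma0 * (1 - frac) + 1e-9  # neighbourhood radius shrinks
            # Best-matching unit: the cell whose weights are closest to x.
            by, bx = min(((r, c) for r in range(grid_h) for c in range(grid_w)),
                         key=lambda rc: sum((grid[rc[0]][rc[1]][i] - x[i]) ** 2
                                            for i in range(dim)))
            # Pull the BMU and its grid neighbours towards x.
            for r in range(grid_h):
                for c in range(grid_w):
                    d2 = (r - by) ** 2 + (c - bx) ** 2
                    h = math.exp(-d2 / (2 * sigma * sigma))
                    w = grid[r][c]
                    for i in range(dim):
                        w[i] += lr * h * (x[i] - w[i])
            t += 1
    return grid

def bmu(grid, x):
    """Map a vector to its best-matching grid cell (row, col)."""
    return min(((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
               key=lambda rc: sum((grid[rc[0]][rc[1]][i] - x[i]) ** 2
                                  for i in range(len(x))))

# Hypothetical customer features: [normalised spend, normalised visit frequency]
customers = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
som = train_som(customers, grid_w=3, grid_h=3)
low = bmu(som, [0.12, 0.22])    # a low-spend customer
high = bmu(som, [0.88, 0.85])   # a high-spend customer
```

Nearby grid cells end up representing similar customers, so each cell (or a group of adjacent cells) can be read as a candidate customer segment.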
Abstract:
In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) and its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied, using the management system TOSCANA, in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps, we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide programmers to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
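Taking the abstract's reported numbers at face value, the implied cycle counts can be worked out directly. This is a back-of-the-envelope sketch; the prior-work figure is only a lower bound, since the abstract says "over a factor of 1000":

```python
# Figures reported in the abstract above.
NULL_THREAD_CYCLES = 66                  # cycles to migrate a null thread
TYPICAL_FACTOR_LOW, TYPICAL_FACTOR_HIGH = 4, 5   # typical thread is 4-5x a null thread
PRIOR_SPEEDUP = 1000                     # "over a factor of 1000 improvement"

typical_low = NULL_THREAD_CYCLES * TYPICAL_FACTOR_LOW    # 264 cycles
typical_high = NULL_THREAD_CYCLES * TYPICAL_FACTOR_HIGH  # 330 cycles
prior_cost = NULL_THREAD_CYCLES * PRIOR_SPEEDUP          # >= 66,000 cycles in earlier work
print(typical_low, typical_high, prior_cost)
```

A few hundred cycles per migration is indeed in the same ballpark as an L2 cache miss, which is the comparison the text makes.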
Abstract:
The COntext INterchange (COIN) strategy is an approach to solving the problem of interoperability of semantically heterogeneous data sources through context mediation. COIN has used its own notation and syntax for representing ontologies. More recently, the OWL Web Ontology Language is becoming established as the W3C recommended ontology language. We propose the use of the COIN strategy to solve context disparity and ontology interoperability problems in the emerging Semantic Web – both at the ontology level and at the data level. In conjunction with this, we propose a version of the COIN ontology model that uses OWL and the emerging rules interchange language, RuleML.
Abstract:
A look at the Southampton Nanofabrication Centre, where electro-photonic research is carried out, and at AMD's industrial processes for creating commercial quantities of silicon computing devices.
Abstract:
Resources from the Singapore Summer School 2014 hosted by NUS. ws-summerschool.comp.nus.edu.sg
Abstract:
Speaker: Dr Kieron O'Hara. Time: 04/02/2015 11:00-11:45. Location: B32/3077. In order to reap the potential societal benefits of big and broad data, it is essential to share and link personal data. However, privacy and data protection considerations mean that, to be shared, personal data must be anonymised, so that the data subject cannot be identified from the data. Anonymisation is therefore a vital tool for data sharing, but deanonymisation, or reidentification, is always possible given sufficient auxiliary information (and as the amount of data grows, both in terms of creation, and in terms of availability in the public domain, the probability of finding such auxiliary information grows). This creates issues for the management of anonymisation, which are exacerbated not only by uncertainties about the future, but also by misunderstandings about the process(es) of anonymisation. This talk discusses these issues in relation to privacy, risk management and security, reports on recent theoretical tools created by the UKAN network of statistics professionals (on which the author is one of the leads), and asks how long anonymisation can remain a useful tool, and what might replace it.
Abstract:
Abstract partially taken from the publication itself.
Abstract:
The study seeks to determine the extent to which a person's academic assessment affects their later life, i.e., what a person's performance may be as a function of the educational process they have followed. The sample comprises fourth-year Bachillerato students, aged 13 to 14, who studied at Cheste during the 1970-1971 and 1971-1972 academic years under the 1967 curriculum: 681 students in total, of whom 53.86 percent came from rural areas and 46.14 percent from urban areas. The study first reviews the theory behind input-output studies in both psychology and education, taking note of the problems they raise and that it will have to face; it then sets out the investigation, selecting the variables to be studied, collecting and coding data, choosing a population sample, and applying those variables to reach the conclusions the study finally offers, opening the door to further work of the same kind. Instruments: survey, questionnaire, personal interview, test (AMPE). Input variables include state variables (psychological data) and flow variables (academic performance). The psychological variables considered are aptitude for study, paranoid personality versus control, intellectual capacity, and extraversion. The performance variables studied are performance in the fourth year of Bachillerato, performance in the third year of Bachillerato, and physical-sports skills. The output variables considered are occupational status, personal situation, economic situation, and social situation. Methods: factor analysis, multiple regression, Pearson correlation, input-output analysis.
The results are implicit in the following conclusions: 1) Academic components have little influence on the subject's later life, although they mark or detect, in some sense, their social and interpersonal situation more than the other aspects; this suggests that classroom grades assess social behaviour as well as knowledge. 2) Psychological components have more influence on the personal situation among the outputs considered. 3) The input-output analysis shows that the outputs are not fully explained by the inputs studied, which highlights the need to introduce many other variables, and in large numbers, when considering the aspects treated.
Abstract:
PowerPoint presentation on electrosurgery and surgical stapling
Abstract:
The object of analysis in the present text is the issue of operational control and data retention in Poland. The analysis of this issue follows from a critical stance taken by NGOs and state institutions on the scope of operational control wielded by the Polish police and special services – it concerns, in particular, the employment of "itemized phone bills and the so-called phone tapping." Besides the quantitative analysis of operational control and the scope of data retention, the text features the conclusions the Human Rights Defender referred to the Constitutional Tribunal in 2011. It must be noted that the main problems concerned with the employment of operational control and data retention are caused by: (1) a lack of specification of the technical means which can be used by individual services; (2) a lack of specification of what kind of information and evidence is in question; (3) an open catalogue of information and evidence which can be clandestinely acquired in an operational mode. Furthermore, with regard to the access to teleinformation data granted by the Telecommunications Act, attention should be drawn to the wide array of data submitted to particular services. The text also draws on so-called open interviews, conducted mainly with former police officers, with a view to pointing to some non-formal reasons for "phone tapping" in Poland; these are presented in the form of a summary.