805 results for Knowledge Discovery in Databases
Abstract:
Australia is unique as a populated continent in that canine rabies is exotic, with only one likely incursion in 1867. This is despite the presence of a widespread free-ranging dog population, which includes the naturalized dingo, feral domestic dogs and dingo-dog cross-breeds. To Australia's immediate north, rabies has recently spread within the Indonesian archipelago, with outbreaks occurring in historically rabies-free islands to the east including Bali, Flores, Ambon and the Tanimbar Islands. Australia depends on strict quarantine protocols to prevent importation of a rabid animal, but the risk of illegal animal movements by fishing and recreational vessels circumventing quarantine remains. Predicting where rabies will enter Australia is important, but understanding dog population dynamics and interactions, including contact rates in and around human populations, is essential for rabies preparedness. The interactions among and between Australia's large populations of wild, free-roaming and restrained domestic dogs require quantification for rabies incursions to be detected and controlled. The imminent risk of rabies breaching Australian borders makes the development of disease spread models that will assist in the deployment of cost-effective surveillance, improve preventive strategies and guide disease management protocols vitally important. Here, we critically review Australia's preparedness for rabies, discuss prevailing assumptions and models, identify knowledge deficits in free-roaming dog ecology relating to rabies maintenance and speculate on the likely consequences of endemic rabies for Australia.
Abstract:
Community-based participatory research necessitates that community members act as partners in decision making and mutual learning and discovery. In the same light, for programs and issues involving youth, youth should be partners in knowledge sharing and evaluation (Checkoway & Richards-Schuster, 2004). This study is a youth-focused empowerment evaluation of the Successful Youth program. Successful Youth is a multi-component youth development after-school program for Latino middle school youth, created with the goal of reducing teen pregnancy. An empowerment evaluation is collaborative and participatory (Balcazar & Harper, 2003). The three steps of an empowerment evaluation are: (1) defining the mission, (2) taking stock, and (3) planning for the future (Fetterman, 2001). In a program where youth are developing leadership skills, making choices, and learning how to self-reflect and evaluate, the empowerment evaluation could not be more aligned with promoting and enhancing these skills. In addition, an empowerment evaluation is designed to "foster improvement and self-determination" and "build capacity" (Fetterman, 2001). Four empowerment groups were conducted with approximately 6-9 Latino 7th-grade students per group. All participants were enrolled in the Successful Youth program. Results indicate points where students' perceptions of the program were aligned with the program's mission and where gaps were identified. Students offered recommendations for program improvements. Additionally, students enjoyed expressing their feelings about the program and appreciated that their opinions were valued. Youth recommendations will be brought to program staff and, where possible, gaps will be addressed. Empowerment evaluations with youth will continue for the duration of the program so that youth involvement and input remain integral to the evaluation and to ascertain whether the program's goals are being met.
Abstract:
Scholars agree that governance of the public environment entails cooperation between science, policy and society. This requires the active role of public managers as catalysts of knowledge co-production, addressing participatory arenas in relation to knowledge integration and social learning. This paper deals with the question of whether public managers acknowledge and take on this task. A survey of the Directors of Environmental Offices (EOs) of 64 municipalities was carried out in parallel for two regions - Tuscany (Italy) and the Porto Alegre Metropolitan Region (Brazil). The survey data were analysed using multiple correspondence analysis. Results showed that, regarding policy practices, EOs do not play the role of knowledge co-production catalysts, since they rely only on technical knowledge when making environmental decisions. We conclude that there is a gap between theory and practice, and identify some factors that may hinder local environmental managers from acting as catalysts of knowledge co-production, raising a further question for future research.
Abstract:
Since its discovery in 1974 (Klitgord and Mudie, 1974), the Galapagos mounds hydrothermal field has received much attention. Sediment samples were taken during Leg 54 of the Deep Sea Drilling Project (DSDP) and by other expeditions to the area (e.g., Corliss et al., 1978). While a hydrothermal origin for the mounds sediments has been generally accepted, several different theories of origin for the mounds themselves have been proposed (e.g., Corliss et al., 1978; Natland et al., 1979; Williams et al., 1979). One of the aims of DSDP Leg 70 was to return to the mounds field and, using the new hydraulic piston corer described elsewhere in this volume, to obtain more complete recovery of mounds sediments than had previously been possible. It was our hope that this would help in our understanding of the nature and origin of these deposits. In this chapter, we describe the results of chemical analysis of over 250 sediment samples taken during the course of Leg 70.
Abstract:
This paper develops a Capability Matrix for analyzing the capabilities of developing-country firms that participate in global and national value chains. It is a generic framework to capture firm-level knowledge accumulation in the context of global and local industrial constellations, integrating key elements of the global value chain (GVC) and technological capabilities (TC) approaches. The framework can visually portray the characteristics of firms' capabilities and highlight a relatively overlooked factor in the GVC approach: local firms' endogenous learning efforts in their various relationships with lead firms.
Abstract:
This work deals with quality-level prediction in concrete structures through the assistance of an expert system which is able to apply reasoning to this field of structural engineering. Evidence, hypotheses and factors related to this field of human knowledge have been codified into a Knowledge Base in terms of probabilities for the presence of hypotheses or evidence, and the conditional presence of both. Human experts in structural engineering and the safety of structures contributed the invaluable knowledge and assistance necessary to construct the "computer knowledge body".
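A probabilistic knowledge base of this kind typically combines a prior belief in a hypothesis with conditional evidence probabilities via Bayes' rule. The sketch below is only illustrative; the hypothesis, evidence and all numbers are invented assumptions, not values from the system described in the abstract.

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes update: belief in a hypothesis H (e.g. 'low concrete quality')
    after observing evidence E (e.g. 'surface cracking').
    p_h            : prior probability of H
    p_e_given_h    : probability of E when H holds
    p_e_given_not_h: probability of E when H does not hold
    """
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Illustrative numbers only: with a 0.5 prior and evidence four times
# likelier under H, the belief in H rises to 0.8.
belief = posterior(0.5, 0.8, 0.2)  # → 0.8
```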
Abstract:
In this position paper, we claim that the need for time-consuming data preparation and result interpretation tasks in knowledge discovery, as well as for costly expert consultation and consensus-building activities required for ontology building, can be reduced by exploiting the interplay of data mining and ontology engineering. The aim is to obtain, in a semi-automatic way, new knowledge from distributed data sources that can be used for inference and reasoning, as well as to guide the extraction of further knowledge from these data sources. The proposed approach is based on the creation of a novel knowledge discovery method relying on the combination, through an iterative "feedback loop", of (a) data mining techniques to let implicit models emerge from data and (b) pattern-based ontology engineering to capture these models in reusable, conceptual and inferable artefacts.
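A minimal sketch of such a feedback loop, with frequent attribute-pair mining standing in for the data mining step and a plain dictionary standing in for a pattern-based ontology. All names and the support-threshold feedback rule are invented for illustration, not taken from the paper's method.

```python
from collections import Counter
from itertools import combinations

def mine_patterns(records, min_support):
    """Data mining step: find frequent attribute pairs (implicit models)."""
    counts = Counter()
    for rec in records:
        for pair in combinations(sorted(rec), 2):
            counts[pair] += 1
    return [pair for pair, c in counts.items() if c >= min_support]

def capture_as_ontology(patterns, ontology):
    """Ontology engineering step: capture each pattern as a reusable concept."""
    for a, b in patterns:
        ontology.setdefault(f"{a}_{b}_Concept", {"relates": (a, b)})
    return ontology

def feedback_loop(records, min_support, rounds=2):
    """Iterate mining and capture; the ontology feeds back into mining
    (here crudely, by raising the support threshold each round)."""
    ontology = {}
    for _ in range(rounds):
        patterns = mine_patterns(records, min_support)
        ontology = capture_as_ontology(patterns, ontology)
        min_support += 1
    return ontology
```

For example, three records in which "cough" and "fever" co-occur twice yield a single captured concept relating the two attributes.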
Abstract:
Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, for increasing the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.
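Model-based configuration of this kind can be sketched as constraint filtering over a product model. The options and compatibility constraints below are invented assumptions for a toy PC configurator, not the product model used in the paper.

```python
from itertools import product

# Hypothetical product model: option domains and compatibility constraints.
OPTIONS = {
    "cpu": ["basic", "fast"],
    "ram": ["8GB", "16GB"],
    "use": ["office", "gaming"],
}
CONSTRAINTS = [
    lambda c: not (c["use"] == "gaming" and c["cpu"] == "basic"),
    lambda c: not (c["use"] == "gaming" and c["ram"] == "8GB"),
]

def configure(preferences):
    """Return every configuration consistent with both the product model
    and the customer's stated preferences."""
    keys = list(OPTIONS)
    valid = []
    for combo in product(*(OPTIONS[k] for k in keys)):
        cand = dict(zip(keys, combo))
        if all(cand.get(k) == v for k, v in preferences.items()) and \
           all(check(cand) for check in CONSTRAINTS):
            valid.append(cand)
    return valid
```

A customer who states only "use: gaming" is offered the single configuration the constraints allow (fast CPU, 16GB RAM).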
Abstract:
Data mining is a field of computer science concerned with the process of discovering patterns in large volumes of data. Data mining seeks to generate information similar to what a human expert could produce. It is also the process of discovering interesting knowledge, such as patterns, associations, changes, anomalies and significant structures, from large amounts of data stored in databases, data warehouses or any other information storage medium. Machine learning is a branch of artificial intelligence whose objective is to develop techniques that allow computers to learn; more concretely, to create programs capable of generalizing behaviours from unstructured information supplied in the form of examples. Data mining uses machine learning methods to discover and enumerate patterns present in data. In recent years, classification and machine learning techniques have been applied in a large number of areas, such as healthcare, commerce and security. A very current example is the detection of fraudulent behaviour and transactions in banks. One application of interest is the use of the techniques developed for fraud detection in the identification of users inside intelligent environments, without the need for an authentication process. To verify that these techniques are effective during the analysis phase of a given solution, it is necessary to create a platform that supports the development, validation and evaluation of learning and classification algorithms in the application environments under study. The proposed project is defined for the creation of a platform that allows machine learning algorithms to be evaluated as identification mechanisms in intelligent spaces. Both the algorithms specific to this type of technique and the existing platforms will be studied in order to define a set of specific requirements for the platform to be developed. After this analysis, the platform will be partially developed, then validated with proofs of concept, and finally verified in a research environment to be defined.
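The evaluation such a platform performs can be sketched as training an identification model on enrolled behaviour samples and measuring its accuracy on held-out samples. The 1-nearest-neighbour classifier and the feature vectors below are illustrative assumptions, not the algorithms the project evaluates.

```python
def euclidean(a, b):
    """Distance between two behaviour feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nn_identify(train, sample):
    """1-nearest-neighbour identification: the label of the closest
    enrolled sample is taken as the user's identity."""
    return min(train, key=lambda t: euclidean(t[0], sample))[1]

def evaluate(train, test):
    """Platform-style evaluation: identification accuracy on held-out data."""
    hits = sum(nn_identify(train, feats) == label for feats, label in test)
    return hits / len(test)
```

With two enrolled users and two held-out samples near their respective profiles, the harness reports perfect accuracy, illustrating the validation step described above.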
Abstract:
The risks associated with gestational diabetes (GD) can be reduced with an active treatment able to improve glycemic control. Advances in mobile health can provide new patient-centric models for GD to create personalized health care services, increase patient independence, improve patients' self-management capabilities, and potentially improve their treatment compliance. In these models, decision-support functions play an essential role. The telemedicine system MobiGuide provides personalized medical decision support for GD patients that is based on computerized clinical guidelines and adapted to a mobile environment. The patient's access to the system is supported by a smartphone-based application that enhances the efficiency and ease of use of the system. We formalized the GD guideline into a computer-interpretable guideline (CIG). We identified several workflows that provide decision-support functionalities to patients and 4 types of personalized advice to be delivered through a mobile application at home, a preliminary step towards providing decision-support tools in a telemedicine system: (1) therapy, to help patients comply with medical prescriptions; (2) monitoring, to help patients comply with monitoring instructions; (3) clinical assessment, to inform patients about their health conditions; and (4) upcoming events, to deal with patients' personal context or special events. The whole process of specifying patient-oriented decision-support functionalities ensures that the system is based on the knowledge contained in the GD clinical guideline, and thus follows evidence-based recommendations while remaining patient-oriented, which could enhance clinical outcomes and patients' acceptance of the whole system.
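The routing of patient events to the 4 advice types can be sketched as a rule dispatcher. The rules, event fields and the glucose threshold below are invented simplifications; a real CIG encodes far richer, evidence-based logic.

```python
def advise(event):
    """Map a patient event to one of the 4 advice types (illustrative rules,
    not the MobiGuide CIG; the 140 mg/dL threshold is an assumption)."""
    kind = event["type"]
    if kind == "prescription_due":
        return ("therapy", f"Time to take {event['drug']}")
    if kind == "measurement_due":
        return ("monitoring", f"Please measure your {event['measure']}")
    if kind == "glucose_reading":
        status = "high" if event["value"] > 140 else "in range"
        return ("clinical assessment", f"Your glucose is {status}")
    if kind == "travel":
        return ("upcoming events", "Remember to pack your glucometer")
    return (None, "")
```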
Abstract:
Providing descriptions of isolated sensors and sensor networks in natural language, understandable by the general public, is useful to help users find relevant sensors and analyze sensor data. In this paper, we discuss the feasibility of using geographic knowledge from public databases available on the Web (such as OpenStreetMap, Geonames, or DBpedia) to automatically construct such descriptions. We present a general method that uses such information to generate sensor descriptions in natural language. The evaluation of our method on a national hydrologic sensor network showed that this approach is feasible and capable of generating adequate sensor descriptions with a lower development effort than other approaches. In the paper we also analyze certain problems that we found in public databases (e.g., heterogeneity, non-standard use of labels, or rigid search methods) and their impact on the generation of sensor descriptions.
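The core of such a method can be sketched as template-based text generation over retrieved geographic facts. The sensor fields, place names and template below are invented assumptions standing in for knowledge actually retrieved from sources like OpenStreetMap or Geonames.

```python
def describe_sensor(sensor, places):
    """Generate a natural-language sensor description from geographic facts.
    `places` stands in for nearby named features retrieved from public
    databases; the template is a simplified illustration."""
    near = ", ".join(places) if places else "an unnamed location"
    return (f"This {sensor['kind']} sensor measures {sensor['magnitude']} "
            f"near {near}.")
```

A hypothetical hydrologic gauge with two nearby named features yields a one-sentence description a lay reader can understand.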
Abstract:
We report a unique case of a gene containing three homologous and contiguous repeat sequences, each of which, after excision, cloning, and expression in Escherichia coli, is shown to code for a peptide catalyzing the same reaction as the native protein, Gonyaulax polyedra luciferase (Mr = 137). This enzyme, which catalyzes the light-emitting oxidation of a linear tetrapyrrole (dinoflagellate luciferin), exhibits no sequence similarities to other luciferases in databases. Sequence analysis also reveals an unusual evolutionary feature of this gene: synonymous substitutions are strongly constrained in the central regions of each of the repeated coding sequences.
Abstract:
Monoclonal antibodies raised against axonemal proteins of sea urchin spermatozoa have been used to study regulatory mechanisms involved in flagellar motility. Here, we report that one of these antibodies, monoclonal antibody D-316, has an unusual perturbing effect on the motility of sea urchin sperm models; it does not affect the beat frequency, the amplitude of beating or the percentage of motile sperm models, but instead promotes a marked transformation of the flagellar beating pattern, which changes from a two-dimensional to a three-dimensional type of movement. On immunoblots of axonemal proteins separated by SDS-PAGE, D-316 recognized a single polypeptide of 90 kDa. This protein was purified following its extraction by exposure of axonemes to a brief heat treatment at 40°C. The protein copurified and coimmunoprecipitated with proteins of 43 and 34 kDa, suggesting that it exists as a complex in its native form. Using D-316 as a probe, a full-length cDNA clone encoding the 90-kDa protein was obtained from a sea urchin cDNA library. The sequence predicts a highly acidic (pI = 4.0) protein of 552 amino acids with a mass of 62,720 Da (p63). Comparison with protein sequences in databases indicated that the protein is related to radial spoke proteins 4 and 6 (RSP4 and RSP6) of Chlamydomonas reinhardtii, which share 37% and 25% similarity, respectively, with p63. However, the sea urchin protein possesses structural features distinct from RSP4 and RSP6, such as the presence of three major acidic stretches, 34, 22, and 14 amino acids long, which contain 25, 17, and 12 aspartate and glutamate residues, respectively, and are predicted to form α-helical coiled-coil secondary structures. These results suggest a major role for p63 in the maintenance of a planar form of sperm flagellar beating and provide new tools to study the function of radial spoke heads in more evolved species.
Abstract:
For nearly 200 years since their discovery in 1756, geologists considered the zeolite minerals to occur as fairly large crystals in the vugs and cavities of basalts and other traprock formations. Here, they were prized by mineral collectors, but their low abundance and polymineralic nature defied commercial exploitation. As the synthetic zeolite (molecular sieve) business began to take hold in the late 1950s, huge beds of zeolite-rich sediments, formed by the alteration of volcanic ash (glass) in lake and marine waters, were discovered in the western United States and elsewhere in the world. These beds were found to contain as much as 95% of a single zeolite; they were generally flat-lying and easily mined by surface methods. The properties of these low-cost natural materials mimicked those of many of their synthetic counterparts, and considerable effort has been made since that time to develop applications for them based on their unique adsorption, cation-exchange, dehydration–rehydration, and catalytic properties. Natural zeolites (i.e., those found in volcanogenic sedimentary rocks) have been and are being used as building stone, as lightweight aggregate and pozzolans in cements and concretes, as filler in paper, in the take-up of Cs and Sr from nuclear waste and fallout, as soil amendments in agronomy and horticulture, in the removal of ammonia from municipal, industrial, and agricultural waste and drinking waters, as energy exchangers in solar refrigerators, as dietary supplements in animal diets, as consumer deodorizers, in pet litters, in taking up ammonia from animal manures, and as ammonia filters in kidney-dialysis units. From their use in construction during Roman times, to their role as hydroponic (zeoponic) substrate for growing plants on space missions, to their recent success in the healing of cuts and wounds, natural zeolites are now considered to be full-fledged mineral commodities, the use of which promises to expand even more in the future.
Abstract:
Under present conditions of global competition, rapid technological advance and resource scarcity, innovation has become one of the most important strategic approaches an organization can exploit. In this context, a firm's innovation capability, understood as its capacity to engage in the introduction of new processes, products or ideas, is recognized as one of the main sources of sustainable growth, effectiveness and even survival for organizations. However, only a few firms have understood in practice what is necessary to innovate successfully, and most see innovation as a major challenge. The reality is no different for Brazilian firms, and in particular for small and medium-sized enterprises (SMEs). Studies indicate that the SME group in particular generally shows an even greater deficit in innovation capability. In response to the challenge of innovating, a broad literature has emerged on various aspects of innovation. Nevertheless, there are still few conclusive results or comprehensive models in innovation research, given the complexity of a multifaceted phenomenon driven by numerous factors. In addition, there is a gap between what is known in the general innovation literature and the literature on innovation in SMEs. Given the relevance of innovation capability and the slow advance of its understanding in the context of small and medium-sized firms, whose difficulties in innovating can still be observed, this study set out to identify the determinants of SMEs' innovation capability in order to build a model of high innovation capability for this group of firms. The stated objective was addressed by means of a quantitative method involving binary logistic regression analysis to examine, from the perspective of SMEs, the 15 determinants of innovation capability identified in the literature review.
To apply the logistic regression technique, the categorical dependent variable was transformed into a binary one, with group 0 labelled unremarkable innovation capability and group 1 defined as high innovation capability. The total sample was then divided into two subsamples, one for analysis containing 60% of the firms and the other for validation (holdout) with the remaining 40% of cases. The overall fit of the model was assessed using the pseudo-R2 (McFadden), chi-square (Hosmer and Lemeshow) and success-rate (classification matrix) measures. Once this assessment confirmed the adequacy of the overall fit, the coefficients of the variables included in the final model were analysed for significance level, direction and magnitude. Finally, the final logistic model was validated through the success rate of the validation sample. The logistic regression analysis showed that 4 variables had a positive and significant correlation with SMEs' innovation capability and therefore differentiate firms with high innovation capability from firms with unremarkable innovation capability. Based on this finding, the final model of high innovation capability for SMEs was created, composed of 4 determinants: external knowledge base (external), project management capability (internal), internal knowledge base (internal) and strategy (internal).
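The fit-then-holdout procedure described above can be sketched with a plain gradient-descent logistic regression and a classification-matrix success rate. This is a minimal sketch with toy one-feature data; the real study used 15 determinants, a 60/40 split and additional fit measures (McFadden pseudo-R2, Hosmer-Lemeshow).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Per-sample gradient-descent logistic regression (weights + intercept)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def hit_rate(w, b, X, y):
    """Success rate (classification matrix) on a holdout sample."""
    preds = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5
             else 0 for xi in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)
```

Fitting on an analysis subsample and scoring the holdout subsample separately, as the study does, guards against an optimistic in-sample success rate.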