16 results for CALIS


Relevance:

10.00%

Publisher:

Abstract:

Although oxide ceramics have been widely investigated for their biocompatibility, non-oxide ceramics such as SiAlON and SiC are yet to be explored in detail. This lack of understanding of their biocompatibility restricts the use of these ceramics in clinical trials. It is therefore essential to carry out a proper and thorough study to assess cell adhesion, cytocompatibility and cell viability on non-oxide ceramics for potential applications. From this perspective, the present work reports the cytocompatibility of gas-pressure-sintered SiAlON monoliths and SiAlON-SiC composites with varying amounts of SiC, using connective tissue cells (L929) and bone cells (Saos-2). Quantification of cell viability using the MTT assay reveals a non-cytotoxic response. Cell viability was found to be cell-type dependent. An attempt has been made to discuss the cytocompatibility of the developed composites in light of the SiC content and the type of sintering additives. (C) 2011 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Comparing China's three major interlibrary loan systems (CSDL-ILL, NSTL and CALIS) in terms of their general profiles, service models, search platforms and modes of service, this paper analyzes the strengths and weaknesses of the three systems and, from a practical standpoint, offers an outlook on resource sharing in China.

Relevance:

10.00%

Publisher:

Abstract:

An elegant way to prepare catalytically active microreactors is by applying a coating of zeolite crystals onto a metal microchannel structure. In this study the hydrothermal formation of ZSM-5 zeolitic coatings on AISI 316 stainless steel plates with a microchannel structure has been investigated at different synthesis mixture compositions. The procedures of coating and thermal treatment have also been optimized. Obtaining a uniform thickness of the coating within 0.5 mm wide microchannels requires a careful control of various synthesis variables. The role of these factors and the problems in the synthesis of these zeolitic coatings are discussed. In general, the synthesis is most sensitive to the H2O/Si ratio as well as to the orientation of the plates with respect to the gravity vector. Ratios of H2O/Si = 130 and Si/template = 13 were found to be optimal for the formation of a zeolitic film with a thickness of one crystal at a temperature of 130 °C and a synthesis time of about 35 h. At such conditions, ZSM-5 crystals were formed with a typical size of 1.5 μm × 1.5 μm × 1.0 μm and a very narrow (within 0.2 μm) crystal size distribution. The prepared samples proved to be active in the selective catalytic reduction (SCR) of NO with ammonia. The activity tests have been carried out in a plate-type microreactor. The microreactor shows no mass transfer limitations and a larger SCR reaction rate is observed in comparison with pelletized Ce-ZSM-5 catalysts. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the mobile cellular service in Brazil underwent significant changes in terms of technological evolution and of the products and services offered to customers. The environment for providing the service evolved from a state monopoly to a regime of private regional duopoly. The introduction of competition brought benefits to customers, among them the reduction of acquisition prices and usage tariffs and a larger supply of cellular lines, previously scarce in the market. The introduction of prepaid service in 1999 was a milestone in the history of the mobile cellular service in Brazil. The product, suited to users who want to control their spending on cellular telephony, gave economic classes C and D access to the service. Prepaid service brought advantages and disadvantages for both customers and operators. One of its limitations at launch was the impossibility of using the service outside the operator's own coverage area (roaming). The technical problem that prevented offering roaming for terminated calls was solved, and operators adopted different strategies in response. Some offered the service to all prepaid customers indiscriminately; others segmented the prepaid market, creating new products with the roaming feature while keeping the products already on the market without it. Americel, an operator of the mobile cellular service in the Center-West region and part of the North region, has two prepaid products: the first, called Legal, does not allow roaming of terminated calls. Legal Pacas was later created, offering, among other new features, roaming of terminated calls. The two products also differ in other characteristics and have their own tariff plans. The objective of this research is to evaluate how Americel's prepaid customers perceive the terminated-call roaming feature: how important the service is to the customer, how the customer rates the service provided by the company, and how this feature is classified (basic, performance, or delight). Based on this understanding of customer perception of roaming, the marketing strategy of segmenting the prepaid market by means of this feature is then assessed.

The methodology relies on customer satisfaction measurement tools. Americel conducts a monthly satisfaction survey with a sample of 400 customers interviewed by telephone; the questionnaire contains 50 questions on various aspects of service provision. Previous results of this survey point to coverage as the characteristic of the service that customers consider most important and, at the same time, one they rate poorly. The survey also reveals a source of confusion: when asked where they need coverage, customers indicate regions where the company does not operate and therefore could not offer coverage, only roaming. For part of the customers, roaming and coverage are the same attribute. Questions related to roaming were added to the standard questionnaire, designed to assess how often customers travel, the importance they attribute to roaming, their evaluation of the roaming service, and their overall evaluation of Americel.
The results show that, although prepaid users do not travel frequently, they consider the roaming feature important or very important. Their evaluation of the service is poor, which implies the need to prioritize actions to improve the roaming service for prepaid customers. This can be done by extending the offer of roaming to all prepaid customers. The penalty-reward analysis indicates that roaming is a performance feature of the mobile cellular service: its absence causes customer dissatisfaction, while its presence increases satisfaction. This again confirms the need to offer the feature to the entire prepaid customer base. The research thus leads to the conclusion that the marketing strategy of segmenting the prepaid market by means of the roaming feature ultimately produces customer dissatisfaction, since, in the customers' perception, roaming is an important feature of the mobile cellular service that should be offered to the entire customer base. The service characteristics used for segmentation should instead be those that, when absent, do not cause dissatisfaction but, when present, delight the customer.
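
As a rough illustration of the penalty-reward analysis mentioned above (not the thesis's actual data or procedure), one common formulation regresses overall satisfaction on two dummy variables that flag low and high ratings of the attribute; sizeable coefficients on both dummies indicate a performance attribute. A minimal sketch with made-up numbers and hypothetical thresholds:

import numpy as np

# Hypothetical penalty-reward contrast analysis: overall satisfaction is
# regressed on dummy variables marking respondents who rated the attribute
# (here, roaming) low or high. All data and thresholds below are made up.
overall = np.array([3, 4, 8, 9, 2, 7, 9, 3], dtype=float)   # overall satisfaction
attr    = np.array([2, 3, 9, 9, 1, 7, 10, 2], dtype=float)  # roaming rating

low  = (attr <= 3).astype(float)    # dummy: poor attribute performance
high = (attr >= 8).astype(float)    # dummy: excellent attribute performance
X = np.column_stack([np.ones_like(low), low, high])

coef, *_ = np.linalg.lstsq(X, overall, rcond=None)
penalty, reward = coef[1], coef[2]
print(f"penalty={penalty:.2f}, reward={reward:.2f}")
# Both contrasts clearly different from zero: absence hurts and presence helps,
# which is what characterizes a "performance" attribute such as roaming above.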

Relevance:

10.00%

Publisher:

Abstract:

The objective of this study was to verify the effect of load selection and of the model used on the determination of critical power (CP) on the arm ergometer. Eight apparently healthy male volunteers who exercised regularly took part in the study. The subjects performed four constant-load tests, maintained until voluntary exhaustion, on a UBE 2462-Cybex arm ergometer. The loads were individually selected to induce fatigue within 1 to 15 minutes. For each subject, CP was determined using two linear models: power-1/time and work-time. In each model, all power outputs (condition 1), the three highest (condition 2), and the three lowest (condition 3) were used. The CP values obtained with the power-1/time and work-time models for condition 3 (177.5 ± 29.5 and 173.9 ± 33.3, respectively) were significantly lower than those for condition 2 (190.5 ± 23.2 and 183.4 ± 22.3, respectively), with no differences between either of these and condition 1 (184.2 ± 25.4 and 176.4 ± 28.8, respectively). The CP values determined with the power-1/time model for conditions 1 and 2 were significantly higher than those determined with the work-time model, with no difference for condition 3. It can be concluded that the selected loads and the model used affect the CP determined on the arm ergometer, which may in turn affect the time to exhaustion during submaximal exercise performed at loads relative to this index.
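
For reference, the two linear models mentioned above take the standard forms W = CP·t + W' (work-time) and P = CP + W'·(1/t) (power-1/time). A minimal sketch of both fits, using hypothetical power and time-to-exhaustion data rather than the study's values:

import numpy as np

# Hypothetical example data: constant power outputs (W) and times to exhaustion (s).
power = np.array([250.0, 225.0, 200.0, 185.0])
time  = np.array([90.0, 150.0, 300.0, 600.0])

# Work-time model: total work = CP * t + W'  (slope = CP, intercept = W').
work = power * time
cp_wt, w_prime_wt = np.polyfit(time, work, 1)

# Power-1/time model: P = CP + W' * (1/t)  (intercept = CP, slope = W').
w_prime_pt, cp_pt = np.polyfit(1.0 / time, power, 1)

print(f"work-time model: CP = {cp_wt:.1f} W, W' = {w_prime_wt:.0f} J")
print(f"power-1/t model: CP = {cp_pt:.1f} W, W' = {w_prime_pt:.0f} J")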

Relevance:

10.00%

Publisher:

Abstract:

We present a static analysis that infers both upper and lower bounds on the usage that a logic program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) resource usage functions for all predicates in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some preliminary experiments with some concrete resources such as execution steps, bytes sent or received by an application, number of files left open, number of accesses to a database, number of calls to a procedure, number of asserts/retracts, etc. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
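
As a toy illustration of the idea (not the CiaoPP implementation or its assertion language), a user-defined resource assigns a basic cost to each elementary operation, and the analysis composes those costs into bounds expressed as functions of input size. A minimal Python sketch with hypothetical resources and costs:

# Toy illustration: user-defined resources assign a basic cost to elementary
# operations; the analysis composes them into bounds as functions of input size.
# Resource names and cost values below are hypothetical.
RESOURCES = {
    "steps":      {"unify": 1, "send": 0},
    "bytes_sent": {"unify": 0, "send": 8},
}

def traverse_cost(n, resource):
    """Bound on resource usage of a predicate performing one unification and
    one send per list element (upper and lower bounds coincide here)."""
    costs = RESOURCES[resource]
    per_element = costs["unify"] + costs["send"]
    return per_element * n          # bound as a function of the input size n

print(traverse_cost(100, "bytes_sent"))   # 800
print(traverse_cost(100, "steps"))        # 100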

Relevance:

10.00%

Publisher:

Abstract:

The relationship between abstract interpretation and partial evaluation has received considerable attention and (partial) integrations have been proposed starting from both the partial evaluation and abstract interpretation perspectives. In this work we present what we argue is the first generic algorithm for efficient and precise integration of abstract interpretation and partial evaluation from an abstract interpretation perspective. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial evaluation of logic programs, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Also, our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of these parameters correspond to existing algorithms for program analysis and specialization. Our approach efficiently computes strictly more precise results than those achievable by each of the individual techniques. The algorithm is one of the key components of CiaoPP, the analysis and specialization system of the Ciao compiler.
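
To give a flavor of one ingredient named above, abstract executability replaces a body call by true (or prunes the whole clause) when the abstract call description already entails its success (or its failure). A heavily simplified Python sketch, using hypothetical property sets instead of a real abstract domain:

# Tiny illustration of "abstract executability" (illustrative only, not CiaoPP code):
# a body call known to succeed is dropped, a call known to fail prunes the clause.
def abstractly_execute(body, known_props):
    specialized = []
    for call in body:
        if call in known_props["entailed"]:
            continue                      # call is known to succeed: replace by true
        if call in known_props["incompatible"]:
            return None                   # call is known to fail: prune the clause
        specialized.append(call)          # otherwise keep the call for runtime
    return specialized

props = {"entailed": {"integer(X)"}, "incompatible": {"atom(X)"}}
print(abstractly_execute(["integer(X)", "X > 0"], props))   # ['X > 0']
print(abstractly_execute(["atom(X)", "foo(X)"], props))     # None (clause pruned)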

Relevance:

10.00%

Publisher:

Abstract:

Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions and replaces all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance. It includes both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization, which leads to interesting conclusions.
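
The minimization step can be pictured, in heavily simplified form, as grouping the generated versions under an equivalence key (standing in for a characteristic tree) and redirecting every occurrence to a single representative per group. A toy Python sketch with hypothetical version names:

from collections import defaultdict

# Hypothetical specialized versions: name -> (original predicate, equivalence key).
# The key stands in for a characteristic tree or any other equivalence criterion.
versions = {
    "app_1": ("append/3", "key_A"),
    "app_2": ("append/3", "key_A"),   # equivalent to app_1: can be collapsed
    "app_3": ("append/3", "key_B"),
    "rev_1": ("reverse/2", "key_C"),
}

# Group versions of the same predicate that share an equivalence key.
groups = defaultdict(list)
for name, (pred, key) in versions.items():
    groups[(pred, key)].append(name)

# Pick one representative per group and build a renaming for all call sites.
renaming = {}
for members in groups.values():
    representative = members[0]
    for member in members:
        renaming[member] = representative

print(renaming)   # {'app_1': 'app_1', 'app_2': 'app_1', 'app_3': 'app_3', 'rev_1': 'rev_1'}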

Relevance:

10.00%

Publisher:

Abstract:

Non-failure analysis aims at inferring that predicate calls in a program will never fail. This type of information has many applications in functional/logic programming. It is essential for determining lower bounds on the computational cost of calls, useful in the context of program parallelization, instrumental in partial evaluation and other program transformations, and has also been used in query optimization. In this paper, we re-cast the non-failure analysis proposed by Debray et al. as an abstract interpretation, which not only allows us to investigate it from a standard and well understood theoretical framework, but also has several practical advantages. It allows us to incorporate non-failure analysis into a standard, generic abstract interpretation engine. The analysis thus benefits from the fixpoint propagation algorithm, which leads to improved information propagation. Also, the analysis takes advantage of the multi-variance of the generic engine, so that it is now able to infer separate non-failure information for different call patterns. Moreover, the implementation is simpler, and allows us to perform non-failure and covering analyses alongside other analyses, such as those for modes and types, in the same framework. Finally, besides the precision improvements and the additional simplicity, our implementation (in the Ciao/CiaoPP multiparadigm programming system) also shows better efficiency.
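
A drastically simplified sketch of the fixpoint propagation involved: a predicate is marked non-failing once some clause covers its calls and every call in that clause's body is itself non-failing. The real analysis works over abstract call patterns and actual coverage checks; the toy Python version below uses hypothetical flags instead:

# Toy fixpoint propagation of a non-failure flag (heavily simplified).
# Each predicate maps to its clauses; each clause is (covers_all_inputs, body_calls).
program = {
    "main/0": [(True, ["p/1", "q/1"])],
    "p/1":    [(True, [])],                 # a fact that matches any input
    "q/1":    [(False, []), (False, [])],   # tests that together may not cover
}

non_failing = {pred: False for pred in program}   # pessimistic start

changed = True
while changed:                                    # least-fixpoint iteration
    changed = False
    for pred, clauses in program.items():
        nf = any(covers and all(non_failing.get(c, False) for c in body)
                 for covers, body in clauses)
        if nf and not non_failing[pred]:
            non_failing[pred] = True
            changed = True

print(non_failing)   # {'main/0': False, 'p/1': True, 'q/1': False}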

Relevance:

10.00%

Publisher:

Abstract:

Traditional logic programming languages, such as Prolog, use a fixed left-to-right atom scheduling rule. Recent logic programming languages, however, usually provide more flexible scheduling in which computation generally proceeds left-to-right but in which some calls are dynamically "delayed" until their arguments are sufficiently instantiated to allow the call to run efficiently. Such dynamic scheduling has a significant cost. We give a framework for the global analysis of logic programming languages with dynamic scheduling and show that program analysis based on this framework supports optimizations which remove much of the overhead of dynamic scheduling.
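
The delay mechanism described above can be caricatured as follows: goals are taken left to right, but a goal whose watched arguments are still unbound is set aside and retried once a later goal instantiates them. A minimal Python sketch, with no real unification or backtracking and purely illustrative names:

# Toy dynamic scheduling: run goals left to right, delaying those whose
# required variables are still unbound and waking them after new bindings.
from collections import deque

bindings = {}                       # variable name -> value

def ground(args):
    return all(not isinstance(a, str) or a in bindings for a in args)

def run(goals):
    pending = deque(goals)
    delayed = []
    while pending:
        name, args, action = pending.popleft()
        if ground(args):
            action()                                 # run the goal
            pending.extendleft(reversed(delayed))    # retry woken goals first
            delayed.clear()
        else:
            delayed.append((name, args, action))     # delay until instantiated
    return delayed                                   # goals that never woke up

goals = [
    ("X > 2", ["X"], lambda: print("test:", bindings["X"] > 2)),  # delayed at first
    ("X = 5", [],    lambda: bindings.update(X=5)),               # binds X, wakes test
]
print("floundered:", run(goals))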

Relevance:

10.00%

Publisher:

Abstract:

The control part of the execution of a constraint logic program can be conceptually shown as a search tree, where nodes correspond to calls and branches represent conjunctions and disjunctions. This tree represents the search space traversed by the program, and also has a direct relationship with the amount of work performed by the program. The nodes of the tree can be used to display information regarding the state and origin of instantiation of the variables involved in each call. This depiction can also be used for the enumeration process. These are the features implemented in APT, a tool which runs constraint logic programs while depicting a (modified) search tree, keeping at the same time information about the state of the variables at every moment in the execution. This information can be used to replay the execution at will, both forwards and backwards in time. These views can be abstracted when the size of the execution requires it. The search-tree view is used as a framework onto which constraint-level visualizations (such as those presented in the following chapter) can be attached.
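
The replay facility described above essentially requires recording, for each node visited, the call and a snapshot of the variable instantiations, and then moving a cursor over that trace. A minimal sketch of such a recorder, with hypothetical calls and states (not APT's actual data structures):

from dataclasses import dataclass

@dataclass
class Event:
    call: str          # the call made at this node of the search tree
    var_state: dict    # snapshot of variable instantiations at that moment

# Hypothetical recorded execution of a small constraint program.
trace = [
    Event("main(X,Y)", {"X": "free", "Y": "free"}),
    Event("gen(X)",    {"X": "bound(3)", "Y": "free"}),
    Event("test(X,Y)", {"X": "bound(3)", "Y": "bound(7)"}),
]

class Replayer:
    """Step through a recorded execution forwards and backwards in time."""
    def __init__(self, trace):
        self.trace, self.pos = trace, 0

    def forward(self):
        self.pos = min(self.pos + 1, len(self.trace) - 1)
        return self.trace[self.pos]

    def backward(self):
        self.pos = max(self.pos - 1, 0)
        return self.trace[self.pos]

r = Replayer(trace)
print(r.forward())    # gen(X), X now bound
print(r.forward())    # test(X,Y), both bound
print(r.backward())   # back to gen(X)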

Relevance:

10.00%

Publisher:

Abstract:

It is very often the case that programs require passing, maintaining, and updating some notion of state. Prolog programs often implement such stateful computations by carrying this state in predicate arguments (or, alternatively, in the internal database). This often causes code obfuscation, complicates code reuse, introduces dependencies on the data model, and is prone to incorrect propagation of the state information among predicate calls. To partly solve these problems, we introduce contexts as a consistent mechanism for specifying implicit arguments and their threading through clause goals. We propose a notation and an interpretation for contexts, ranging from single goals to complete programs, give an intuitive semantics, and describe a translation into standard Prolog. We also discuss a particular lightweight implementation in Ciao Prolog, and we show the usefulness of our proposals on a series of examples and applications, including code directly using contexts, DCGs, extended DCGs, logical loops and other custom control structures.
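
The proposal itself is a notation and source translation for Prolog, but the underlying idea of hiding state threading can be illustrated in Python: a small combinator passes a (value, state) pair through a sequence of steps so the plumbing does not appear at every call site, much as the translation adds hidden state arguments to goals. A toy sketch with illustrative names only:

# Toy analogue of implicit state threading (illustrative, not the paper's mechanism).
def thread(state, *steps):
    """Run each step as step(value, state) -> (value, state), hiding the plumbing."""
    value = None
    for step in steps:
        value, state = step(value, state)
    return value, state

def count(_, state):          # each step receives and returns the implicit state
    return None, state + 1

def emit(_, state):
    return f"seen {state} items", state

result, final_state = thread(0, count, count, count, emit)
print(result, final_state)    # seen 3 items 3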

Relevance:

10.00%

Publisher:

Abstract:

The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.