890 results for System verification and analysis
Abstract:
The first study was designed to assess whether involvement of the peripheral nervous system (PNS) belongs to the phenotypic spectrum of sporadic Creutzfeldt-Jakob disease (sCJD). To this aim, we reviewed the medical records of 117 sCJDVV2, 65 sCJDMV2K, and 121 sCJDMM(V)1 subjects for symptoms/signs and neurophysiological data. We looked for the presence of PrPSc in postmortem PNS samples from 14 subjects by western blotting and real-time quaking-induced conversion (RT-QuIC) assay. Seventy-five (41.2%) VV2-MV2K patients, but only 11 (9.1%) MM(V)1 patients, had symptoms/signs suggestive of PNS involvement, and neuropathy was documented in half of the VV2-MV2K patients tested. RT-QuIC was positive in all PNS samples, whereas western blotting detected PrPSc in the sciatic nerve of only one VV2 and one MV2K subject. These results support the conclusion that peripheral neuropathy, likely related to PrPSc deposition, belongs to the phenotypic spectrum of sCJDMV2K and sCJDVV2, the two variants linked to the V2 strain. The second study aimed to characterize the genetic/molecular determinants of phenotypic variability in genetic CJD (gCJD). To this purpose, we compared 157 cases of gCJD with 300 cases of sCJD, analyzing demographic aspects, neurological symptoms/signs, histopathologic features, and the biochemical characteristics of PrPSc. The results strongly indicated that the clinicopathological phenotypes of gCJD largely overlap with those of sCJD and that the genotype at codon 129 in cis with the mutation (i.e. the haplotype) contributes more than the mutation itself to the disease phenotype. Some mutations, however, cause phenotypic variations, including haplotype-specific patterns of PrPSc deposition such as the "dense" synaptic pattern (E200K-129M), intraneuronal dots (E200K-129V), and linear stripes perpendicular to the surface in the molecular layer of the cerebellum (OPRIs-129M). Overall, these results suggest that in gCJD PRNP mutations do not cause the emergence of novel prion strains, but rather confer increased susceptibility to the disease in conjunction with "minor" clinicopathological variations.
Abstract:
Analytics is the technology concerned with manipulating data to produce information capable of changing the world we live in every day. Over the last decade, analytics has been widely used to cluster people's behaviour and predict their preferences: which items to buy, which music to listen to, which movies to watch, and even how to vote. The most advanced companies have succeeded in steering people's behaviour using analytics. Despite this evidence of the power of analytics, it is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems and production plants). This PhD thesis explores the fourth research paradigm (i.e. the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of these data is ensured by mathematical demonstrations inspired by factory physics theory. The availability, quantity and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in supply chain systems research.
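To make the idea of ontology-driven data structures for data collection more concrete, the following minimal Python sketch shows how supply chain entities and a movement record might be expressed as typed structures, together with a simple balance-style consistency check. The entity names and fields are illustrative assumptions and are not taken from the thesis.

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative entities of a supply chain ontology (hypothetical names and fields).
    @dataclass
    class StorageSystem:
        system_id: str
        location: str
        capacity_units: int

    @dataclass
    class Sku:
        sku_id: str
        description: str
        unit_of_measure: str

    @dataclass
    class MovementRecord:
        # One row of the data-collection structure: a quantity of a SKU moved
        # into or out of a storage system at a given time.
        timestamp: datetime
        system: StorageSystem
        sku: Sku
        quantity: float  # positive = inbound, negative = outbound

    # A simple consistency check in the spirit of factory-physics balance equations:
    # the net recorded flow for a storage system should stay within its capacity.
    def net_stock(records: list[MovementRecord], system_id: str) -> float:
        return sum(r.quantity for r in records if r.system.system_id == system_id)

Collecting raw events in typed structures of this kind is what makes the checks on data availability, quantity and quality described above possible.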
Abstract:
Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the nuclear data library production chain is extremely complex and made up of different steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here has focused on the development of a new Python-based software tool called JADE, whose objective is to significantly increase the level of automation and standardization of these procedures, in order to reduce the time between new library releases while, at the same time, increasing their quality. After an introduction to nuclear fusion (the field where most of the V&V activity has been concentrated so far) and to the simulation of particle and radiation transport, the motivations leading to the development of JADE are discussed. Subsequently, the general architecture of the code and the implemented benchmarks (both experimental and computational) are described. The results from the main applications of JADE during the research years are then presented. Finally, after a discussion of the objectives reached by JADE, possible short-, mid- and long-term developments for the project are outlined.
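As an illustration of the kind of automation described above, and not of JADE's actual interface, a minimal Python sketch of an automated benchmark loop could look as follows: each candidate nuclear data library is run against a set of benchmarks and the results are compared with reference values, flagging deviations beyond a tolerance. All names (run_transport_simulation, libraries, benchmarks, reference) are hypothetical.

    # Hypothetical sketch of an automated V&V benchmark loop (not JADE's real API).

    TOLERANCE = 0.05  # maximum allowed relative deviation (5%)

    def run_transport_simulation(library: str, benchmark: str) -> float:
        """Placeholder: run the transport code with the given library and benchmark
        and return a scalar result (e.g. a reaction rate or flux tally)."""
        raise NotImplementedError

    def compare_libraries(libraries, benchmarks, reference):
        """Compare each library against reference values for every benchmark."""
        report = {}
        for lib in libraries:
            for bench in benchmarks:
                value = run_transport_simulation(lib, bench)
                ref = reference[bench]
                rel_dev = abs(value - ref) / ref
                report[(lib, bench)] = {
                    "value": value,
                    "reference": ref,
                    "relative_deviation": rel_dev,
                    "within_tolerance": rel_dev <= TOLERANCE,
                }
        return report

Standardizing such a loop is what allows the same benchmark suite to be re-run automatically whenever a new library release becomes available.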
Abstract:
This research aims to analyze the possibilities offered by the use of graphology in personnel selection processes intended to fill current job vacancies. The study is based on verifying the applicability, and the forms of use, of personnel selection instruments, mainly those developed with the objective of identifying the aptitudes and personality characteristics of the individual. In terms of applicability, and as a way of adapting to today's reality, graphology, besides responding to the new variables that global changes impose on individuals' behaviour and their relationship with companies, appears as an alternative and is occupying, every day, more space in the area of professional evaluation. This study offers reasons for considering graphological analysis as one more instrument to be used in personnel selection, within the possibilities and limitations identified when it is used for that purpose. Since the objective is to explain the reasons for using graphological analysis as an evaluation tool, exploratory and explanatory research approaches were adopted. This explanation emerges from the verification and analysis of data obtained from the bibliographical and field research, which supports its reporting, development and applicability.
Abstract:
This work discusses issues related to the verification and analysis of the logistic service level provided by the State in the execution of forensic civil engineering examinations. Factors such as the equipment and means of transport used, the qualification of the professionals involved, the standardization of the procedures adopted, and the issuance of expert reports were considered. The objective is to obtain guidelines for the activity under study by identifying possible opportunities for improvement in the management of this area of Criminalistics, considering the logistic performance components related to the key factors: inventory, transport, facilities and information. The research strategy used was the case study, employing statistical reports and semi-structured interviews with the managers of the agency responsible for forensic activity in Pará. As for the results obtained, analysis of the content of the interviews showed that the working hypotheses correlated with some of the logistic guidelines developed, such as the increase in the efficiency of the logistic service level in the activity studied through the adoption of standardized operating procedures.
Abstract:
In any welding process, it is of the utmost importance that welders and those responsible for quality in the area understand the process and the variables involved in it, in order to achieve maximum welding efficiency both in terms of quality and of final cost, without ever neglecting, of course, the process conditions to which the welder or welding operator is subjected. Therefore, we sought to understand the variables relevant to the welding process and to develop a WPS (Welding Procedure Specification) according to ASME Section IX for the flux-cored arc welding process (FCAW, per the AWS specification) with shielding gas, as an automated process, for ASTM A131 base material, 5/16 in. thick, using a single-pass weld, under conditions with pre- and post-heating, together with destructive testing for verification and analysis of the resulting weld bead.
Abstract:
Methods for designing information systems for large organizations are considered in this paper. The structural and object-oriented approaches are compared. For the practical realization of automated dataflow systems, a combined method for system development and analysis is proposed.
Abstract:
The speed with which data has moved from being scarce, expensive and valuable (and therefore justifying detailed and careful verification and analysis) to a situation where streams of detailed data are almost too large to handle has caused a series of shifts to occur. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. Until recently, Big Data applications were driven by the capacity to harness massive quantities of existing data. Now real-time data flows are rising swiftly, becoming more invasive and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often remarkably intrusive personal data need to be resolved, and rapidly, but resolution is likely to encounter rising resistance from industrial and commercial bodies who see this data flow as 'theirs'. There have been many changes in ICT that have led to stresses in resolving the conflicts between IP exploiters and their customers, but this one is of a different scale, owing to the wide potential for individual customisation of pricing and identification and to the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift in the current confusion over who owns what data towards alignments that are in better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits that are clearly available to all if these issues can be positively resolved. Customers need to be able to make these data flows contestable in some form. These Big Data flows are only going to grow and become ever more instructive. A better balance is necessary. For the first time these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown. Yet the data gathered is not available to the subjects. This is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. This Web extra is the audio part of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences" and discusses these issues.
Abstract:
As usage metrics continue to attain an increasingly central role in library system assessment and analysis, librarians tasked with system selection, implementation, and support are driven to identify metric approaches that simultaneously require less technical complexity and offer greater levels of data granularity. Such approaches allow systems librarians to present evidence-based claims about platform usage behaviors while reducing the resources necessary to collect such information, thereby representing a novel approach to real-time user analysis as well as a dual benefit in active and preventative cost reduction. As part of the DSpace implementation for the MD SOAR initiative, the Consortial Library Application Support (CLAS) division has begun test implementation of the Google Tag Manager analytics system in an attempt to collect custom analytical dimensions to track author- and university-specific download behaviors. Building on the work of Conrad, CLAS seeks to demonstrate that the GTM approach to custom analytics provides granular, metadata-based usage statistics in an approach that will prove extensible to additional statistical gathering in the future. This poster will discuss the methodology used to develop these custom tag approaches, the benefits of using the GTM model, and the risks and benefits associated with further implementation.
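As a purely illustrative example of the kind of granular, metadata-based statistics that such custom dimensions make possible, the following Python sketch aggregates exported download events, each tagged with author and university dimensions, into per-author and per-university counts. The field names and export format are assumptions for illustration and are not part of the MD SOAR implementation or of the Google Tag Manager interface.

    import csv
    from collections import Counter

    def summarize_downloads(csv_path: str):
        """Aggregate per-download events carrying custom 'author' and 'university'
        dimensions into simple usage counts. Column names are illustrative."""
        by_author = Counter()
        by_university = Counter()
        with open(csv_path, newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh):
                by_author[row["author"]] += 1
                by_university[row["university"]] += 1
        return by_author, by_university

    # Example usage (assuming a hypothetical export file of download events):
    # authors, universities = summarize_downloads("downloads_export.csv")
    # print(authors.most_common(10))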
Abstract:
This paper presents the development of a knowledge-based system (KBS) prototype able to design natural gas cogeneration plants, demonstrating new features for this field. The design of such power plants represents a synthesis problem, subject to thermodynamic constraints that include the location and sizing of components. The project was developed in partnership with the major Brazilian gas and oil company, and involved interaction with an external consultant as well as an interdisciplinary team. The paper focuses on validation and lessons learned, concentrating on important aspects such as the generation of alternative configuration schemes, the breadth of each scheme description created by the system, and its module to support economic feasibility analysis.
Abstract:
We present a tutorial overview of Ciaopp, the Ciao system preprocessor. Ciao is a public-domain, next-generation logic programming system, which subsumes ISO-Prolog and is specifically designed to a) be highly extensible via libraries and b) support modular program analysis, debugging, and optimization. The latter tasks are performed in an integrated fashion by Ciaopp. Ciaopp uses modular, incremental abstract interpretation to infer properties of program predicates and literals, including types, variable instantiation properties (including modes), non-failure, determinacy, bounds on computational cost, bounds on sizes of terms in the program, etc. Using such analysis information, Ciaopp can find errors at compile-time in programs and/or perform partial verification. Ciaopp checks how programs call system libraries and also any assertions present in the program or in other modules used by the program. These assertions are also used to generate documentation automatically. Ciaopp also uses analysis information to perform program transformations and optimizations such as multiple abstract specialization, parallelization (including granularity control), and optimization of run-time tests for properties which cannot be checked completely at compile-time. We illustrate "hands-on" the use of Ciaopp in all these tasks. By design, Ciaopp is a generic tool, which can be easily tailored to perform these and other tasks for different LP and CLP dialects.