980 results for Testing aspect-oriented programs
Abstract:
Facility management (FM), from a service-oriented approach, addresses the functions and requirements of different services such as energy management, space planning and security. Different services require different information to meet the needs arising from each service. Object-based Building Information Modelling (BIM) provides limited support for FM services, though the technology is able to generate 3D models that semantically represent a facility's information dynamically over the lifecycle of a building. This paper presents a semiotics-inspired framework to extend BIM from a service-oriented perspective. The extended BIM, which specifies FM services and the information they require, will be able to express building service information in the right format for the right purposes. The service-oriented approach concerns the pragmatic aspect of building information beyond the semantic level: pragmatics defines and provides the context for the utilisation of building information. Semiotics theory is adopted in this paper to address pragmatic issues in the utilisation of BIM for FM services.
Abstract:
The widespread use of service-oriented architectures (SOAs) and Web services in commercial software requires the adoption of development techniques that ensure the quality of Web services. Testing techniques and tools address quality and play a critical role in accomplishing the quality of SOA-based systems. Existing techniques and tools for traditional systems are not appropriate for these new systems, making the development of Web services testing techniques and tools necessary. This article presents new testing techniques to automatically generate a set of test cases and data for Web services. The techniques presented here explore data perturbation of Web services messages with respect to data types, integrity and consistency. To support these techniques, a tool (GenAutoWS) was developed and applied to real problems. (C) 2010 Elsevier Inc. All rights reserved.
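The core idea of data perturbation, mutating message values according to their data types to produce test cases, can be sketched as follows. This is an illustrative sketch only, not the GenAutoWS tool; the field names and the specific mutation sets are invented.

```python
def perturb(value):
    """Generate type-based perturbations of one Web service message value.

    Illustrative mutation sets: boundary values for integers,
    emptiness/markup injection for strings, negation for booleans.
    """
    if isinstance(value, bool):          # check bool before int (bool is an int subtype)
        return [not value]
    if isinstance(value, int):
        return [0, -value, value + 1, value - 1, 2**31 - 1, -(2**31)]
    if isinstance(value, str):
        return ["", value * 2, value.upper(), "<invalid&xml>"]
    return [None]

def perturb_message(message):
    """Produce test messages by perturbing one field at a time."""
    cases = []
    for field, value in message.items():
        for mutant in perturb(value):
            case = dict(message)         # copy, then mutate a single field
            case[field] = mutant
            cases.append(case)
    return cases

# Hypothetical two-field request: one test case per single-field mutation.
request = {"customerId": 42, "name": "alice"}
tests = perturb_message(request)
```

Perturbing one field at a time keeps each generated case attributable to a single mutation, which simplifies diagnosing a failed Web service response.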
Abstract:
Nursing school graduates are under pressure to pass the NCLEX-RN exam on the first attempt, since New York State monitors the results and uses them to evaluate schools' nursing programs. Since the NCLEX-RN is a standardized test, we sought a method to make our students better test takers. The use of on-line computer adaptive testing has raised our students' standardized test scores at the end of the nursing course.
Abstract:
While the simulation of flood risks originating from the overtopping of river banks is well covered within continuously evaluated programs to improve flood protection measures, flash flooding is not. Flash floods are triggered by short, local thunderstorm cells with high precipitation intensities. Small catchments have short response times and flow paths, and convective thunder cells may result in potential flooding of endangered settlements. Assessing local flooding and flood pathways requires a detailed hydraulic simulation of the surface runoff. Hydrological models usually do not incorporate surface runoff at this level of detail; rather, empirical equations are applied for runoff detention. In turn, 2D hydrodynamic models usually do not allow distributed rainfall as input, nor do they implement any type of soil/surface interaction as hydrological models do. Considering several cases of local flash flooding during the last years, the issue emerged both for practical reasons and as a research topic: closing the model gap between distributed rainfall and distributed runoff formation. Therefore, a 2D hydrodynamic model solving the depth-averaged flow equations with a finite volume discretization was extended to accept direct rainfall, enabling simulation of the associated runoff formation. The model itself is used as the numerical engine; rainfall is introduced via the modification of water levels at fixed time intervals. The paper not only deals with the general application of the software, but intends to test the numerical stability and reliability of the simulation results. The tests are performed using different artificial as well as measured rainfall series as input. Key parameters of the simulation, such as losses, roughness or the time interval for water level manipulation, are tested regarding their impact on stability.
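The idea of introducing rainfall by manipulating water levels at fixed time intervals can be sketched as below. This is a minimal illustration assuming a spatially uniform net rainfall rate and a constant loss term; the function and parameter names are assumptions, not the extended model's API.

```python
import numpy as np

def apply_rainfall(h, intensity_mm_h, dt_s, loss_mm_h=0.0):
    """Add a net rainfall depth to the water depth of every grid cell.

    h              : 2D array of water depths [m]
    intensity_mm_h : rainfall intensity [mm/h]
    dt_s           : water-level update interval [s]
    loss_mm_h      : constant loss rate, e.g. infiltration [mm/h]
    """
    net_mm_h = max(intensity_mm_h - loss_mm_h, 0.0)   # losses cannot exceed rainfall
    depth_m = net_mm_h / 1000.0 * dt_s / 3600.0       # mm/h over dt_s -> metres
    return h + depth_m

# One hour of a 36 mm/h storm with 6 mm/h losses, applied in 10-minute intervals:
h = np.zeros((3, 3))
for _ in range(6):
    h = apply_rainfall(h, intensity_mm_h=36.0, dt_s=600.0, loss_mm_h=6.0)
# net 30 mm/h over one hour -> 0.03 m of water on every cell
```

Between such updates the hydrodynamic engine would route the added water according to the depth-averaged flow equations; the update interval `dt_s` is exactly the kind of key parameter whose impact on stability the paper tests.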
Abstract:
Health and wellness programs have been adopted by companies as a way to improve employee health, and many studies describe positive economic returns on the investments involved. However, more recent studies with better methodology have shown smaller returns. The objective of this study was to investigate whether characteristics of health and wellness programs act as predictors of hospital admission costs (in current Brazilian Reais) and of the proportion of employees taking sick leave, between April 2014 and May 2015, in a non-random sample of companies in Brazil, through a partnership with a healthcare 'big data' management company. A questionnaire on the characteristics of workplace health programs was answered by six large Brazilian companies. Data drawn from these six questionnaires (presence and age of a health program; its characteristics, namely the inclusion of screening activities, health education, links with other company programs, integration of the program into the company structure, and health-oriented work environments; and the adoption of financial incentives for employee adherence to the program), as well as individual data on the age, gender and health plan category of each employee, were used to build a database of more than 76,000 individuals. Through a multiple regression model with stepwise variable selection, employee age was positively associated, and the age of the health program and the employee's 'premium' health plan category were negatively associated, with hospital admission costs (as expected). Unexpectedly, the inclusion of screening programs and health education initiatives in the companies' health and wellness programs was identified as a significant positive predictor of hospital admission costs.
To avoid the erroneous inclusion of maternity leave, only sick-leave data from male patients were analyzed (data available for only two of the included companies, totaling 18,957 male patients). Analyzing these data with a Z test for comparing proportions, the company whose health program includes activities aimed at stopping harmful habits (such as smoking and drinking) and controlling diabetes and hypertension, and which adopts financial incentives for employee adherence to the program, had a lower proportion of employees on sick leave in the analyzed period, compared with the other company, which lacks these characteristics (also as expected). However, the company with the lower proportion of employees on sick leave was also the one that includes screening among the activities of its health program. Potential threats to the internal and external validity of these results are discussed, as well as possible explanations for the association of screening and health education programs with worse health indicators in this sample of companies. New, better-designed studies with larger, random samples are needed to validate these results and possibly improve their internal and external validity.
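The comparison of sick-leave proportions between the two companies uses a standard two-proportion Z test with a pooled standard error, which can be sketched as follows. The counts in the example are hypothetical, not the study's data.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions.

    x1, x2 : counts of employees on sick leave; n1, n2 : group sizes.
    Uses the pooled proportion under H0: p1 == p2.
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # pooled standard error
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))         # standard normal CDF at |z|
    p_value = 2 * (1 - phi)                         # two-sided
    return z, p_value

# Hypothetical counts (not the study's data):
z, p = two_proportion_z(900, 10000, 1000, 8957)
```

With samples of this order of magnitude, even a difference of roughly two percentage points yields a large |z| and a very small p-value, which is why proportion tests on workforce-sized groups are sensitive to modest effects.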
Abstract:
The following paper was conducted with the support of several entrepreneurs and startups from Brazil. The aim of the research was to find out what impact the Business Model Canvas (BMC) has on technology-oriented startups in Brazil. The first step of the study was to identify some general concepts of entrepreneurship, as well as the conditions and environment of the country. Afterwards, the focus was on defining different business model tools and concepts and comparing them to the BMC. After the literature review and meetings with several professionals in the area of entrepreneurship and startups, a questionnaire was formulated in order to conduct the qualitative study and identify the main impact of the tool. The questionnaire was answered by ten startups. In order to check the validity and credibility of the research outcomes, theory and investigator triangulation was used. As a result, the usage of the BMC could be evaluated by comparing the outcomes with the theory, which showed that Brazilian tech startups use Osterwalder's model for idea creation and for testing, validating and pivoting their business model. Interestingly, the research revealed that the entrepreneurs often use the tool not in the traditional way of printing it, but rather apply it as a thinking approach. Besides, the entrepreneurs focus mostly on developing a strong Value Proposition, Customer Segment and sustainable Revenue Streams, building the remaining building blocks afterwards. Moreover, the research showed that the startups also use other concepts, such as the Customer Development Process or the Build-Measure-Learn Feedback Loop. These methodologies are often applied together with the BMC and help to identify the most sustainable components of the business idea. Keywords: Business
Abstract:
PLCs (Programmable Logic Controllers) perform control operations, receiving information from the environment, processing it and modifying this same environment according to the results produced. They are commonly used in industry in several applications, from mass transport to the petroleum industry. As the complexity of these applications increases, and as many of them are safety critical, the necessity of ensuring that they are reliable arises. Testing and simulation are the de facto methods used in industry to do so, but they can leave flaws undiscovered. Formal methods can provide more confidence in an application's safety, since they permit its mathematical verification. We make use of the B Method, which has been successfully applied in the formal verification of industrial systems, is supported by several tools, and can handle decomposition, refinement and verification of correctness with respect to the specification. The method we developed and present in this work automatically generates B models from PLC programs and verifies them against safety constraints manually derived from the system requirements. The scope of our method is the PLC programming languages presented in the IEC 61131-3 standard, although we are also able to verify programs not fully compliant with the standard. Our approach aims to ease the integration of formal methods in industry by reducing the effort of performing formal verification on PLCs.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Validation of parentage and horse breed registries through DNA typing relies on estimates of random match probabilities with DNA profiles generated from multiple polymorphic loci. Of the twenty-seven microsatellite loci recommended by the International Society for Animal Genetics for parentage testing in Thoroughbred horses, eleven are located on five chromosomes. An important aspect in determining combined exclusion probabilities is the ascertainment of the genetic linkage status of syntenic markers, which may affect reliable use of the product rule in estimating random match probabilities. In principle, linked markers can be in gametic phase disequilibrium (GD). We aimed at determining the extent, by frequency and strength, of GD between the HTG4 and HMS3 multiallelic loci, syntenic on chromosome 9. We typed the qualified offspring (n1 = 27; n2 = 14) of two Quarter Bred stallions (registered by the Brazilian Association of Quarter Horse Breeders) and 121 unrelated horses from the same breed. In the 41 informative meioses analyzed, the frequency of recombination between the HTG4 and HMS3 loci was 0.27. Consistent with genetic map distances, this recombination rate does not fit the theoretical distribution for independently segregated markers. We estimated sign-based D' coefficients as a measure of GD, and showed that the HTG4 and HMS3 loci are in significant, yet partial and weak, disequilibrium, with two allele pairs involved (HTG4*M/HMS3*P, D'(+) = 0.6274; and HTG4*K/HMS3*P, D'(-) = -0.6096). These results warn against the inadequate inclusion of genetically linked markers in the calculation of combined power of discrimination for Thoroughbred parentage validation.
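The signed D' coefficient reported above can be computed from allele and haplotype frequencies with the standard normalization of the disequilibrium coefficient D. A minimal sketch (the frequencies in the example are illustrative, not the paper's data):

```python
def d_prime(pA, pB, pAB):
    """Signed D' between allele A of one locus and allele B of a syntenic locus.

    pA, pB : marginal allele frequencies
    pAB    : frequency of the A-B haplotype
    D = pAB - pA*pB; D' = D / Dmax, where Dmax depends on the sign of D.
    """
    D = pAB - pA * pB
    if D >= 0:
        d_max = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        d_max = min(pA * pB, (1 - pA) * (1 - pB))
    return D / d_max

# Complete association of two alleles at frequency 0.5 gives D' = 1:
complete = d_prime(0.5, 0.5, 0.5)
# Haplotype frequency equal to the product of the marginals gives D' = 0:
equilibrium = d_prime(0.3, 0.4, 0.12)
```

Because D' is bounded in [-1, 1] regardless of allele frequencies, it allows comparing the strength of disequilibrium across allele pairs, as done for HTG4*M/HMS3*P and HTG4*K/HMS3*P above.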
Abstract:
We sought to evaluate the performance of diagnostic tools to establish an affordable setting for the early detection of cervical cancer in developing countries. We compared the performance and feasibility of different screening tests in a cohort of over 12,000 women: conventional Pap smear, liquid-based cytology, visual inspection with acetic acid (VIA), visual inspection with Lugol's iodine (VILI), cervicography, screening colposcopy, and high-risk human papillomavirus (HR-HPV) testing collected by a physician and by self-sampling. The HR-HPV assay collected by the physician had the highest sensitivity (80%), but a high rate of unnecessary referrals to colposcopy (15.1%). The HR-HPV test on self-sampling had a markedly lower sensitivity (57.1%). VIA, VILI and cervicography had poor sensitivity (47.4%, 55% and 28.6%, respectively). Colposcopy presented a sensitivity of 100% in detecting CIN2+, but the lowest specificity (66.9%). Co-testing the Pap test with VIA and with VILI increased the sensitivity of the stand-alone Pap test from 71.6% to 87.1% and from 71.6% to 95%, respectively, but with a high number of unnecessary colposcopies. Co-testing with HR-HPV importantly increased the sensitivity of the Pap test (to 86%), but with a high number of unnecessary colposcopies (17.5%). Molecular tests adjunct to the Pap test seem a realistic option to improve the detection of high-grade lesions in population-based screening programs.
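The trade-off reported for co-testing, higher sensitivity at the cost of more unnecessary referrals, follows from the usual refer-if-either-is-positive rule. Under the simplifying assumption of conditional independence between the two tests (the paper reports empirical figures, which need not obey this assumption), the combination can be sketched as:

```python
def cotest_sensitivity(s1, s2):
    """Sensitivity of a co-test that refers when EITHER test is positive.

    Assumes conditional independence of the two tests given disease status;
    a case is missed only if both tests miss it.
    """
    return 1 - (1 - s1) * (1 - s2)

def cotest_specificity(sp1, sp2):
    """Specificity of the same rule: a healthy woman is cleared only
    when BOTH tests are negative."""
    return sp1 * sp2

# Illustrative values only: a 0.8-sensitive test combined with a 0.5-sensitive one.
sens = cotest_sensitivity(0.8, 0.5)
spec = cotest_specificity(0.9, 0.8)
```

The second function makes the cost explicit: specificity can only decrease under an either-positive rule, which is exactly the source of the unnecessary colposcopies noted above.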
Abstract:
Public health strategies to reduce cardiovascular morbidity and mortality should focus on global cardiometabolic risk reduction. The efficacy of lifestyle changes in preventing type 2 diabetes has been demonstrated, but low-cost interventions to reduce cardiometabolic risk in Latin America have rarely been reported. Our group developed 2 programs to promote the health of high-risk individuals attending a primary care center in Brazil. This study compared the effects on cardiometabolic parameters of two 9-month lifestyle interventions: one based on medical consultations (traditional) and another with 13 multi-professional group sessions in addition to the medical consultations (intensive). Adults were eligible if they had pre-diabetes (according to the American Diabetes Association) and/or metabolic syndrome (International Diabetes Federation criteria for Latin America). Data were expressed as means and standard deviations or percentages and compared between groups or testing visits. A p-value < 0.05 was considered significant. Results: 180 individuals agreed to participate (35.0% men, mean age 54.7 ± 12.3 years, 86.1% overweight or obese); 83 were allocated to the traditional and 97 to the intensive program. Both interventions reduced body mass index, waist circumference and tumor necrosis factor-α. Only the intensive program reduced 2-hour plasma glucose and blood pressure and increased adiponectin values, while HDL-cholesterol increased only in the traditional program. Responses were also better in the intensive than in the traditional program in terms of blood pressure and adiponectin improvements. No new case of diabetes was detected in the intensive program, but 3 cases and one myocardial infarction were detected in the traditional program. Both programs induced metabolic improvement in the short term, but whether the better results in the intensive program are due to higher risk awareness and self-motivation deserves further investigation.
In conclusion, these low-cost interventions are able to minimize cardiometabolic risk factors involved in the progression to type 2 diabetes and/or cardiovascular disease.
Abstract:
This research investigates the ways of implementing dual-language programs and schools' internal procedures for evaluating them. Previous studies have examined the effectiveness of bilingual programs (Genovesee et al. 2005; Howard et al. 2005; Krashen 2004). However, little is still known about the schools' procedures that systematize the organizational aspect of such programs. The Mixed Methods Research (MMR) approach was applied in this study to analyze data collected through questionnaires, interviews, and case studies.
Abstract:
Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, in this thesis an approach is proposed for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves, under certain assumptions, the properties proved at the architectural level. The approach is divided into three phases, which are illustrated by means of a running example based on an audio processing system. First, we develop an architecture-driven technique for thread coordination management, which is completely automated through a suitable package. Second, we address the translation of the algebraically-specified behavior of the individual software units into thread templates, which will have to be filled in by the software developer according to certain guidelines. Third, we discuss performance issues related to the suitability of synthesizing monitors rather than threads from software unit descriptions that satisfy specific constraints. In addition to the running example, we present two case studies, about a video animation repainting system and the implementation of a leader election algorithm, in order to summarize the whole approach. The outcome of this thesis is the implementation of the proposed approach in a translator called PADL2Java and its integration into the architecture-centric verification tool TwoTowers.
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is asked to take. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of b_i's relative to areas declared high-risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all b_i's lower than a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat, we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation) obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high. In such scenarios, selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested.
In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on FDR-hat = 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
Abstract:
The research centers on the study of the architectural work of Emil Steffann (1899-1968), whose built production comprises, in the short span from 1950 to 1968, the remarkable number of thirty-nine churches, representing an emblematic case of the design and construction of buildings for Christian worship capable of concretely embodying their liturgical, aesthetic and morphological founding principles. Steffann's architecture, deeply inspired by the religious spirit, is tied to primal figures that shape the being-together of the community in the corporeal quality of matter, where liturgical and monumental presence is expressed in the silence and availability of a space circumscribed by walls and directed by light; in its objective love for the true, it contributes to defining the aesthetic-theological perception and the formative poetics that, in our view, characterize the design and the sign of the church. The text constitutes the first complete monographic study of this architectural corpus and is based on a direct survey of Steffann's works. The result is a narrative that does not follow a chronological order or a presumed importance of the buildings, but rather seeks and highlights correspondences between the nodes of an ideational network which, with varying degrees of finish, at points not always homogeneous in time and space, denotes an authentic experience of composing and building. The account identifies the architectural objects, discusses their consistency while opening up to other references (in particular the ecclesiological-liturgical thought of Romano Guardini and the aesthetic-theological thought of Hans Urs von Balthasar) capable of illuminating their genesis and manifestation, and finally links them in analogical sequences. A series of original photographic plates, an essential and integral part of the research, documents the current state of the sites, further characterizing the informative-representational aspect of their architectural composition.
In closing, the architectural synthesis is intended as a tool for verification and design, and thus for future transposition, correlated with the documentary elaboration.