952 results for software methodology
Abstract:
Principles of the lightweight software process. Low bureaucracy and adaptation to project characteristics. Basic guidelines for project and configuration management. Project management. Configuration management. Definition of the basic guidelines and of the audit process. Dissemination of a language for defining software representations. Use of public-domain tools. Testing early and often. Actions for deploying the lightweight software process. Definition of the basic guidelines and of process auditing. Identification of good practices at Embrapa Informática Agropecuária. Dissemination of the lightweight software process. Related work.
Abstract:
v. 1. Aspects of software product quality at Embrapa. Overview of quality. Software quality. Software product quality certification. NBR 13596 - quality model: characteristics and subcharacteristics. NBR 12119 - software packages - testing and quality requirements. Quality at Embrapa.
Abstract:
The Agência de Informação Embrapa makes qualified and organized information available on the internet, often including information generated by Embrapa itself. The software solutions presented in this work address the management of this information, which is stored in a centralized database and updated over the internet by the system's applications. The purpose of presenting these solutions is to contribute to the development of systems with a similar methodological orientation. The requirements of this system were identified mainly from the shortcomings of its first version, which was oriented exclusively toward handling data formatted in XML. The new version features an architecture based on the Java 2 Enterprise Edition (J2EE) guidelines: a layered model (Model-View-Controller, MVC), use of containers, and a database management system. The result is a system that is more robust as a whole, in addition to improved maintainability. Index terms: J2EE, XML, PDOM, Model View Controller (MVC), Oracle.
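The layered MVC arrangement described above can be illustrated with a minimal, self-contained Java sketch. The class names (InfoRecord, InfoRepository, InfoController) and the in-memory list are illustrative assumptions only; in the actual system the controller role would be played by components running inside a J2EE container and persistence would go through the database management system.

// Minimal sketch of the layered MVC separation described above.
// All names are hypothetical; they only illustrate the separation of concerns.
import java.util.ArrayList;
import java.util.List;

/** Model layer: a plain data object representing one catalogued information item. */
class InfoRecord {
    final String id;
    final String title;
    InfoRecord(String id, String title) { this.id = id; this.title = title; }
}

/** Persistence layer: hides storage behind a simple interface. */
class InfoRepository {
    private final List<InfoRecord> store = new ArrayList<>();
    void save(InfoRecord r)    { store.add(r); }   // in the real system, an INSERT through the DBMS
    List<InfoRecord> findAll() { return store; }   // in the real system, a SELECT through the DBMS
}

/** Controller layer: receives a request, updates the model, selects the "view". */
class InfoController {
    private final InfoRepository repository = new InfoRepository();
    String handlePublish(String id, String title) {
        repository.save(new InfoRecord(id, title));
        return "published " + id;                   // stand-in for forwarding to a view component
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        InfoController controller = new InfoController();
        System.out.println(controller.handlePublish("rec-1", "Soil management notes"));
    }
}

The point of the layering is that the controller never touches storage details directly, which is what supports the maintainability gains the abstract mentions.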
Abstract:
Sugarcane cultivation has undergone profound technological and social changes in this decade, seeking to adapt to demands for production with high productivity, competitiveness and respect for the environment. Although Brazil is the world's largest producer of sugarcane, the burning of cane trash to facilitate harvesting is still practiced, causing economic, social and environmental losses. Without this burning (São Paulo State Decree No. 42056), the straw covering the soil will cause significant changes in crop management and in nitrogen dynamics. Given the complexity of the soil nitrogen cycle, its many transformation pathways and climatic variations, determining the best nitrogen management in cropping systems is difficult, since there is no soil analysis to support the farmer in this management. Simulation models that describe soil nitrogen transformations can predict values and indicate the best nitrogen management, both from the standpoint of sugarcane productivity and of environmental quality. Thus, the preliminary model proposed in Phase I of this study, in Technical Report 22 of Embrapa Informática Agropecuária, was, in Phase II of the project, adjusted with values for tropical soils and rebuilt in the STELLA simulation software, aggregating all the knowledge available in mathematical expressions on the subject. Numerical simulation of typical situations generated scenarios that allowed technical discussion of improved nitrogen fertilizer management. It was concluded that, despite the complex dynamics of nitrogen in the soil-plant system and the difficulties inherent in measuring available forms of N, the adjusted model proved to be an alternative for researchers, technicians and producers in understanding the processes involving nitrogen in the system, helping in the search for better management of nitrogen fertilizers for sugarcane in order to maintain adequate yields.
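As a rough illustration of the stock-and-flow style of model described above, the following Java sketch integrates a two-pool nitrogen balance with daily Euler steps. The rate constant, initial stocks and uptake value are arbitrary assumptions, not the parameters calibrated for tropical soils in the Phase II model.

// Illustrative sketch only: a two-pool nitrogen balance integrated with Euler steps,
// in the spirit of a stock-and-flow simulation. All numbers below are assumed.
public class NitrogenSketch {
    public static void main(String[] args) {
        double organicN = 2000.0;   // kg N/ha in soil organic matter (assumed)
        double mineralN = 30.0;     // kg N/ha available to the crop (assumed)
        final double kMin = 0.0005; // daily first-order mineralization rate (assumed)
        final double uptake = 1.2;  // daily crop uptake, kg N/ha (assumed)
        final double dt = 1.0;      // time step: one day

        for (int day = 1; day <= 120; day++) {
            double mineralized = kMin * organicN * dt;          // organic -> mineral flow
            double absorbed = Math.min(uptake * dt, mineralN);  // mineral -> plant flow
            organicN -= mineralized;
            mineralN += mineralized - absorbed;
            if (day % 30 == 0) {
                System.out.printf("day %3d: organic N = %.1f, mineral N = %.1f%n",
                                  day, organicN, mineralN);
            }
        }
    }
}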
Abstract:
In the context of this work, interatomic contacts are defined as the attractive or repulsive forces existing between distinct atoms.
Abstract:
The purpose of this communication is to present the Java(TM) implementation of the LIVIA software (Library for Visual Image Analysis). It is a digital image processing module applied to agriculture, developed at Embrapa Informática Agropecuária (Campinas/SP) at the request of Embrapa Meio Ambiente (Jaguariúna/SP).
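LIVIA's own API is not reproduced here. As a hedged illustration of the kind of digital image processing operation such a module performs, the sketch below converts an image to grayscale using only the standard Java imaging classes; the file names are placeholders.

// Not LIVIA's actual API: a minimal example of a digital image processing step
// using only standard Java classes. Input/output file names are placeholders.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class GrayscaleSketch {
    public static void main(String[] args) throws Exception {
        BufferedImage in = ImageIO.read(new File("leaf.png"));   // placeholder input image
        BufferedImage out = new BufferedImage(in.getWidth(), in.getHeight(),
                                              BufferedImage.TYPE_BYTE_GRAY);
        // Drawing the colour image onto a TYPE_BYTE_GRAY canvas converts it to grayscale.
        out.getGraphics().drawImage(in, 0, 0, null);
        ImageIO.write(out, "png", new File("leaf-gray.png"));
    }
}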
Abstract:
The purpose of this work is to present the results obtained in the context of the research project whose objective was to define a software infrastructure for deploying a portal, named WebAgritec, for the integration and interoperability of services developed by Embrapa Informática Agropecuária.
Abstract:
Purpose and rationale - The purpose of the exploratory research is to provide a deeper understanding of how the work environment enhances or constrains organisational creativity (creativity and innovation) within the context of the advertising sector. The argument for the proposed research is that the contemporary literature is dominated by quantitative research instruments to measure the climate and work environment across many different sectors. The most influential theory within the extant literature is the componential theory of organisational creativity and innovation, which is used as an analytical guide (Amabile, 1997; Figure 8) to conduct an ethnographic study within a creative advertising agency based in Scotland. The theory suggests that creative people (skills, expertise and task motivation) are influenced by the work environment in which they operate. This includes challenging work (+), work group supports (+), supervisory encouragement (+), freedom (+), sufficient resources (+), workload pressures (+ or -), organisational encouragement (+) and organisational impediments (-), which, it is argued, enhance (+) or constrain (-) both creativity and innovation. An interpretive research design is conducted to confirm, challenge or extend the componential theory of organisational creativity and innovation (Amabile, 1997; Figure 8) and contribute to knowledge as well as practice. Design/methodology/approach - The scholarly activity conducted within the context of the creative industries and advertising sector is in its infancy, and research from the alternative paradigm using qualitative methods is limited, which may provide new guidelines for this industry sector. As such, an ethnographic case study research design is a suitable methodology to provide a deeper understanding of the subject area and is consistent with a constructivist ontology and an interpretive epistemology. This ontological position is conducive to the researcher's axiology and values in that meaning is not discovered as an objective truth but socially constructed from multiple realities by social actors. As such, ethnography is the study of people in naturally occurring settings, and the creative advertising agency involved in the research is an appropriate purposive sample within an industry that is renowned for its creativity and innovation. Qualitative methods such as participant observation (field notes, meetings, rituals, social events and tracking a client brief), material artefacts (documents, websites, annual reports, emails, scrapbooks and photographic evidence) and focused interviews (informal and formal conversations, six taped and transcribed interviews and use of Survey Monkey) are used to provide a written account of the agency's work environment. The analytical process of interpreting the ethnographic text is supported by thematic analysis (selective, axial and open coding) through the use of manual analysis and NVivo9 software. Findings - The findings highlight a complex interaction between the people within the agency and the enhancers and constraints of the work environment in which they operate. This involves the creative work environment (Amabile, 1997; Figure 8) as well as the physical work environment (Cain, 2012; Dul and Ceylan, 2011; Dul et al., 2011) and that of social control and power (Foucault, 1977; Gahan et al., 2007; Knights and Willmott, 2007).
As such, the overarching themes to emerge from the data on how the work environment enhances or constrains organisational creativity include creative people (skills, expertise and task motivation), creative process (creative work environment and physical work environment) and creative power (working hours, value of creativity, self-fulfilment and surveillance). Therefore, the findings confirm that creative people interact with and are influenced by aspects of the creative work environment outlined by Amabile (1997; Figure 8). However, the results also challenge and extend the theory to include that of the physical work environment and creative power. Originality/value/implications - Methodologically, there is no other interpretive research that uses an ethnographic case study approach within the context of the advertising sector to explore and provide a deeper understanding of the subject area. As such, the contribution to knowledge in the form of a new interpretive framework (Figure 16) challenges and extends the existing body of knowledge (Amabile, 1997; Figure 8). Moreover, the contribution to practice includes a flexible set of industry guidelines (Appendix 13) that may be transferable to other organisational settings.
Abstract:
Malicious software (malware) has significantly increased in number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, there is a growing need for developers to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attackers' moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware. The idea is to show that a proactive approach can be applied both in the X86 and in the mobile world. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained by using the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to build systems that provide concrete security against general and evasion attacks.
Abstract:
Purpose - The aim of this study was to investigate whether the presence of a whole-face context during facial composite production facilitates construction of facial composite images. Design/Methodology - In Experiment 1, constructors viewed a celebrity face and then developed a facial composite using PRO-fit in one of two conditions: either the full-face was visible while facial features were selected, or only the feature currently being selected was visible. The composites were named by different participants. We then replicated the study using a more forensically-valid procedure: In Experiment 2 non-football fans viewed an image of a premiership footballer and 24 hours later constructed a composite of the face with a trained software operator. The resulting composites were named by football fans. Findings - In both studies we found that presence of the facial context promoted more identifiable facial composite images. Research limitations/implications – Though this study uses current software in an unconventional way, this was necessary to avoid error arising from between-system differences. Practical implications - Results confirm that composite software should have the whole-face context visible to witnesses throughout construction. Though some software systems do this, there remain others that present features in isolation and these findings show that these systems are unlikely to be optimal. Originality/value - This is the first study to demonstrate the importance of a full-face context for the construction of facial composite images. Results are valuable to police forces and developers of composite software.
Abstract:
Web threats are becoming a major issue for both governments and companies. Overall, web threats increased by as much as 600% during the last year (WebSense, 2013). This appears to be a significant issue, since many major businesses seem to provide these services. Denial of Service (DoS) attacks are one of the most significant web threats, and generally their aim is to exhaust the resources of the target machine (Mirkovic & Reiher, 2004). Distributed Denial of Service (DDoS) attacks are typically executed from many sources and can result in large traffic flows. During the last year, 11% of DDoS attacks were over 60 Gbps (Prolexic, 2013a). DDoS attacks are usually performed from large botnets, which are networks of remotely controlled computers. There is an increasing effort by governments and companies to shut down botnets (Dittrich, 2012), which has led attackers to look for alternative DDoS attack methods. One of the techniques to which attackers are returning is DDoS amplification attacks. Amplification attacks use intermediate devices called amplifiers in order to amplify the attacker's traffic. This work outlines an evaluation tool and evaluates an amplification attack based on the Trivial File Transfer Protocol (TFTP). This attack could have an amplification factor of approximately 60, which rates highly alongside other researched amplification attacks. This could be a substantial issue globally, given that the protocol is used by approximately 599,600 publicly open TFTP servers. Mitigation methods for this threat have also been considered and a variety of countermeasures are proposed. The effects of this attack on both the amplifier and the target were analysed based on the proposed metrics. While it has been reported that the breaching of TFTP would be possible (Schultz, 2013), this paper provides a complete methodology for the setup of the attack, and its verification.
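The amplification factor cited above is conventionally the ratio between the traffic the amplifier sends toward the victim and the traffic the attacker sends to the amplifier. The request and response sizes in the Java sketch below are illustrative placeholders, not measurements from the paper; they are chosen only so that the ratio comes out near the reported value of about 60.

// Worked example of the amplification-factor calculation referred to above.
// The request and response sizes are illustrative placeholders, not reported measurements.
public class AmplificationSketch {
    public static void main(String[] args) {
        double requestBytes = 50.0;     // bytes the attacker sends to the amplifier (assumed)
        double responseBytes = 3000.0;  // bytes the amplifier sends toward the victim (assumed)
        double amplificationFactor = responseBytes / requestBytes;
        // With these assumed sizes the factor is 60, the order of magnitude cited above.
        System.out.printf("Amplification factor: %.0f%n", amplificationFactor);
    }
}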
Abstract:
As a management tool, simulation software deserves greater analysis from both an academic and an industrial viewpoint. A comparative study of three packages was carried out from a 'first time' use approach. This allowed the ease of use and package features to be assessed using a simple theoretical benchmark manufacturing process. To back the use of these packages, an objective survey on simulation use and package features was carried out within the manufacturing industry. This identified the use of simulation software, its applicability and the perception of user requirements, thereby proposing an ideal package.
Abstract:
Objective: To develop sedation, pain, and agitation quality measures using process control methodology and evaluate their properties in clinical practice. Design: A Sedation Quality Assessment Tool was developed and validated to capture data for 12-hour periods of nursing care. Domains included pain/discomfort and sedation-agitation behaviors; sedative, analgesic, and neuromuscular blocking drug administration; ventilation status; and conditions potentially justifying deep sedation. Predefined sedation-related adverse events were recorded daily. Using an iterative process, algorithms were developed to describe the proportion of care periods with poor limb relaxation, poor ventilator synchronization, unnecessary deep sedation, agitation, and an overall optimum sedation metric. Proportion charts described processes over time (2-monthly intervals) for each ICU. The numbers of patients treated between sedation-related adverse events were described with G charts. Automated algorithms generated charts for 12 months of sequential data. Mean values for each process were calculated, and variation within and between ICUs explored qualitatively. Setting: Eight Scottish ICUs over a 12-month period. Patients: Mechanically ventilated patients. Interventions: None. Measurements and Main Results: The Sedation Quality Assessment Tool agitation-sedation domains correlated with the Richmond Sedation Agitation Scale score (Spearman ρ = 0.75) and were reliable in clinician-clinician (weighted kappa, κ = 0.66) and clinician-researcher (κ = 0.82) comparisons. The limb movement domain had fair correlation with the Behavioral Pain Scale (ρ = 0.24) and was reliable in clinician-clinician (κ = 0.58) and clinician-researcher (κ = 0.45) comparisons. Ventilator synchronization correlated with the Behavioral Pain Scale (ρ = 0.54), and reliability in clinician-clinician (κ = 0.29) and clinician-researcher (κ = 0.42) comparisons was fair to moderate. Eight hundred twenty-five patients were enrolled (range, 59-235 across ICUs), providing 12,385 care periods for evaluation (range, 655-3,481 across ICUs). The mean proportion of care periods with each quality metric varied between ICUs: excessive sedation 12-38%; agitation 4-17%; poor relaxation 13-21%; poor ventilator synchronization 8-17%; and overall optimum sedation 45-70%. Mean adverse event intervals ranged from 1.5 to 10.3 patients treated. The quality measures appeared relatively stable during the observation period. Conclusions: Process control methodology can be used to simultaneously monitor multiple aspects of pain-sedation-agitation management within ICUs. Variation within and between ICUs could be used as triggers to explore practice variation, improve quality, and monitor this over time.
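The proportion charts mentioned above follow standard statistical process control practice. The Java sketch below shows how 3-sigma control limits for such a chart are typically computed; this is the generic p-chart formula, not code or data from the study, and the counts are made up.

// Sketch of the standard p-chart (proportion chart) control-limit calculation.
// The counts are invented for illustration; they are not data from the study.
public class PChartSketch {
    public static void main(String[] args) {
        int[] flaggedPeriods = {12, 9, 15, 7, 11};   // care periods flagged per interval (assumed)
        int[] totalPeriods   = {80, 75, 90, 70, 85}; // care periods assessed per interval (assumed)

        int flagged = 0, total = 0;
        for (int i = 0; i < flaggedPeriods.length; i++) {
            flagged += flaggedPeriods[i];
            total += totalPeriods[i];
        }
        double pBar = (double) flagged / total;               // centre line: overall proportion
        double n = (double) total / totalPeriods.length;      // average subgroup size
        double sigma = Math.sqrt(pBar * (1 - pBar) / n);
        double ucl = pBar + 3 * sigma;                        // upper control limit
        double lcl = Math.max(0, pBar - 3 * sigma);           // lower control limit (floored at 0)

        System.out.printf("centre=%.3f  UCL=%.3f  LCL=%.3f%n", pBar, ucl, lcl);
    }
}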
Abstract:
Barnes, D. P., Hardy, N. W., Lee, M. H., Orgill, C. H., Sharpe, K. A. I. A software development package for intelligent supervisory systems. In Proc. ACME Res. Conf., Nottingham, September 1988, pp. 4