876 results for requirements traceability


Relevance: 20.00%

Abstract:

This paper describes an application of Social Network Analysis methods for the identification of knowledge demands in public organisations. Affiliation networks established in a postgraduate programme were analysed. The course was delivered in distance education mode and its students worked at public agencies. Relations established among course participants were mediated through a virtual learning environment based on Moodle. Data available in Moodle may be extracted using knowledge discovery in databases techniques. Potential degrees of closeness existing among different organisations and among the researched subjects were assessed. This suggests how organisations could cooperate in knowledge management and how to identify their common interests. The study points out that closeness among organisations and research topics may be assessed through affiliation networks. This opens up opportunities for applying knowledge management between organisations and for creating communities of practice. Concepts of knowledge management and social network analysis provide the theoretical and methodological basis.
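As a rough illustration of the projection step, the sketch below builds a toy affiliation network (participants linked to agencies and research topics; all names are hypothetical) and scores organisation-to-organisation closeness by the Jaccard overlap of their topics — one simple proxy for the closeness the abstract describes, not the study's actual method.

```python
from itertools import combinations

# Hypothetical affiliation data: course participant -> (agency, research topic).
affiliations = {
    "p1": ("AgencyA", "knowledge management"),
    "p2": ("AgencyA", "social networks"),
    "p3": ("AgencyB", "knowledge management"),
    "p4": ("AgencyB", "social networks"),
    "p5": ("AgencyC", "e-government"),
}

def topics_by_org(affs):
    """Project the two-mode (participant x attribute) network onto organisations."""
    topics = {}
    for org, topic in affs.values():
        topics.setdefault(org, set()).add(topic)
    return topics

def org_closeness(affs):
    """Jaccard overlap of research topics as a simple closeness proxy."""
    topics = topics_by_org(affs)
    return {
        (a, b): len(topics[a] & topics[b]) / len(topics[a] | topics[b])
        for a, b in combinations(sorted(topics), 2)
    }

closeness = org_closeness(affiliations)
# AgencyA and AgencyB share all topics; AgencyC shares none with either.
```

High-overlap pairs are the candidates for inter-organisational knowledge management and communities of practice.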

Relevance: 20.00%

Abstract:

Cosmic shear requires high-precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, differences between the spectral energy distributions of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) using stars of the same colour as the galaxies and (ii) estimating the galaxy spectral energy distribution from multiple colours and using a telescope model for the PSF. We find that both methods correct the effect to below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method, because its efficiency is less dependent on galaxy redshift than that of the broad-band colour method and because it takes full advantage of deeper photometry.
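The core effect can be sketched numerically: for a diffraction-limited telescope the PSF size scales as λ/D, so the effective broad-band PSF is an SED-weighted average that differs between blue and red objects. The aperture, band limits and power-law SEDs below are illustrative assumptions, not the paper's instrument model.

```python
import numpy as np

# Assumed aperture and band; the PSF of a diffraction-limited telescope
# scales linearly with wavelength: fwhm ~ 1.22 * lambda / D.
D = 1.2                                   # telescope aperture in metres
wave = np.linspace(550e-9, 900e-9, 200)   # broad-band filter range in metres

def effective_fwhm(sed):
    """SED-weighted diffraction-limited PSF size (radians), flat throughput."""
    fwhm = 1.22 * wave / D                # PSF size at each wavelength
    weights = sed / sed.sum()
    return np.sum(weights * fwhm)

blue_sed = wave ** -2.0   # blue object: flux falling with wavelength
red_sed = wave ** 2.0     # red object: flux rising with wavelength

fwhm_blue = effective_fwhm(blue_sed)
fwhm_red = effective_fwhm(red_sed)
# A red galaxy sees a larger effective PSF than a blue star, so using the
# stellar PSF uncorrected biases the inferred galaxy shape.
```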

Relevance: 20.00%

Abstract:

The process of host cell invasion by Trypanosoma cruzi depends on parasite energy. The source of energy used for this event is not known. To address this and other questions related to T. cruzi energy requirements and cell invasion, we analyzed metacyclic trypomastigote forms of the phylogenetically distant CL and G strains. For both strains, the nutritional stress experienced by cells starved for 24, 36, or 48 h in phosphate-buffered saline reduced the ATP content and the ability of the parasite to invade HeLa cells proportionally to the starvation time. Inhibition of ATP production by treating parasites with rotenone plus antimycin A also diminished the infectivity. Nutrient depletion did not alter the expression of gp82, the surface molecule that mediates CL strain internalization, but increased the expression of gp90, the negative regulator of cell invasion, in the G strain. When L-proline was given to metacyclic forms starved for 36 h, the ATP levels were restored to those of nonstarved controls for both strains. Glucose had no such effect, although this carbohydrate and L-proline were transported in a similar fashion. Recovery of infectivity promoted by L-proline treatment of starved parasites was restricted to the CL strain. The profile of restoration of ATP content and gp82-mediated invasion capacity by L-proline treatment of starved Y-strain parasites was similar to that of the CL strain, whereas the Dm28 and Dm30 strains, whose infectivity is downregulated by gp90, behaved like the G strain. L-Proline was also found to increase the ability of the CL strain to traverse a gastric mucin layer, a property important for the establishment of T. cruzi infection by the oral route. Efficient translocation of parasites through gastric mucin toward the target epithelial cells in the stomach mucosa is an essential requirement for subsequent cell invasion.
By relying on these closely associated ATP-driven processes, the metacyclic trypomastigotes effectively accomplish their internalization.

Relevance: 20.00%

Abstract:

The aim of task scheduling is to minimize the makespan of applications, making the best possible use of shared resources. Applications have requirements that call for customized environments for their execution. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work in the joint scheduling of tasks and VMs and in considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
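A minimal way to see why tasks and VMs must be scheduled jointly is the toy exhaustive search below (a stand-in for the paper's integer linear programs, not their formulation): each VM image must first cross the host's link, so low bandwidth penalises otherwise attractive hosts. All numbers are made up.

```python
from itertools import product

# Illustrative instance: task demands, host speeds/bandwidths, image size.
tasks = [4.0, 2.0, 3.0]                       # compute demand of each task
hosts = [{"speed": 2.0, "bandwidth": 10.0},   # fast host, fast link
         {"speed": 1.0, "bandwidth": 2.0}]    # slow host, slow link
IMAGE_SIZE = 10.0                             # VM image sent before execution

def makespan(assignment):
    """Finish time when task i runs on a VM placed on hosts[assignment[i]]."""
    finish = [0.0]
    for h, host in enumerate(hosts):
        load = sum(t for t, a in zip(tasks, assignment) if a == h)
        if load > 0:
            startup = IMAGE_SIZE / host["bandwidth"]  # image transfer time
            finish.append(startup + load / host["speed"])
    return max(finish)

# Exhaustive search over all task-to-host assignments (fine for toy sizes).
best = min(product(range(len(hosts)), repeat=len(tasks)), key=makespan)
```

In this instance the slow link makes the second host unattractive even for load balancing, which is exactly the bandwidth effect the schedulers account for.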

Relevance: 20.00%

Abstract:

Two-dimensional and 3D quantitative structure-activity relationship studies were performed on a series of diarylpyridines that act as cannabinoid receptor ligands, by means of hologram quantitative structure-activity relationship and comparative molecular field analysis methods. The quantitative structure-activity relationship models were built using a data set of 52 CB1 ligands that can be used as anti-obesity agents. Significant correlation coefficients (hologram quantitative structure-activity relationships: r² = 0.91, q² = 0.78; comparative molecular field analysis: r² = 0.98, q² = 0.77) were obtained, indicating the potential of these 2D and 3D models for predicting the activity of untested compounds. The models were then used to predict the potency of an external test set, and the predicted (calculated) values are in good agreement with the experimental results. The final quantitative structure-activity relationship models, along with the information obtained from 2D contribution maps and 3D contour maps, are useful tools for the design of novel CB1 ligands with improved anti-obesity potency.
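For readers unfamiliar with the two statistics quoted, the sketch below computes r² (fit to the training data) and q² (leave-one-out cross-validated) for a toy one-descriptor linear model on synthetic data — the same definitions, but not the paper's 52-ligand data set.

```python
import numpy as np

# Synthetic one-descriptor data standing in for the real ligand set.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

def fit_predict(xtr, ytr, xte):
    """Least-squares line fitted on (xtr, ytr), evaluated at xte."""
    slope, intercept = np.polyfit(xtr, ytr, 1)
    return slope * xte + intercept

ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - np.sum((y - fit_predict(x, y, x)) ** 2) / ss_tot

press = 0.0   # predictive residual sum of squares, for q^2
for i in range(x.size):
    keep = np.arange(x.size) != i
    press += (y[i] - fit_predict(x[keep], y[keep], x[i])) ** 2
q2 = 1 - press / ss_tot
# q2 sits below r2: predicting a held-out point is harder than refitting it.
```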

Relevance: 20.00%

Abstract:

During the development of system requirements, software system specifications are often inconsistent. Inconsistencies may arise for different reasons, for example, when multiple conflicting viewpoints are embodied in the specification, or when the specification itself is at a transient stage of evolution. These inconsistencies cannot always be resolved immediately. As a result, we argue that a formal framework for the analysis of evolving specifications should be able to tolerate inconsistency by allowing reasoning in the presence of inconsistency without trivialisation, and circumvent inconsistency by enabling impact analyses of potential changes to be carried out. This paper shows how clustered belief revision can help in this process. Clustered belief revision allows for the grouping of requirements with similar functionality into clusters and the assignment of priorities between them. By analysing the result of a cluster, an engineer can either choose to rectify problems in the specification or to postpone the changes until more information becomes available.
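The clustering-and-priority idea can be sketched with propositional requirements: clusters carry priorities, and when the merged specification is inconsistent, the lowest-priority cluster involved in a conflict is retracted first. This is a toy reading of clustered belief revision with invented requirement names, not the paper's formal machinery.

```python
# Invented requirement clusters: signed propositional literals with priorities.
clusters = {
    "security":    {"priority": 3, "reqs": {("encrypt_data", True)}},
    "performance": {"priority": 2, "reqs": {("encrypt_data", False),
                                            ("cache_results", True)}},
    "logging":     {"priority": 1, "reqs": {("log_requests", True)}},
}

def conflicting_atoms(reqs):
    """Atoms that occur both positively and negatively."""
    pos = {atom for atom, value in reqs if value}
    neg = {atom for atom, value in reqs if not value}
    return pos & neg

def revise(clusters):
    """Retract lowest-priority conflicting clusters until consistent."""
    kept = dict(clusters)
    while True:
        merged = set().union(*(c["reqs"] for c in kept.values()))
        bad = conflicting_atoms(merged)
        if not bad:
            return kept
        involved = [n for n in kept
                    if any(atom in bad for atom, _ in kept[n]["reqs"])]
        del kept[min(involved, key=lambda n: kept[n]["priority"])]

surviving = sorted(revise(clusters))
# "performance" loses its conflict with the higher-priority "security" cluster;
# the unrelated "logging" cluster is untouched.
```

An engineer could inspect the retracted cluster and either rectify it or postpone the decision, as the abstract suggests.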

Relevance: 20.00%

Abstract:

In e-Science experiments, it is vital to record the experimental process for later use, such as interpreting results, verifying that the correct process took place or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture for a system that will provide the necessary functionality to record, store and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such an architecture would ideally support. In this paper, we present use cases for a provenance architecture from current experiments in biology, chemistry, physics and computer science, and analyse the use cases to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
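A minimal sketch of what "record, store and use process documentation" might look like, assuming a toy API (`ProcessRecord` and `ProvenanceStore` are invented names, not the paper's architecture): each record asserts that an activity produced outputs from inputs, and a lineage query traces a data item back through those records.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessRecord:
    activity: str
    inputs: tuple
    outputs: tuple

@dataclass
class ProvenanceStore:
    records: list = field(default_factory=list)

    def record(self, activity, inputs, outputs):
        """Document one step of the experimental process."""
        self.records.append(ProcessRecord(activity, tuple(inputs), tuple(outputs)))

    def lineage(self, item):
        """All data items and activities that `item` transitively depends on."""
        deps, frontier = set(), {item}
        while frontier:
            current = frontier.pop()
            for rec in self.records:
                if current in rec.outputs:
                    deps.add(rec.activity)
                    frontier.update(set(rec.inputs) - deps)
                    deps.update(rec.inputs)
        return deps

# Hypothetical bioinformatics workflow, in the spirit of the biology use case.
store = ProvenanceStore()
store.record("sequence_alignment", ["raw_reads"], ["alignment"])
store.record("variant_calling", ["alignment", "reference_genome"], ["variants"])
lineage = store.lineage("variants")
```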

Relevance: 20.00%

Abstract:

From where did this tweet originate? Was this quote from the New York Times modified? Daily, we rely on data from the Web but often it is difficult or impossible to determine where it came from or how it was produced. This lack of provenance is particularly evident when people and systems deal with Web information or with any environment where information comes from sources of varying quality. Provenance is not captured pervasively in information systems. There are major technical, social, and economic impediments that stand in the way of using provenance effectively. This paper synthesizes requirements for provenance on the Web for a number of dimensions focusing on three key aspects of provenance: the content of provenance, the management of provenance records, and the uses of provenance information. To illustrate these requirements, we use three synthesized scenarios that encompass provenance problems faced by Web users today.

Relevance: 20.00%

Abstract:

This is one of the first works to address the problem of assessing the effect of default for capital allocation purposes in the trading book for listed equities, and, more specifically, for the Brazilian market. This problem emerged in recent crises, which ultimately led regulators to impose an additional capital allocation for these operations. For this reason, the Basel Committee introduced a new risk metric known as the Incremental Risk Charge (IRC). This risk measure is essentially a one-year VaR at a 99.9% confidence level. The IRC aims to measure the effect of default and of rating migrations for trading book instruments. In this dissertation, the IRC is focused on equities and, as a consequence, does not take the effect of rating changes into account. Furthermore, the model used to assess the credit risk of equity issuers was Moody's KMV, which is based on the Merton model. The model was used to compute the PD for the cases used as examples in this dissertation. After computing the PD, I simulated the returns by Monte Carlo after applying a PCA. This approach yielded correlated returns for simulating portfolio losses. In this case, since we are dealing with equities, the LGD was held constant, and the value used was based on the Basel specifications. The results obtained for the adapted IRC were compared with a 252-day VaR at a 99.9% confidence level. This allowed me to conclude that the IRC is a relevant risk metric on the same scale as a 252-day VaR. Additionally, the adapted IRC was able to anticipate default events. All results were based on portfolios composed of stocks from the Bovespa index.
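The simulation pipeline described above can be sketched as follows, with a Cholesky factorisation standing in for the PCA-based factor construction and with invented PDs, correlations and exposures: defaults occur when a simulated one-year return falls below the PD-implied threshold, losses apply a constant LGD, and the adapted IRC is the 99.9% quantile of the loss distribution.

```python
import numpy as np
from statistics import NormalDist

# Illustrative portfolio: PDs, exposures and correlations are invented.
rng = np.random.default_rng(42)
n_sims = 200_000
pd_ = np.array([0.02, 0.05, 0.01])         # one-year default probabilities
exposure = np.array([100.0, 50.0, 80.0])   # position sizes
LGD = 0.75                                 # constant, Basel-style assumption
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])

# Correlated one-year standard-normal asset returns (Cholesky in place of
# the dissertation's PCA-based factor simulation).
z = rng.standard_normal((n_sims, 3)) @ np.linalg.cholesky(corr).T

# An issuer defaults when its return falls below the PD-implied threshold.
thresholds = np.array([NormalDist().inv_cdf(p) for p in pd_])
losses = ((z < thresholds) * exposure * LGD).sum(axis=1)

irc = np.quantile(losses, 0.999)           # adapted IRC: 99.9% loss quantile
```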

Relevance: 20.00%

Abstract:

The following document proposes a traceability solution for model-driven development. There has already been previous work in this area, but so far there is no standardized way of exchanging traceability information. Thus, the goal of the project developed and documented here is not to automate the traceability process but to provide an approach to traceability that follows OMG standards, making traceability information exchangeable between tools that follow the same standards. As such, we propose a traceability meta-model as an extension of the MetaObject Facility (MOF). Using the MetaSketch modeling language workbench, we present a modeling language for traceability information. This traceability information can then be used for tool cooperation. Using Meta.Tracer (the tool developed for this thesis), we enable users to establish traceability relationships between different traceability elements and offer a visualization of the traceability information. We then demonstrate the benefits of using a traceability tool in a software development life cycle through a case study. We conclude by commenting on the work developed.
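To make the kind of information such a traceability model carries concrete, the sketch below stores typed trace links between artefacts and transitively collects everything that traces into a requirement. This is an invented mini-schema for illustration, not the Meta.Tracer meta-model; all element and link-type names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceLink:
    source: str      # e.g. a design element or code artefact
    target: str      # e.g. the requirement or artefact it traces to
    link_type: str   # e.g. "satisfies", "realizes", "verifies"

links = [
    TraceLink("LoginScreen.design", "REQ-01-authentication", "satisfies"),
    TraceLink("AuthService.java", "LoginScreen.design", "realizes"),
    TraceLink("AuthTest.java", "REQ-01-authentication", "verifies"),
]

def trace_to(target, links):
    """Transitively collect every artefact that traces into `target`."""
    direct = {l.source for l in links if l.target == target}
    return direct | {s for d in direct for s in trace_to(d, links)}

impacted = trace_to("REQ-01-authentication", links)
# Everything that would need review if REQ-01 changed.
```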