879 results for bureaucratic requirements
Abstract:
Cosmic shear requires high-precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, differences between the spectral energy distributions of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per-cent-level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than that of the broad-band colour method and because it takes full advantage of deeper photometry.
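To make the colour dependence concrete, the sketch below computes an SED-weighted effective PSF size under the simplest possible assumptions: a purely diffraction-limited PSF whose FWHM scales as λ/D, a flat filter throughput, and toy blue/red SEDs. The aperture, filter range and SEDs are illustrative placeholders, not the paper's three-component PSF model.

```python
import numpy as np

# SED-weighted effective PSF size for a diffraction-limited telescope.
# Aperture, filter range and the toy SEDs are illustrative placeholders.

D = 1.2                                            # aperture in metres (hypothetical)
wavelengths = np.linspace(550e-9, 900e-9, 200)     # broad-band filter, in metres

def psf_fwhm(wavelength, aperture=D):
    """Diffraction-limited FWHM in radians, ~1.03 * lambda / aperture."""
    return 1.03 * wavelength / aperture

def effective_fwhm(sed, throughput):
    """PSF size weighted by the object's SED times the filter throughput."""
    return np.average(psf_fwhm(wavelengths), weights=sed * throughput)

flat_throughput = np.ones_like(wavelengths)
blue_sed = wavelengths.min() / wavelengths         # flux falling with wavelength
red_sed = wavelengths / wavelengths.min()          # flux rising with wavelength

print(effective_fwhm(blue_sed, flat_throughput))   # smaller effective PSF
print(effective_fwhm(red_sed, flat_throughput))    # larger effective PSF
```

Under these assumptions a red object sees a measurably larger effective PSF than a blue one, which is precisely the bias the two correction methods aim to remove.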
Abstract:
The process of host cell invasion by Trypanosoma cruzi depends on parasite energy. Which energy source is used for that event is not known. To address this and other questions related to T. cruzi energy requirements and cell invasion, we analyzed metacyclic trypomastigote forms of the phylogenetically distant CL and G strains. For both strains, the nutritional stress experienced by cells starved for 24, 36, or 48 h in phosphate-buffered saline reduced the ATP content and the ability of the parasite to invade HeLa cells in proportion to the starvation time. Inhibition of ATP production by treating parasites with rotenone plus antimycin A also diminished the infectivity. Nutrient depletion did not alter the expression of gp82, the surface molecule that mediates CL strain internalization, but increased the expression of gp90, the negative regulator of cell invasion, in the G strain. When L-proline was given to metacyclic forms starved for 36 h, the ATP levels were restored to those of nonstarved controls for both strains. Glucose had no such effect, although this carbohydrate and L-proline were transported in a similar fashion. Recovery of infectivity promoted by L-proline treatment of starved parasites was restricted to the CL strain. The profile of restoration of ATP content and gp82-mediated invasion capacity by L-proline treatment of starved Y-strain parasites was similar to that of the CL strain, whereas the Dm28 and Dm30 strains, whose infectivity is downregulated by gp90, behaved like the G strain. L-Proline was also found to increase the ability of the CL strain to traverse a gastric mucin layer, a property important for the establishment of T. cruzi infection by the oral route. Efficient translocation of parasites through gastric mucin toward the target epithelial cells in the stomach mucosa is an essential requirement for subsequent cell invasion. By relying on these closely associated ATP-driven processes, the metacyclic trypomastigotes effectively accomplish their internalization.
Abstract:
The aim of task scheduling is to minimize the makespan of applications by exploiting shared resources in the best possible way. Applications have requirements that call for customized environments for their execution, and one way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work in the joint scheduling of tasks and VMs and in considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
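As a rough illustration of a joint formulation (not the authors' actual model, which also accounts for bandwidth), the following sketch uses the PuLP library to assign tasks to candidate VMs while minimizing the makespan; the task runtimes and VM names are made-up placeholders.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Toy joint formulation: binary variables assign each task to one VM, and the
# total runtime placed on any VM bounds the makespan being minimized.
# Task runtimes and VM names are illustrative placeholders.

tasks = {"t1": 4, "t2": 2, "t3": 6, "t4": 3}   # task -> runtime (arbitrary units)
vms = ["vm1", "vm2"]                            # candidate VMs on grid resources

prob = LpProblem("joint_task_vm_schedule", LpMinimize)
x = {(t, v): LpVariable(f"x_{t}_{v}", cat=LpBinary) for t in tasks for v in vms}
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                 # objective: minimize the makespan
for t in tasks:                                  # each task runs on exactly one VM
    prob += lpSum(x[t, v] for v in vms) == 1
for v in vms:                                    # load on each VM bounds the makespan
    prob += lpSum(tasks[t] * x[t, v] for t in tasks) <= makespan

prob.solve()
print({t: next(v for v in vms if x[t, v].value() == 1) for t in tasks})
print("makespan:", makespan.value())
```

A full formulation would add variables for placing VMs on grid resources and bandwidth-dependent transfer times, but the assignment constraints and makespan objective keep the same shape.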
Abstract:
Two-dimensional and 3D quantitative structure-activity relationship studies were performed on a series of diarylpyridines that act as cannabinoid receptor ligands, by means of hologram quantitative structure-activity relationship and comparative molecular field analysis methods. The quantitative structure-activity relationship models were built using a data set of 52 CB1 ligands that can be used as anti-obesity agents. Significant correlation coefficients (hologram quantitative structure-activity relationships: r² = 0.91, q² = 0.78; comparative molecular field analysis: r² = 0.98, q² = 0.77) were obtained, indicating the predictive potential of these 2D and 3D models for untested compounds. The models were then used to predict the potency of an external test set, and the predicted (calculated) values are in good agreement with the experimental results. The final quantitative structure-activity relationship models, along with the information obtained from the 2D contribution maps and 3D contour maps in this study, are useful tools for the design of novel CB1 ligands with improved anti-obesity potency.
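For readers unfamiliar with the reported statistics, the sketch below computes a fitted r² and a leave-one-out cross-validated q² on synthetic data; the 52-by-5 descriptor matrix and activities are random stand-ins, not the HQSAR or CoMFA fields used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Fitted r^2 versus leave-one-out q^2 on synthetic data. The 52x5 descriptor
# matrix and activities are random stand-ins, not HQSAR/CoMFA fields.

rng = np.random.default_rng(0)
X = rng.normal(size=(52, 5))                     # 52 ligands x 5 descriptors
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + rng.normal(scale=0.3, size=52)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                           # conventional fitted r^2

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)                 # predictive residual sum of squares
q2 = 1 - press / np.sum((y - y.mean()) ** 2)     # cross-validated q^2

print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")
```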
Abstract:
During the development of system requirements, software system specifications are often inconsistent. Inconsistencies may arise for different reasons, for example, when multiple conflicting viewpoints are embodied in the specification, or when the specification itself is at a transient stage of evolution. These inconsistencies cannot always be resolved immediately. As a result, we argue that a formal framework for the analysis of evolving specifications should be able to tolerate inconsistency by allowing reasoning in the presence of inconsistency without trivialisation, and circumvent inconsistency by enabling impact analyses of potential changes to be carried out. This paper shows how clustered belief revision can help in this process. Clustered belief revision allows for the grouping of requirements with similar functionality into clusters and the assignment of priorities between them. By analysing the result of a cluster, an engineer can either choose to rectify problems in the specification or to postpone the changes until more information becomes available.
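A toy sketch of the underlying idea (not the paper's clustered belief revision calculus): requirements are grouped into named clusters with priorities, and when the active set is inconsistent the lowest-priority cluster is dropped first. The requirements and features below are invented for illustration.

```python
from itertools import product

# Toy prioritised clusters: each cluster holds constraints over boolean
# features; when the active set is inconsistent, the lowest-priority cluster
# is dropped first. Requirements and features are invented for illustration.

features = ["manual_override", "alarm"]
clusters = [  # (priority, name, constraint over a feature assignment)
    (2, "safety", lambda a: not a["manual_override"] or a["alarm"]),
    (1, "usability", lambda a: a["manual_override"]),
    (0, "cost", lambda a: not a["alarm"]),
]

def consistent(active):
    """True if some assignment of the features satisfies every constraint."""
    return any(
        all(c(dict(zip(features, bits))) for _, _, c in active)
        for bits in product([False, True], repeat=len(features))
    )

active = sorted(clusters, reverse=True)   # highest priority first
while not consistent(active):             # drop lowest-priority cluster on conflict
    _, name, _ = active.pop()
    print("dropped cluster:", name)
print("kept:", [name for _, name, _ in active])
```

Here the "cost" cluster conflicts with the higher-priority safety and usability requirements and is dropped, mirroring how an engineer might postpone low-priority changes until more information becomes available.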
Abstract:
In e-Science experiments, it is vital to record the experimental process for later use such as in interpreting results, verifying that the correct process took place or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture for a system that will provide the necessary functionality to record, store and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such an architecture would ideally support. In this paper, we present use cases for a provenance architecture from current experiments in biology, chemistry, physics and computer science, and analyse these use cases to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
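As a minimal illustration of what such an architecture must record, the sketch below defines a process documentation record and traces the provenance of a data item back through the steps that produced it; the record fields and the example pipeline are illustrative assumptions, not the architecture proposed in the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal process documentation record and a trace of where a data item came
# from. Field names and the example pipeline are illustrative assumptions.

@dataclass
class ProcessRecord:
    actor: str
    activity: str
    inputs: list
    outputs: list
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

store = [
    ProcessRecord("sequencer", "read_sample", [], ["raw_reads"]),
    ProcessRecord("pipeline", "align", ["raw_reads"], ["alignment"]),
    ProcessRecord("analyst", "call_variants", ["alignment"], ["variants"]),
]

def provenance_of(item, records):
    """Collect the chain of records that produced a data item."""
    chain = []
    for rec in records:
        if item in rec.outputs:
            chain.append(rec)
            for src in rec.inputs:
                chain.extend(provenance_of(src, records))
    return chain

for rec in provenance_of("variants", store):
    print(rec.activity, "->", rec.outputs)
```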
Abstract:
From where did this tweet originate? Was this quote from the New York Times modified? Daily, we rely on data from the Web, but it is often difficult or impossible to determine where those data came from or how they were produced. This lack of provenance is particularly evident when people and systems deal with Web information, or with any environment where information comes from sources of varying quality. Provenance is not captured pervasively in information systems, and major technical, social, and economic impediments stand in the way of using provenance effectively. This paper synthesizes requirements for provenance on the Web along a number of dimensions, focusing on three key aspects of provenance: the content of provenance, the management of provenance records, and the uses of provenance information. To illustrate these requirements, we use three synthesized scenarios that encompass provenance problems faced by Web users today.
Abstract:
The personal autonomy of public servants, in their actions within public administration, is one of the preconditions for the effective implementation of knowledge management initiatives. It is also an aspiration of workers, consistently defended in statements by the most varied professional associations. It runs up, however, against political, legal, administrative and cultural constraints. Drawing on secondary and theoretical sources, this work identified the nature of personal autonomy, its modalities, its sources, its constraints, and the possibilities for its development. The study, theoretical in nature, was developed through a transdisciplinary interpretation of the sources, drawn largely from the sociological, administrative, legal and philosophical literature. The concept of autonomy is examined first, followed by its initial subdivision into two dimensions. Next, the discipline that Brazilian Administrative Law doctrine imposes on the autonomy of the public servant is explored and problematized. The question is then addressed from a sociological standpoint, starting from Max Weber's ideal bureaucratic model and the findings of Michel Crozier. The relationship between autonomy and professional bureaucracies is also reviewed. Finally, the human personality is presented as the source of autonomy, together with its justification against doctrines that deny and attack it. Three dimensions of autonomy were identified: substantive, technical and objective. Paths were also proposed so that, in public organizations, these dimensions can flourish within the legitimate political, legal and administrative limits identified.
Abstract:
This is one of the first studies to address the problem of assessing the effect of default for capital allocation purposes in the trading book for listed equities, and, more specifically, for the Brazilian market. The problem emerged in recent crises, which ultimately led regulators to impose an additional capital allocation for these operations. For this reason, the Basel Committee introduced a new risk metric known as the Incremental Risk Charge (IRC). This risk measure is essentially a one-year VaR at a 99.9% confidence level. The IRC aims to measure the effect of default and rating migrations for trading book instruments. In this dissertation, the IRC is focused on equities and consequently does not take the effect of rating migrations into account. In addition, the model used to assess the credit risk of equity issuers was Moody's KMV, which is based on the Merton model; it was used to calculate the probability of default (PD) for the cases used as examples in this dissertation. After calculating the PD, I simulated returns by Monte Carlo after applying principal component analysis (PCA). This approach yields correlated returns for simulating portfolio losses. Since we are dealing with equities, the loss given default (LGD) was held constant, at a value based on the Basel specifications. The results obtained for the adapted IRC were compared with a 252-day VaR at a 99.9% confidence level, which allowed the conclusion that the IRC is a relevant risk metric on the same scale as a 252-day VaR. Furthermore, the adapted IRC was able to anticipate default events. All results were based on portfolios composed of stocks from the Bovespa index.
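A minimal sketch of the adapted IRC computation described above: issuer returns are correlated through a PCA factor decomposition of a correlation matrix, defaults are triggered Merton-style when a standardized return falls below the threshold implied by each issuer's PD, and the IRC is read off as the 99.9% quantile of simulated one-year losses. The PDs, exposures, LGD and correlations are placeholder inputs, not the dissertation's calibrated values.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the adapted IRC: PCA factor structure for correlated
# issuer returns, Merton-style default thresholds from each issuer's PD,
# constant LGD, and a one-year 99.9% loss quantile. All inputs below are
# hypothetical placeholders, not the dissertation's calibrated values.

rng = np.random.default_rng(42)
exposures = np.array([100.0, 80.0, 120.0])   # position sizes (hypothetical)
pd_1y = np.array([0.02, 0.01, 0.03])         # one-year PDs, e.g. from a KMV-style model
lgd = 0.75                                    # constant LGD, Basel-style assumption

corr = np.array([[1.0, 0.4, 0.3],
                 [0.4, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])

# PCA of the correlation matrix gives factor loadings for correlated draws.
eigval, eigvec = np.linalg.eigh(corr)
loadings = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0.0, None)))

n_sims = 200_000
z = rng.normal(size=(n_sims, 3)) @ loadings.T    # correlated standard normals
defaults = z < norm.ppf(pd_1y)                   # default below the PD threshold
losses = defaults.astype(float) @ (exposures * lgd)

irc = np.quantile(losses, 0.999)                 # one-year VaR at 99.9% confidence
print(f"adapted IRC: {irc:.1f}")
```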
Abstract:
The domain of Knowledge Discovery (KD) and Data Mining (DM) is of growing importance at a time when more and more data are produced and knowledge is one of the most precious assets. Having explored the existing underlying theory, the results of ongoing academic research, and industry practice in the domain of KD and DM, we have found that this domain still lacks some systematization. We also found that this systematization exists to a greater degree in the Software Engineering and Requirements Engineering domains, probably because these are more mature areas. We believe that it is possible to improve and facilitate the participation of enterprise stakeholders in requirements engineering for KD projects by systematizing the requirements engineering process for such projects. This will, in turn, result in more projects ending successfully, that is, with satisfied stakeholders, including in terms of time and budget constraints. With this in mind, and based on the state of the art, we propose SysPRE - Systematized Process for Requirements Engineering in KD projects. We begin by proposing an encompassing generic description of the KD process in which the main focus is on the Requirements Engineering activities. This description is then used as a base for the application of the Design and Engineering Methodology for Organizations (DEMO) to specify a formal ontology for this process. The resulting SysPRE ontology can serve as a base not only for making enterprises aware of their own KD process and of the requirements engineering process in their KD projects, but also for improving such processes in practice, namely in terms of success rate.
Abstract:
The objective of this study was to evaluate the protein requirements for hand-rearing Blue-fronted Amazon parrots (Amazona aestiva). Forty hatchlings were fed semi-purified diets containing one of four protein levels (as-fed basis): 13%, 18%, 23% and 28%. The experiment was carried out in a randomized block design with the initial weight of the nestling as the blocking factor and 10 parrots per protein level. Regression analysis was used to determine relationships between protein level and biometric measurements. The data indicated that 13% crude protein supported nestling growth, with 18% being the minimum tested level required for maximum development. The optimal protein concentration for maximum weight gain was 24.4% (p = 0.08; r² = 0.25), for tail length 23.7% (p = 0.09; r² = 0.19), for wing length 23.0% (p = 0.07; r² = 0.17), for tarsus length 21.3% (p = 0.06; r² = 0.10) and for tarsus width 21.4% (p = 0.07; r² = 0.09). Tarsus measurements were larger in males (p < 0.05), indicating that sex must be considered when studying developing psittacines. These results were obtained using a highly digestible protein and a diet with moderate metabolizable energy levels.
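The optima quoted above are the kind of values one obtains from a quadratic regression of a response on protein level; the sketch below fits such a curve and locates its vertex. The weight-gain figures are invented placeholders, not the study's data.

```python
import numpy as np

# Quadratic fit of a growth response against dietary protein, with the optimum
# at the vertex of the parabola. Weight-gain figures are invented placeholders.

protein = np.array([13.0, 18.0, 23.0, 28.0])   # % crude protein (as-fed)
gain = np.array([6.1, 8.4, 9.2, 8.9])          # hypothetical mean daily gain (g)

a, b, c = np.polyfit(protein, gain, deg=2)     # gain ~ a*p**2 + b*p + c
optimum = -b / (2 * a)                         # vertex: maximum of the fitted curve

fitted = np.polyval([a, b, c], protein)
r2 = 1 - np.sum((gain - fitted) ** 2) / np.sum((gain - gain.mean()) ** 2)

print(f"optimal protein ~ {optimum:.1f}%, r2 = {r2:.2f}")
```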