935 results for Functional Requirements for Authority Data (FRAD)
Abstract:
Scientific dissertation submitted for the degree of Master in Civil Engineering, specialization in Buildings
Abstract:
In metazoans, bone morphogenetic proteins (BMPs) direct a myriad of developmental and adult homeostatic events through their heterotetrameric type I and type II receptor complexes. We examined 3 existing and 12 newly generated mutations in the Drosophila type I receptor gene, saxophone (sax), the ortholog of the human Activin Receptor-Like Kinase 1 and 2 (ALK1/ACVRL1 and ALK2/ACVR1) genes. Our genetic analyses identified two distinct classes of sax alleles. The first class consists of homozygous viable gain-of-function (GOF) alleles that exhibit (1) synthetic lethality in combination with mutations in BMP pathway components, and (2) significant maternal-effect lethality that can be rescued by an increased dosage of the BMP-encoding gene, dpp(+). In contrast, the second class consists of alleles that are recessive lethal and do not exhibit lethality in combination with mutations in other BMP pathway components. The alleles in this second class are clearly loss-of-function (LOF), with both complete and partial loss-of-function mutations represented. We find that one allele in the second class of recessive lethals exhibits dominant-negative behavior, albeit distinct from the GOF activity of the first class of viable alleles. Based on the fact that the first class of viable alleles can be reverted to lethality, and on our ability to independently generate recessive lethal sax mutations, our analysis demonstrates that sax is an essential gene. Consistent with this conclusion, we find that a normal sax transcript is produced by sax(P), a viable allele previously reported to be null, and that this allele can be reverted to lethality. Interestingly, we determine that two mutations in the first class of sax alleles show the same amino acid substitutions as mutations in the human receptors ALK1/ACVRL1 and ALK2/ACVR1, responsible for cases of hereditary hemorrhagic telangiectasia type 2 (HHT2) and fibrodysplasia ossificans progressiva (FOP), respectively. Finally, the data presented here identify different functional requirements for the Sax receptor, support the proposal that Sax participates in a heteromeric receptor complex, and provide a mechanistic framework for future investigations into disease states that arise from defects in BMP/TGF-beta signaling.
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and correct interpretation of exchanged information.
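To make the connector idea concrete, the following is a minimal, hypothetical Python sketch of a connector applying declarative transformation rules to one expression record before handing it to another tool; the field names, the probe-to-gene mapping, and the transform helper are our illustrative assumptions, not the methodology's actual implementation.

```python
# Minimal sketch: a connector reads records in the source tool's schema and
# applies transformation rules so the target tool receives its own schema.
# (Field names, mapping, and rule format are illustrative assumptions.)

PROBE_TO_GENE = {"1007_s_at": "DDR1"}  # a mapping a reference ontology could supply

def transform(record: dict, rules: dict) -> dict:
    """Apply (rename, convert) rules to one gene expression record."""
    out = {}
    for src_field, (dst_field, convert) in rules.items():
        if src_field in record:
            out[dst_field] = convert(record[src_field])
    return out

rules = {
    "probe_id": ("gene_symbol", lambda v: PROBE_TO_GENE.get(v, v)),
    "signal":   ("expression_level", float),
}

source_record = {"probe_id": "1007_s_at", "signal": "213.5"}
print(transform(source_record, rules))
# -> {'gene_symbol': 'DDR1', 'expression_level': 213.5}
```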
Abstract:
Thesis to obtain the Master of Science Degree in Computer Science and Engineering
Abstract:
Doctoral thesis in Information Technologies and Systems
Abstract:
In this paper we introduce a highly efficient reversible data hiding system. It is based on dividing the image into tiles and shifting the histogram of each image tile between its minimum and maximum frequency bins. Data are then inserted at the pixel value with the largest frequency to maximize data hiding capacity. The scheme exploits the special properties of medical images, where the histograms of their non-overlapping image tiles mostly peak around a few gray values while the rest of the spectrum is mainly empty. The zeros (or minima) and peaks (maxima) of the histograms of the image tiles are then relocated to embed the data, so the gray values of some pixels are modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images, and we show how the histograms of image tiles of medical images can be exploited to achieve these requirements. Compared with data hiding methods applied to the whole image, our scheme yields a 30%-200% capacity improvement with better image quality, depending on the medical image content. Additional advantages of the proposed method include hiding data in regions of non-interest and better exploitation of spatial masking.
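The tile-level embedding step can be sketched as follows; this is a minimal illustration of the histogram-shifting idea under an assumed 8-bit grayscale format, not the authors' implementation (the function name embed_tile and the bookkeeping of the (peak, zero) pair are our assumptions).

```python
import numpy as np

def embed_tile(tile: np.ndarray, bits) -> tuple:
    """Embed bits into one 8-bit grayscale tile via histogram shifting.

    Sketch only: assumes the tile's minimum bin is (ideally) empty; the
    (peak, zero) pair must be stored for exact, reversible recovery.
    """
    hist = np.bincount(tile.ravel(), minlength=256)
    peak = int(hist.argmax())   # gray value with the largest frequency
    zero = int(hist.argmin())   # gray value with the smallest frequency
    marked = tile.astype(np.int32)
    flat = marked.ravel()

    if zero > peak:
        flat[(flat > peak) & (flat < zero)] += 1   # free the bin at peak+1
        mark_val = peak + 1
    else:
        flat[(flat < peak) & (flat > zero)] -= 1   # free the bin at peak-1
        mark_val = peak - 1

    carriers = np.flatnonzero(flat == peak)        # capacity = hist[peak]
    for idx, bit in zip(carriers, bits):
        if bit:                                    # a '1' moves the pixel into the freed bin
            flat[idx] = mark_val
    return marked.astype(tile.dtype), (peak, zero)
```

For recovery, a decoder reads one bit from each pixel valued peak or mark_val and then shifts the moved gray-value range back, restoring the original tile exactly.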
Abstract:
The purpose of this work was to develop the company's item management as part of the implementation project of a new enterprise resource planning (ERP) system. The objective was to improve the findability and usability of product data exchanged between the ERP system and the design information systems, from the perspective of the product's order-delivery process. The research problem was examined and analyzed on the basis of the literature on product data management and ERP systems; previous studies related to the topic were also used as source material. As a result of this Master's thesis, a standardized item nomenclature covering both purchased materials and in-house manufactured products was established in the ERP system. The transfer of item data between design and the ERP system was enabled by defining connections between the ERP and CAD systems. Thanks to this system integration, product structures created in design can be utilized in the various processes of the ERP system, such as production and procurement. The foundation of efficient product data management is a functional and standardized item nomenclature. The prerequisites for successful data management are that the data is unambiguous, up to date, and available, and that existing data can be exploited as effectively as possible.
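As a hypothetical illustration of the CAD-to-ERP item transfer described above, the following Python sketch turns CAD parts into standardized ERP item records and the design structure into bill-of-material rows; the field names, numbering scheme, and make/buy rule are our assumptions, not the company's actual integration.

```python
# Hypothetical sketch: CAD parts become standardized ERP items, and the
# design structure becomes BOM rows (field names and rules are illustrative).
def cad_part_to_item(part: dict) -> dict:
    return {
        "item_code": part["number"],                   # standardized item number
        "description": part["title"].strip().upper(),  # normalized description
        "make_or_buy": "MAKE" if part.get("manufactured") else "BUY",
    }

def structure_to_bom(parent: dict, children: list) -> list:
    return [{"parent": parent["number"], "component": c["number"], "qty": c["qty"]}
            for c in children]

assembly = {"number": "A-1000", "title": "Frame assembly", "manufactured": True}
parts = [{"number": "P-2001", "title": "Beam", "qty": 4}]

items = [cad_part_to_item(assembly)] + [cad_part_to_item(p) for p in parts]
bom = structure_to_bom(assembly, parts)
print(items, bom)
```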
Abstract:
Background: None of the HIV T-cell vaccine candidates that have reached advanced clinical testing have been able to induce protective T cell immunity. A major reason for these failures may have been suboptimal T cell immunogen designs. Methods: To overcome this problem, we used a novel immunogen design approach that is based on functional T cell response data from more than 1,000 HIV-1 clade B and C infected individuals and which aims to direct the T cell response to the most vulnerable sites of HIV-1. Results: Our approach identified 16 regions in Gag, Pol, Vif and Nef that were relatively conserved and predominantly targeted by individuals with reduced viral loads. These regions formed the basis of the HIVACAT T-cell Immunogen (HTI) sequence, which is 529 amino acids in length, includes more than 50 optimally defined CD4+ and CD8+ T-cell epitopes restricted by a wide range of HLA class I and II molecules, and covers viral sites where mutations led to a dramatic reduction in viral replicative fitness. In both C57BL/6 mice and Indian rhesus macaques, immunization with an HTI-expressing DNA plasmid (DNA.HTI) induced broad and balanced T-cell responses to several segments within Gag, Pol, and Vif. DNA.HTI induced robust CD4+ and CD8+ T cell responses that were increased by a booster vaccination using modified vaccinia virus Ankara (MVA.HTI), expanding the DNA.HTI-induced response to up to 3.2% IFN-γ+ T-cells in macaques. HTI-specific T cells showed a central and effector memory phenotype, with a significant fraction of the IFN-γ+ CD8+ T cells being Granzyme B+ and able to degranulate (CD107a+). Conclusions: These data demonstrate the immunogenicity of a novel HIV-1 T cell vaccine concept that induced broadly balanced responses to vulnerable sites of HIV-1 while avoiding the induction of responses to potential decoy targets that may divert effective T-cell responses towards variable and less protective viral determinants.
Abstract:
In a networked business environment, the visibility requirements towards supply operations and the customer interface have become tighter. The master data of the case company is seen as an enabler for meeting those requirements; however, the current state of the master data and its quality are not considered good enough. The target of this thesis was to develop a process for managing master data quality as a continuous process, and to find solutions to cleanse the current customer and supplier data to meet the quality requirements defined in that process. Based on the theory of Master Data Management and data cleansing, a small amount of master data was analyzed and cleansed using a commercial data cleansing solution available on the market. This was conducted in cooperation with the vendor as a proof of concept, which demonstrated the cleansing solution's applicability to improving the quality of the current master data. Based on those findings and the theory of data management, recommendations and proposals for improving the quality of the data were given. The results also revealed that the biggest reasons for poor data quality are the lack of data governance in the company and the restrictions of the current master data solutions.
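A minimal, hypothetical Python sketch of the kind of rule-based cleansing pass described above follows; the field names and match rules are our illustrative assumptions, not those of the commercial solution used in the thesis.

```python
import re

# Hypothetical sketch: normalize customer master records, then collapse
# duplicates on a simple match rule (VAT id, falling back to name + country).
def normalize(record: dict) -> dict:
    rec = dict(record)
    rec["name"] = re.sub(r"\s+", " ", rec.get("name", "")).strip().upper()
    rec["country"] = rec.get("country", "").strip().upper()
    rec["vat_id"] = re.sub(r"[^A-Z0-9]", "", rec.get("vat_id", "").upper())
    return rec

def deduplicate(records: list) -> list:
    seen, unique = set(), []
    for rec in map(normalize, records):
        key = (rec["vat_id"] or rec["name"], rec["country"])  # match rule
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

customers = [
    {"name": "Acme  Oy ", "country": "fi", "vat_id": "FI-1234567-8"},
    {"name": "ACME OY",   "country": "FI", "vat_id": "fi12345678"},
]
print(deduplicate(customers))  # both rows collapse into one golden record
```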
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology spheres, but also in other industries at large where software is applied. Software quality is not happenstance; it is defined, planned and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of the human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data has been collected from 13 software companies and includes 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality construction is an information-intensive process whereby organizational structures, mode of operation, and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction.
Abstract:
The great diversity in the architecture of biomedical devices, coupled with their different communication protocols, has hindered the implementation of systems that need to access these devices. Given these differences, the need arises to provide access to such devices in a transparent manner. In this sense, this paper proposes an embedded, service-oriented architecture for accessing biomedical devices, as a way to abstract the mechanism for writing and reading data on these devices. This contributes to increased quality and productivity of biomedical systems by allowing the development team of biomedical software to focus almost exclusively on its functional requirements.
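The following is a minimal, hypothetical Python sketch of the kind of uniform service contract implied above: clients read and write through one abstraction while each connector hides its device's protocol; the class and method names are our illustrative assumptions, not the paper's API.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: one abstract read/write contract per device family,
# with protocol details confined to concrete connectors.
class BiomedicalDeviceService(ABC):
    @abstractmethod
    def read(self, parameter: str) -> bytes:
        """Read a named measurement from the device."""

    @abstractmethod
    def write(self, parameter: str, value: bytes) -> None:
        """Write a configuration value to the device."""

class SerialOximeterService(BiomedicalDeviceService):
    """One concrete connector; callers depend only on the abstract interface."""

    def __init__(self, port: str):
        self.port = port  # e.g. "/dev/ttyUSB0"

    def read(self, parameter: str) -> bytes:
        # Protocol-specific framing and parsing would go here.
        return b"\x00"

    def write(self, parameter: str, value: bytes) -> None:
        # Protocol-specific command encoding would go here.
        pass
```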
Abstract:
This thesis presents πSOD-M (Policy-based Service Oriented Development Methodology), a methodology for modeling reliable service-based applications using policies. It proposes a model-driven method with: (i) a set of meta-models for representing non-functional constraints associated with service-based applications, from a use case model through to a service composition model; (ii) a platform providing guidelines for expressing the composition and the policies; (iii) model-to-model and model-to-text transformation rules for semi-automating the implementation of reliable service-based applications; and (iv) an environment that implements these meta-models and rules and enables the application of πSOD-M. This thesis also presents a classification and nomenclature of non-functional requirements for developing service-oriented applications. Our approach is intended to add value to the development of service-oriented applications that have quality requirements. This work draws on concepts from service-oriented development, non-functional requirements design and model-driven development to propose a solution that minimizes the problem of reliable service modeling. Some examples are developed as proofs of concept.
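As a hypothetical sketch of the core idea, the following Python fragment attaches a non-functional policy declaratively to a service in a composition model and shows where a model-to-text rule would emit the corresponding boilerplate; the class names and policy fields are our assumptions, not πSOD-M's actual meta-models.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: non-functional policies attached declaratively to
# services in a composition model, later consumed by transformation rules.
@dataclass
class Policy:
    name: str          # e.g. a reliability constraint
    max_retries: int
    timeout_s: float

@dataclass
class ServiceRef:
    endpoint: str
    policies: List[Policy] = field(default_factory=list)

composition = [
    ServiceRef("http://example.org/payment",
               [Policy("reliable-invocation", max_retries=3, timeout_s=2.0)]),
    ServiceRef("http://example.org/shipping"),
]

# A model-to-text rule would emit, per policy, the retry/timeout wrapper the
# generated client needs; here we only print what would be generated.
for svc in composition:
    for p in svc.policies:
        print(f"generate retry wrapper for {svc.endpoint}: "
              f"{p.max_retries} retries, {p.timeout_s}s timeout")
```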
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)