993 results for Relational databases -- Design
Abstract:
Some examples from the book: Connolly, T. M. and C. E. Begg (2005). Database systems: a practical approach to design, implementation, and management. Harlow, Essex, England; New York: Addison-Wesley.
Abstract:
A Web-based tool developed to automatically correct relational database schemas is presented. The tool has been integrated into a more general e-learning platform and is used to reinforce teaching and learning in database courses. The platform assigns each student a set of database problems selected from a common repository. The student has to design a relational database schema and enter it into the system through a user-friendly interface designed specifically for this purpose. The correction tool checks the design and shows the errors it detects. The student then has the chance to correct them and submit a new solution. These steps can be repeated as many times as required until a correct solution is obtained. The system is currently being used in several introductory database courses at the University of Girona with very promising results.
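To make the kind of automated correction described above concrete, here is a minimal sketch, not taken from the Girona tool, of one check such a corrector could run on a submitted schema: every table must declare a primary key and every foreign key must reference an existing table. All table and column names are hypothetical.

```python
# Minimal sketch (not the Girona tool's actual code) of one automatic check a
# schema corrector could run: every table must declare a primary key and every
# foreign key must reference an existing table. All names are hypothetical.

schema = {
    "student":   {"pk": ["id"], "fks": {}},
    "enrolment": {"pk": ["student_id", "course_id"],
                  "fks": {"student_id": "student", "course_id": "course"}},
}

def check_schema(schema):
    errors = []
    for table, spec in schema.items():
        if not spec.get("pk"):
            errors.append(f"{table}: no primary key declared")
        for column, target in spec.get("fks", {}).items():
            if target not in schema:
                errors.append(f"{table}.{column}: references unknown table '{target}'")
    return errors

print(check_schema(schema))
# ["enrolment.course_id: references unknown table 'course'"]
```

In a real corrector the reported errors would be mapped back to feedback messages shown to the student, who can then fix the schema and resubmit.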
Abstract:
As digital technologies become widely used in designing buildings and infrastructure, questions arise about their impacts on construction safety. This review explores relationships between construction safety and digital design practices with the aim of fostering and directing further research. It surveys state-of-the-art research on databases, virtual reality, geographic information systems, 4D CAD, building information modeling and sensing technologies, finding various digital tools for addressing safety issues in the construction phase but few tools to support design for construction safety. It also considers the literature on safety-critical, digital and design practices, which raises a general concern about ‘mindlessness’ in the use of technologies and has implications for the emerging research agenda around construction safety and digital design. Bringing these strands of literature together suggests new kinds of interventions, such as the development of tools and processes for using digital models to promote mindfulness through multi-party collaboration on safety.
Abstract:
Purpose – Despite recent threats of economic contraction, China still offers attractive opportunities for foreign companies seeking to expand their business activities through joint venture (JV) partnering entry strategies. Recent research has indicated a growing recognition of the importance of relational factors in JV partnering. The purpose of this paper is to build on recent research findings that identify critical relational success factors in JVs and to explore these in the context of a Hong Kong-based civil aviation services company seeking to expand business activities in Greater China. Design/methodology/approach – While the extant management literature focuses primarily on factors relevant to the inter-partner relationship in the formation stage of a joint venture, this research takes a dynamic stakeholder perspective on the relevant relational factors over the evolution of a partnership. The research described in this paper is based on a case study that identifies and examines the relevance and importance of uniquely Chinese factors such as guanxi, renqing and mianzi in the specific context of a strategic partnering relationship. Findings – This phenomenological study provides empirical evidence of critical linkages between the intrinsically Chinese notions of guanxi, mianzi and renqing and the key strategic partnering success factors identified as trust, conflict resolution, commitment and cooperation. The study thereby reinforces the importance of the uniquely Chinese relational context in cross-border JVs. Moreover, the research findings suggest that these factors underpin the dynamic bi-directional stakeholder relationship in a Sino-foreign strategic partnership. Originality/value – This study conceptually links the uniquely Chinese relational factors (guanxi, mianzi and renqing) to key success factors supporting the establishment of a strategic partnership in a Sino-foreign context; moreover, it contributes empirical evidence substantiating the proposed conceptual linkage.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of the transactions whose data were captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in the particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk creating gaps in semantics between the information captured by OLTP systems and the information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on the capture of facts with associated context, the encoding of facts with context into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated context. The paper proposes UBIRQ, a design model to aid the co-design of data and business rule storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases and business rule stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as captured by the respective OLTP system.
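As an illustration of this co-design idea, and not of UBIRQ itself (the abstract does not specify it at code level), the sketch below keeps each captured fact paired with an identifier and version of the business rule that produced it, so that an OLAP-side recall can reproduce the capture context. All names and structures are hypothetical.

```python
# Illustrative sketch only: a captured fact travels with the business rule
# that produced it, so a later (OLAP-side) recall can report the capture
# context. Names and structures are hypothetical, not the UBIRQ model.
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    version: int
    description: str

@dataclass(frozen=True)
class Fact:
    transaction_id: str
    payload: dict
    rule: BusinessRule          # context stored alongside the data

discount_rule = BusinessRule("R-DISC", 3, "10% discount for orders over 100")
fact = Fact("T-0001", {"order_total": 120.0, "discount": 12.0}, discount_rule)

# An analytical query can now recall the rule under which the data were captured:
print(f"{fact.transaction_id}: captured under {fact.rule.rule_id} v{fact.rule.version}")
```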
Abstract:
Company X develops a laboratory information system (LIS) called System Y. The information system has a two-tier database architecture consisting of a production database and a historical database. A database constitutes the backbone of an IS, which makes the design of the database very important. A poorly designed database can cause major problems within an organization. The two databases in System Y are poorly modeled, particularly the historical database. The cause of the poor modeling was unclear concepts. The unclear concepts have remained in the database and in the company organization and caused a general confusion of concepts. The split database architecture itself has evolved into a bottleneck and is the cause of many problems during the development of System Y. Company X is investigating the possibility of integrating the historical database with the production database. The goal of our thesis is to conduct a consequence analysis of such an integration and its effects on System Y, and to create a new design for the integrated database. We will also examine and describe the practical effects of confusion of concepts on a database's conceptual design. To achieve the goal of the thesis, five different method steps have been performed: a preliminary study of the organization, a change analysis, a consequence analysis and an investigation of the conceptual design of the database. These method steps have helped identify the changes necessary for the organization, a new design proposal for an integrated database, the impact of the proposed design and a number of effects of confusion of concepts on the database.
Abstract:
Introduction: Studies have shown that having a preterm infant may cause stress and powerlessness for parents. It is important to support parents around the feeding situation and for the Neonatal Intensive Care Unit (NICU) to have appropriate space and facilities to help the family bond with each other. For healthcare professionals, it is important to promote skin-to-skin contact and breastfeeding, particularly for preterm infants. There are many studies on parents' experiences of NICUs and a few studies on parents' experiences of feeding their infant in the NICU. Objective: The objective of this study was to explore parents' experiences of feeding their infant in the NICU. Design: The study was conducted using an ethnographic design. Results: A global theme of ‘The journey in feeding’ was developed from four organising themes: ‘Ways of infant feeding’; ‘Environmental influences’; ‘Relationships’ and ‘Emotional factors’. These themes illustrate the challenges mothers reported with different methods of feeding. The environment had a big impact on parents' experiences of infant feeding. Some mothers felt that breastfeeding seemed unnatural because their infant was so tiny, but breastfeeding and skin-to-skin contact helped them to bond with their infant. The mothers found it difficult to maintain milk production by pumping alone. The routines did not invite parents to find their own rhythm. They also felt stressed about the weighing. Healthcare professionals had both positive and negative influences on the parents. Conclusions: This study demonstrates that while all parents expressed the wish to breastfeed, their ‘journey in feeding’ was highly influenced by the method of feeding and by environmental, relational and emotional factors. The general focus upon routines and assessing milk intake generated anxiety and reduced relationality. Midwives and neonatal nurses need to ensure that they emphasise and support the relational aspects of parenting and avoid over-emphasising milk intake and the associated progress of the infant.
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues that were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymous technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment that aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework that includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements that were not available in previous approaches: - object-oriented frameworks are extensible by design, so this should also be true of the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation; - the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings; - the control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. This mechanism is generic enough to also be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible. If so, it triggers the change of state of both semantics and view.
Our approach took advantage of this inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency; - to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case of unavailability of a network connection between them; - the use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime; - the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system. All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design. These extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the possibility of integrating multimedia metadata into the design data model. This possibility is explored in the context of an online educational and training platform.
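A minimal sketch of the view/semantics inversion of control described above, written in Python for brevity rather than the Java used in Cave2; class and method names are illustrative, not taken from the thesis.

```python
# Sketch of inversion of control between view and semantics: the view only
# forwards the user's interaction as an event; the semantic model decides
# whether the state change is valid and, if so, refreshes every registered
# view. Names are illustrative, not the Cave2 API.
class SemanticModel:
    def __init__(self):
        self.state = {}
        self.views = []

    def register(self, view):
        self.views.append(view)

    def handle_event(self, event):
        # The semantic model, not the view, owns the decision.
        if self._is_valid(event):
            self.state[event["key"]] = event["value"]
            for view in self.views:          # propagate to all views
                view.refresh(self.state)

    def _is_valid(self, event):
        return event.get("value") is not None

class View:
    def __init__(self, name, model):
        self.name = name
        self.model = model
        model.register(self)

    def user_input(self, key, value):
        # The view packages the interaction as an event and delegates.
        self.model.handle_event({"key": key, "value": value})

    def refresh(self, state):
        print(f"[{self.name}] now shows {state}")

model = SemanticModel()
schematic, layout = View("schematic", model), View("layout", model)
schematic.user_input("gate_count", 42)   # both views are refreshed
```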
Abstract:
This paper presents a technique to share the data stored in an object-oriented database aimed at design environments. The technique shares data between two related databases, called the Original and Product databases, and is composed of three processes: data separation, evolution and integration. Whenever a block of data needs to be shared, it is spread across both databases, resulting in one block in the Original database and another in the Product database, with special links between them controlled by the Object Manager. These blocks do not need to be kept identical during the evolution phase of the sharing process. Six types of links were defined, and by choosing one, the designer controls the evolution and reintegration of the block in both databases. This process uses the composite object concept as the unit of control. The presented concepts can be applied to any data model with support for composite objects.
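The sketch below illustrates, under stated assumptions, how a link between the two copies of a shared block could govern evolution and reintegration. The paper defines six link types that the abstract does not enumerate, so the two policies shown here are invented placeholders, not the paper's actual link types.

```python
# Hypothetical sketch of the sharing mechanism: a block is duplicated into an
# Original copy and a Product copy, and a link policy decides how updates
# propagate and how the copies are reintegrated. The two policies below are
# placeholders; the paper's six link types are not reproduced here.
from enum import Enum

class LinkType(Enum):
    MIRROR = "mirror"            # copies must stay identical
    INDEPENDENT = "independent"  # copies may diverge until reintegration

class SharedBlock:
    def __init__(self, data, link_type):
        self.original = dict(data)   # block kept in the Original database
        self.product = dict(data)    # block kept in the Product database
        self.link_type = link_type

    def update_product(self, key, value):
        self.product[key] = value
        if self.link_type is LinkType.MIRROR:
            self.original[key] = value   # mirrored links propagate immediately

    def reintegrate(self):
        # For independent links, divergence is resolved at reintegration time.
        self.original.update(self.product)
        return self.original

block = SharedBlock({"cell": "nand2", "width": 4}, LinkType.INDEPENDENT)
block.update_product("width", 8)
print(block.reintegrate())   # {'cell': 'nand2', 'width': 8}
```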
Abstract:
Background: The functional and structural characterisation of enzymes that belong to microbial metabolic pathways is very important for structure-based drug design. The main interest in studying shikimate pathway enzymes involves the fact that they are essential for bacteria but do not occur in humans, making them selective targets for the design of drugs that do not directly impact humans. Description: The ShiKimate Pathway DataBase (SKPDB) is a relational database applied to the study of shikimate pathway enzymes in microorganisms and plants. The database is updated regularly with the addition of new data; it currently holds 8902 enzymes of the shikimate pathway from different sources. The database contains extensive information on each enzyme, including detailed descriptions of sequence, references, and structural and functional studies. All files (primary sequence, atomic coordinates and quality scores) are available for downloading. The modeled structures can be viewed using the Jmol program. Conclusions: The SKPDB provides a large number of structural models to be used in docking simulations, virtual screening initiatives and drug design. It is freely accessible at http://lsbzix.rc.unesp.br/skpdb/. © 2010 Arcuri et al; licensee BioMed Central Ltd.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Major depressive disorder (MDD) trials - investigating either non-pharmacological or pharmacological interventions - have shown mixed results. Many reasons explain this heterogeneity, but one that stands out is trial design, owing to specific challenges in the field. We therefore aimed to review the methodology of non-invasive brain stimulation (NIBS) trials and provide a framework to improve clinical trial design. We performed a systematic review of randomized, controlled MDD trials whose intervention was repetitive transcranial magnetic stimulation (rTMS) or transcranial direct current stimulation (tDCS) in MEDLINE and other databases from April 2002 to April 2008. We created an unstructured checklist based on the CONSORT guidelines to extract items such as power analysis, sham method, blinding assessment, allocation concealment, operational criteria used for MDD, definition of refractory depression and primary study hypotheses. Thirty-one studies were included. We found that the main methodological issues can be divided into three groups: (1) issues related to phase II/small trials, (2) issues related to MDD trials and (3) issues specific to NIBS studies. Taken together, they can threaten study validity and lead to inconclusive results. Feasible solutions include: estimating the sample size a priori; measuring the degree of refractoriness of the subjects; specifying the primary hypothesis and statistical tests; controlling predictor variables through stratified randomization or strict eligibility criteria; adjusting the study design to the target population; using adaptive designs; and exploring NIBS efficacy employing biological markers. In conclusion, our study summarizes the main methodological issues of NIBS trials and proposes a number of alternatives to manage them. Copyright (C) 2011 John Wiley & Sons, Ltd.
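As a worked illustration of one recommendation above, a priori sample-size estimation, the sketch below computes the number of participants per arm for a two-arm trial analysed with an independent-samples t-test; the effect size, alpha and power are assumed values, not figures from the reviewed trials.

```python
# Illustrative only: an a priori sample-size estimate of the kind the review
# recommends, for a two-arm parallel trial compared with an independent-
# samples t-test. Effect size, alpha and power below are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5,  # assumed moderate effect (Cohen's d)
                                 alpha=0.05,
                                 power=0.80,
                                 ratio=1.0)        # equal allocation to both arms
print(f"Participants needed per arm: {round(n_per_arm)}")   # about 64 per arm
```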
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. Results: We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also propose a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data as well as biomaterial data were obtained from patients with head and neck tumors. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients' clinical and socio-demographic data. This framework should present several features: flexibility, compression and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
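A minimal sketch of the Entity-Attribute-Value layout the Clinical Module is said to be based on; the table and attribute names below are hypothetical and do not reproduce the actual Chado Clinical Module schema.

```python
# Minimal EAV sketch: clinical observations are stored as (entity, attribute,
# value) rows, so new attributes need no schema change. Table and attribute
# names are hypothetical, not the Chado Clinical Module schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY);
CREATE TABLE clinical_eav (
    patient_id INTEGER REFERENCES patient(patient_id),
    attribute  TEXT,    -- ideally a term from a reference ontology
    value      TEXT
);
""")
conn.execute("INSERT INTO patient VALUES (1)")
conn.executemany(
    "INSERT INTO clinical_eav VALUES (?, ?, ?)",
    [(1, "tumor_site", "larynx"), (1, "smoking_status", "former"), (1, "age", "61")],
)

# Adding a new clinical attribute is just another row -- the flexibility EAV is chosen for.
for row in conn.execute("SELECT attribute, value FROM clinical_eav WHERE patient_id = 1"):
    print(row)
```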
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is that it is independent of the biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other available predictors developed so far. This important result was achieved through a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, with the creation of the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was reported to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, was reported to greatly enhance the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive prediction rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted to be GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe.
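The Balanced SVM itself is a custom modification of the SVM algorithm and is not reproduced here; as a rough, off-the-shelf analogue, the sketch below uses scikit-learn's class_weight="balanced" option to reweight the penalty so that an over-represented class does not dominate the decision, which is the kind of training-set bias the abstract describes. The data are random stand-ins for real sequence-derived features.

```python
# Not the author's Balanced SVM implementation: an analogous off-the-shelf
# mechanism in scikit-learn. class_weight="balanced" rescales each class's
# penalty by the inverse of its frequency, counteracting training-set
# imbalance. Features and labels below are random stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))              # hypothetical feature vectors
y = np.array([0] * 270 + [1] * 30)          # heavily imbalanced labels

plain = SVC(kernel="rbf").fit(X, y)
balanced = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

# With random features the plain SVM tends to favour the majority class,
# while the class-weighted one is pushed to also predict the minority class.
print(np.bincount(plain.predict(X)), np.bincount(balanced.predict(X)))
```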
Abstract:
The curriculum of the Bucknell University Chemical Engineering Department includes a required senior-year capstone course titled Process Engineering, with an emphasis on process design. For the past ten years, library research has been a significant component of the coursework, and students working in teams have met with the librarian throughout the semester to explore the wide variety of information resources required for their project. The assignment has been the same from 1989 to 1999: teams of students are responsible for designing a safe, efficient, and profitable process for the dehydrogenation of ethylbenzene to styrene monomer. A series of written reports on their chosen process design is a significant course outcome. While the assignment and the specific chemical technology have not changed radically in the past decade, the process of research and discovery has evolved considerably. This paper describes the solutions offered in 1989 to meet the information needs of the chemical engineering students at Bucknell University, and the evolution in research brought about by online databases, electronic journals, and the Internet, making the process of discovery a completely different experience in 1999.